Department of Agriculture and Water Resources, March 2018
- Australian Government response to the Senate Committee report – Management of the Murray-Darling Basin [PDF 389KB]
The Murray-Darling Basin Plan (Basin Plan) was signed into law on 22 November 2012 and tabled in the Parliament on 26 November 2012. It passed the Parliament without disallowance in March 2013.
The Senate Rural and Regional Affairs and Transport References Committee presented its final report for the inquiry Management of the Murray-Darling Basin on 13 March 2013. The report made 23 agreed recommendations, with a further four recommendations from Senator Nick Xenophon. The Government’s response to each of these recommendations is set out in this document.
Committee majority recommendations
The committee recommends that the Murray-Darling Basin Authority develop a concise and non-technical explanation of the hydrological modelling and assumptions used to develop the 2750 GL/y return of surface water to the environment, to be made publicly available.
Plain English summaries of the scientific basis of the 2750 GL/y figure are provided in the document ‘Delivering a healthy working Basin: about the draft Basin Plan’ and in fact sheets on hydrological modelling and the Environmentally Sustainable Level of Take (ESLT). All are available on the Murray-Darling Basin Authority (MDBA) website. The MDBA has also provided non-technical explanations of other aspects of Basin Plan modelling, which are publicly available.
The committee recommends that the MDBA specifically include the predicted range of impacts of climate change on water runoff when implementing the relevant risk management strategies under chapter 4 of the Basin Plan.
The predicted 2030 climate scenarios for the Murray-Darling Basin are well within the historic variability of the system, based on the 114-year modelling record used in developing the Basin Plan. The Basin Plan is an effective framework for adapting to climate variability because it will ensure that the Basin state water resource plans to be accredited by 2019 are able to operate over a range of climate scenarios, including wet and dry sequences. Further, the periodic review cycle for the Basin Plan (and state water resource plans) means that new and better estimates of climate impacts on water availability can be factored in when available.
To develop the settings in the Basin Plan, the MDBA collaborated with CSIRO, the then Department of Climate Change and Energy Efficiency, the Bureau of Meteorology and the Victorian Department of Sustainability and Environment in the $16.5 million South Eastern Australian Climate Initiative (SEACI) research partnership. The SEACI findings guide the risk management strategies relating to the implementation of the Environmental Watering Plan, the Water Quality and Salinity Management Plan, water resource planning and the water trading rules under Chapter 4, Section 4.03(3)(a) of the Basin Plan.
The MDBA has also worked with CSIRO to assess the likely impacts of climate variability on water availability across the Basin. Twenty-four river system models developed by various agencies across the Basin have been brought together into an Integrated River Systems Modelling Framework to assess links between future climate scenarios and:
- flows in various parts of the Basin;
- water allocations to different water users;
- water availability in dry and wet years; and
- impacts on key environmental assets.
The Basin Plan requires that reviews under the provisions of Chapter 6 have regard to the management of climate risks and include an up-to-date assessment of those risks.
There is an obligation under Chapter 4 of the Basin Plan to improve knowledge over time of the projected impact of climate on water requirements. An updated assessment of the projected range of impacts of climate variability on water runoff will form part of the work to improve this knowledge.
Under Chapter 10 of the Basin Plan, States must consider the risks to the availability of water from climate variability when developing water resource plans, and set out how water will be managed during extreme dry periods.
Consistent with Recommendation 20, the committee recommends that the government develop a clear research strategy on the future impacts of climate change on water runoff in the Basin. The strategy should also include a process for integrating the results of the research into the adaptive management process under the Basin Plan.
Potential impacts on the Basin’s water resources from climate variability will be taken into account in future reviews of the Basin Plan. As noted in the response to Recommendation 2, reviews of the Basin Plan under Chapter 6 have regard to the management of climate risks and include an up-to-date assessment of those risks, which provides the opportunity to integrate the results of research into the adaptive management process under the Basin Plan.
The requirements under Chapter 4 of the Basin Plan for improved knowledge about the effects of climate variability provide the framework within which this work is to take place.
The committee recommends that the MDBA model a range of possible future interception scenarios and publish the results so that each state can better plan for the impacts of interception on its overall consumptive water allocation.
Agreed in principle.
The Basin Plan took interception activities into account in determining the Environmentally Sustainable Level of Take (ESLT). Thus, for the first time, the estimate of interception is included in the Sustainable Diversion Limit (SDL) for each river valley and in the SDL for the Basin as a whole. In addition, under Section 4.03 of the Basin Plan, the MDBA is required to further improve knowledge of the impact of interception activities and land use changes on Basin water resources.
Under Chapter 10 of the Basin Plan, Basin states are required to manage risks to water resources in water resource plans. In developing water resource plans, the Basin states must monitor and regulate interception activities with a significant impact on water resources. To ensure consistency with the Basin states’ land use policies, modelling of future interception scenarios would be best undertaken by Basin state governments.
The committee recommends that, in undertaking its adaptive management approach to the Basin Plan, the Murray Darling Basin Authority clearly considers, assesses and incorporates all elements that could impact environmental watering requirements. This includes climate change, interception activities, coal seam gas mining, surface-groundwater connectivity and possible negative effects such as over watering caused by increased river flows. This information should be clearly set out in non-technical language and be made publicly available in a timely manner.
The Basin Plan requires that the new Basin state water resource plans, which will be in place by 2019, identify and assess any risk to water resources in the area and include local management arrangements where necessary. It specifically requires these plans to take into account potential climate change, interception activities and surface and groundwater connectivity. The MDBA has developed a plain-English guide on water resource plan requirements called the Handbook for Practitioners, which is available on the MDBA website.
The MDBA sets environmental watering priorities at a Basin-scale annually, taking into account all of the above factors where applicable. This regular review of priorities considers prevailing climatic conditions and, together with the five-yearly review of the Basin-wide Environmental Watering Strategy and the Basin states’ long-term (regional) watering plans, will ensure that new information is incorporated into assessments of environmental watering requirements. These reviews, together with non-technical explanations, are made available on the MDBA website.
The committee recommends that before 2016 the MDBA undertake a thorough review of the groundwater aspects of the Basin Plan including:
- the methodology and the assumptions underpinning the groundwater BDLs and SDLs; and
- the connectivity of all groundwater and surface water resources to ensure that the modelling used in the Basin Plan is scientifically sound.
Chapter 6, Section 6.06(1) states that the MDBA may conduct research and investigations into aspects of the work underpinning SDLs or other aspects of the Basin Plan. The MDBA has undertaken such research.
For example, the MDBA entered into a strategic research partnership with the National Centre for Groundwater Research and Training (NCGRT). A key part of the research is to benchmark the methodology and the assumptions used to determine the Basin Plan groundwater SDLs. The research is also looking at methods to determine the connectivity between surface and groundwater resources at an SDL resource unit scale. This research will inform future reviews of the Basin Plan.
The repealed Sub-section 6.06(6)-(9) of the Basin Plan specified reviews of the Baseline Diversion Limits (BDLs) and SDLs for three groundwater areas in New South Wales and Victoria within two years of commencement of the Basin Plan. All three reviews were completed by expert panels by November 2014. The reviews recommended that an increase in the groundwater SDLs in each area would be acceptable, provided the Basin states embed more stringent local management rules for the groundwater areas in the relevant water resource plans. These changes were made in the Basin Plan Amendment Instrument 2017 (No. 1), which commenced on 14 November 2017. This instrument also repealed the provisions of Sub-section 6.06 requiring the three reviews, as they are spent provisions now that the reviews are complete. On 14 February 2018 this instrument was disallowed by the Australian Senate.

In documenting the methods and assumptions used in determining the groundwater BDLs in The Addendum to the proposed Groundwater Baseline and Sustainable Diversion Limits: methods report (July 2012), the MDBA addressed comments made by an external expert panel involved in the review of the BDL methodology.
The committee also recommends that in conducting this review the MDBA should consult with a range of scientific experts. To ensure reliability, the final review findings should be peer reviewed by the CSIRO. To ensure transparency, the results of the review should be published by the MDBA.
The MDBA has worked with groundwater experts from organisations including the CSIRO and other independent experts in developing groundwater extraction limits for the Basin Plan. A panel of independent experts (which included a CSIRO representative) peer-reviewed the methods and models used to determine the groundwater SDLs, and CSIRO was represented on the panels undertaking the three groundwater reviews mentioned in Recommendation 6. The three review reports are available on the MDBA website.
The committee recommends the MDBA conduct further research into how effective the works and measures programs are for delivering environmental outcomes and the cost effectiveness of such projects in comparison to other forms of water recovery. This research should also include the socio-economic impacts to irrigation communities of increased levels of 'buyback'.
Chapter 7 of the Basin Plan provides for an SDL adjustment mechanism. The mechanism enables the Basin SDL to be adjusted up or down by no more than five per cent as long as social, economic and environmental outcomes are not compromised. SDL adjustments can be achieved through either supply measures (works and measures or changes to river operational rules that achieve equivalent environmental outcomes with less water) or efficiency measures (recovery of additional water for the environment without detrimental social or economic impacts).
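To make the five per cent cap concrete, the short sketch below checks a package of supply and efficiency contributions against the limit. This is a minimal illustration, not the MDBA's method: the Basin-wide surface water SDL figure (10,873 GL/y) and the 70 GL efficiency contribution are assumptions of the sketch, while the 605 GL supply offset echoes the figure discussed later in this response.

```python
# Minimal sketch of the SDL adjustment cap described above.
# Assumptions: the Basin-wide surface water SDL of 10,873 GL/y and the
# example contribution values are illustrative only.

BASIN_SURFACE_SDL_GL = 10_873   # assumed Basin-wide surface water SDL (GL/y)
MAX_ADJUSTMENT = 0.05           # net adjustment may not exceed 5% either way

def net_adjustment_ok(supply_gl: float, efficiency_gl: float) -> bool:
    """Check that the net SDL change stays within the +/-5% cap.

    Supply measures adjust the SDL upward (equivalent environmental
    outcomes achieved with less water); efficiency measures adjust it
    downward (additional water recovered for the environment).
    """
    net = supply_gl - efficiency_gl
    return abs(net) <= MAX_ADJUSTMENT * BASIN_SURFACE_SDL_GL

# A 605 GL supply offset alone would exceed the ~543.65 GL cap, but
# pairing it with an assumed 70 GL of efficiency measures would not.
print(net_adjustment_ok(605, 0))    # False (605 > 543.65)
print(net_adjustment_ok(605, 70))   # True  (535 <= 543.65)
```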
The 2013 Intergovernmental Agreement on Implementing Water Reform in the Murray-Darling Basin agreed between the Australian Government and all Basin state governments contains a protocol that sets out how governments will assess and agree to a package of adjustment measures including constraint, supply and efficiency measures. The Australian Government has made up to $34.5 million available to Basin States to develop business cases for prospective supply measure projects. Agreed supply measures will be funded by the Australian Government up to the market value of environmental water that would otherwise have been recovered from held water entitlements.
The MDBA continues to collect social and economic data to inform its role in evaluating and reviewing the Basin Plan, including through the northern Basin review. The social and economic assessment conducted as part of the northern Basin review is available on the MDBA’s website. The Water Act 2007 and Basin Plan require regular reporting of socio-economic impacts. In December 2017 the MDBA published its first Basin Plan Evaluation, including an assessment of the effects of water recovery at the community scale, with more community-level analysis to be released by April 2018. A second report is due in 2020 and subsequent reports every five years thereafter. Further, as part of the Six Point Agenda for delivering the Basin Plan announced on 25 November 2017, the Australian Government will establish a robust program of monitoring and evaluating the long-term socio-economic outcomes and impacts associated with Commonwealth-funded water recovery programs.
The committee recommends that the MDBA and SEWPaC provide ongoing public updates to Basin stakeholders on progress in securing water savings from environmental works and measures.
The Protocol to the 2013 Intergovernmental Agreement on Implementing Water Reform in the Murray-Darling Basin provides for the Basin Officials Committee and the MDBA to develop and maintain a joint work program identifying all SDL adjustment measures, including supply measure environmental works and measures. The work program is endorsed at each Murray-Darling Basin Ministerial Council meeting and subsequently published on the MDBA website, including details of measures and expected water savings.
Basin state governments are responsible for developing specific SDL adjustment proposals, including any necessary consultation with the community. The MDBA’s role is to assess the final package and recommend to the Minister the amount of any SDL adjustments.
As requested by the Murray-Darling Basin Ministerial Council, the Australian Government passed amendments to the Basin Plan (through the Water Legislation Amendment (Sustainable Diversion Limit Adjustment) Act 2016) in November 2016 to provide for a second notification step by 30 June 2017. This has provided Basin state governments with an opportunity to develop and refine projects that can further improve the outcomes of the Basin Plan while ensuring the continued success of irrigation in the Basin through sound investment in infrastructure whether on or off farm.
A final package of SDL Adjustment Mechanism projects was agreed by the Murray-Darling Basin Ministerial Council on 16 June 2017. The MDBA has proposed an SDL adjustment offset of 605 gigalitres in its Draft Determination Report released on 3 October 2017. An outcome of this magnitude will likely mean that, once all contracted water recovery has been delivered, no further water recovery will be required to bridge the SDL gap in the southern Murray-Darling Basin.
Consistent with Basin Plan requirements, in late 2017 the MDBA consulted with Basin State governments and the public on its proposed SDL adjustment determination. The MDBA then proposed an amendment to the Basin Plan for the SDL Adjustment Mechanism that was subsequently adopted by the Minister for Agriculture and Water Resources. The Basin Plan Amendment (SDL Adjustments) Instrument 2017 commenced in law on 13 January 2018, the day after registration on the Federal Register of Legislation. The instrument was tabled in both Houses of the Federal Parliament on 5 February 2018 for 15 sitting days as a disallowable legislative instrument.
The committee recommends that greater detail on the socio-economic costs and benefits of any proposed constraints removal be presented to affected communities and the public in general. Such information should be publicly updated in a timely manner when changes occur or new information is obtained by the MDBA and SEWPaC.
The MDBA published the Constraints Management Strategy (the Strategy) in November 2013, which evaluates the risks and opportunities associated with addressing constraints. The Strategy is being used to help inform the development of constraints measure proposals by Basin states. The Strategy advises that projects should:
- recognise and respect the property rights of landholders and water entitlement holders;
- not create any new risks to the reliability of entitlements;
- be identified in consultation with affected parties to determine if impacts can be appropriately addressed and mitigated to enable changes to proceed;
- identify and aim to achieve net positive impacts for the community;
- be worked through in a fair and transparent/equitable way; and
- work within the boundaries defined by the Water Act 2007, the Basin Plan and relevant state water access and planning systems.
Basin state governments are responsible for developing proposals for relaxing flow constraints in the Basin. In the process, proponent states will consult with potentially affected landholders and communities in order to better understand local issues of concern and to identify any potential for adverse impacts.
A number of constraints projects have been included by Basin jurisdictions in the notification of ‘supply’ measures to the MDBA for consideration under the SDL adjustment mechanism. As such, these projects have contributed to the SDL offset arising from the operation of the mechanism, thereby enabling more water to remain available for use in irrigated agriculture.
Australian Government funding for constraints measure projects will be limited to those projects where any adverse third party impacts can be addressed to the satisfaction of landholders and communities.
The MDBA produces annual progress reports to Basin Ministers on developments in matters covered by the Strategy. The reports are available on the MDBA website.
The committee recommends that further consultation regarding constraints management and the additional 450 GL/y should remain a high priority for the MDBA and SEWPaC. To ensure consultation is adequately undertaken, the committee recommends that the MDBA and SEWPaC develop and publish a strategy that identifies and provides solutions for previous shortcomings (see chapter seven) in the government's consultation process for developing the Basin Plan.
The Australian Government regards community input as a fundamental component of the development of proposals to address constraints on the management and use of environmental water. As agreed under the 2013 Intergovernmental Agreement on Implementing Water Reform in the Murray-Darling Basin, Basin state governments are the key decision makers for addressing constraints within their jurisdiction, and responsible for associated stakeholder consultation.
Australian Government funding for constraints measure projects will be limited to those projects where any adverse third party impacts can be addressed to the satisfaction of landholders and communities.
The Murray-Darling Basin Ministerial Council commissioned an independent expert analysis on how best to design, target and resource efficiency measure programs to recover 450 gigalitres by 30 June 2024, consistent with the Basin Plan legal requirement to achieve neutral or improved socio-economic outcomes. The study has taken into account information arising from the MDBA’s evaluation of Basin Plan impacts and other relevant information, and will provide Ministers with a comprehensive set of information on the socio-economic impacts of recovering the 450 gigalitres through efficiency measures. This evaluation, supported by other relevant analysis such as studies by State governments, will inform the expert advice on designing efficiency measure projects to mitigate such impacts. The independent expert analysis report was publicly released on 19 January 2018 and is available on the Department of Agriculture and Water Resources website. The Ministerial Council has received a briefing on the report and will consider the pathway for efficiency measures in 2018.
The committee recommends that the government develop a water trading information and support program aimed at helping possible "distressed sellers" understand their financial options and risks relating to water trading.
Agreed in principle.
The water market has significantly evolved in the last 15 years, particularly in the southern Murray-Darling Basin. Water entitlements and water allocations form the bulk of water that is traded in the Basin. Information on both entitlement and allocation water trading is available on State, Territory and Australian Government websites. This information covers the specific rules and associated water trading application forms required to trade water within and between the State and Territory jurisdictions.
The technology that supports water trading is becoming increasingly sophisticated, enabling more diverse trading platforms for buyers and sellers. The market includes forward purchasing of water allocations and carryover products; however, the development of secondary markets is still in its infancy. As these products develop, market participants will be educated about the service through the service delivery organisation.
Current and historical information on water trading is available in a range of publications, such as the Bureau of Meteorology’s ‘National Water Account’ and the ‘Australian Water Markets Report’, which is now published by ABARES. The Australian Government also publishes a quarterly summary of water entitlement market prices prepared by an independent consultant. All reports are available on the Department of Agriculture and Water Resources website.
Information on the average prices paid for water entitlements through past open market tenders conducted by the Department of Agriculture and Water Resources (formerly part of the Department of the Environment) is reported on the Department’s website.
The information on the state registers, Commonwealth department websites and the publications listed above is made available to help irrigators who are considering selling their water entitlements to make an informed decision. In addition, irrigators may seek advice from brokers and agents to gain a local understanding of the state of the water market.
Past performance of the water market is not a reliable indicator of future performance, as the market fluctuates due to several factors, primarily climate. Individuals’ participation in the water market reflects their business model, personal circumstances and risk profile.
Individuals facing financial hardship can access financial counselling services through the Rural Financial Counselling Services (RFCS) program. The purpose of this program is to provide free support to primary producers, fishers and small rural businesses who are suffering financial hardship, and who have no alternative sources of impartial assistance, to manage the challenges of change and adjustment. This support would form part of an overall case management approach to assist the client to become more financially viable.
The committee recommends that the government undertakes explicit auditing and reporting of the extent and impact of sleeper and dozer licences on the Basin Plan.
The committee recommends this audit be publicly released and that updated audit information is incorporated into the MDBA's reporting on the Basin Plan at regular intervals.
Agreed in principle.
The Basin Plan sets sustainable limits on water diversions, regardless of use and behaviour across the range of individual entitlement holders. This includes a requirement for state water resource plans to account for changes over time in the extent to which water allocations are utilised. Auditing and reporting on this issue will take place through implementing the Basin Plan.
Any reports produced as a result of research or investigations that the MDBA conducts under the Basin Plan (Chapter 6.06 and Chapter 13), or at the request of the Ministerial Council, will be published on the MDBA website.
The committee recommends that the MDBA commission an independent review of the possible effects of using a range of assumptions of water entitlements types (e.g. high and low reliability) in the hydrological and socio-economic modelling of the Basin Plan. In the case where the results for certain water entitlement assumptions show that the objectives of the plan will be compromised, the MDBA should develop a policy which will ensure that this arrangement of water entitlements will not be realised.
Agreed in principle.
MDBA modelling and analysis for the Basin Plan was based on an assumption that the SDL reductions would be achieved through the recovery of a representative mix of entitlements in each valley.
The Australian Government agrees that using a range of assumptions in any future modelling, including water availability for consumptive purposes by entitlement type, may help inform any future review of the Basin Plan.
The committee recommends that the Australian National Audit Office (ANAO) review the Nimmie-Caira proposal. To the extent possible and in collaboration with the NSW Audit Office if necessary, the review should amongst other things examine the process undertaken by relevant parties for determining the value of all aspects of the Nimmie-Caira proposal. The review should also examine any factors that may impact on the value for money for the government and the tax-payer of the proposal should it proceed. The ANAO should report on this review prior to the approval of the Nimmie-Caira proposal by the Department of Sustainability, Environment, Water, Population and Communities.
The Australian National Audit Office (ANAO) reviewed the project and the ANAO report on the funding and management of the project was tabled in Parliament (out of session) on 21 April 2015.
The committee recommends that the MDBA update the socio‑economic modelling of the local impacts of the Basin Plan. There should be a strong focus on the communities likely to be most affected by the Basin Plan and strategies should be developed to address the impacts. All such information should be publicly released and presented in a form that is accessible to stakeholders, local community members, and parliamentarians. This modelling should also include tabular or graphical data depicting the location and volumes of buyback on an irrigation district basis.
The MDBA has released the following reports into the impact of Basin water reform at a local level. These reports are available from the MDBA website:
- MDBA 2016, Northern Basin Review. Technical overview of the socioeconomic analysis.
- EBC, RMCG, MJA, EconSearch, Geoff McLeod, Tim Cummins, Guy Roth and David Cornish 2011. Community impacts of the Guide to the proposed Murray-Darling Basin Plan.
- Arche Consulting 2011. Basin case studies: the socio-economic impacts of sustainable diversion limits and Water for the Future investments. An assessment at a local scale.
- ABARE-BRS 2010, Indicators of community vulnerability and adaptive capacity across the Murray-Darling Basin – a focus on irrigation in agriculture. A revised version of this report was released in April 2013.
- ABARE-BRS 2010, Environmentally sustainable diversion limits in the Murray-Darling Basin: Socioeconomic analysis.
The Water Act 2007 and Basin Plan require regular reporting of socio-economic impacts. This occurs in a number of ways including through Basin Plan annual reports and five yearly reports on the socio-economic impacts of the Basin Plan. The MDBA published the first of these reports in December 2017, with community-level analysis to be released by April 2018. A second report is due in 2020 and subsequent reports every five years thereafter.
In relation to public release of water recovery data, monthly updates of water recovery at a catchment level are published on the Department of Agriculture and Water Resources website. The reporting includes monthly updates of environmental water recovery to indicate progress made, by catchment, to bridge the gap to the SDLs contained in the Basin Plan.
The committee recommends that the Government develop a formal process for long-term and integrated engagement with key stakeholders on the implementation of the final Basin Plan.
The MDBA, Commonwealth Environmental Water Holder (CEWH) and Basin state governments have committed in the Basin Plan Implementation Agreement to a collaborative approach to working with the community. This includes efficient, coordinated processes that build on existing Basin State arrangements and recognise long-standing consultative structures and mechanisms.
The Basin Community Committee, Northern Basin Aboriginal Nations and the Murray Lower Darling River Indigenous Nations are key means for the MDBA to engage on Basin Plan implementation. The MDBA is also ensuring peak bodies and regional communities can stay abreast of, and contribute to, issues across the Basin Plan through overarching and technical meetings.
The CEWH is supported by six Local Engagement Officers, who live and work in Basin communities. These officers, in conjunction with state government officials, engage with local communities about how to best use environmental water, including through environmental water advisory groups.

Basin States are also responsible for water resource management in their own areas, including consultation over the development of supply and constraints proposals as discussed previously, and the development of water resource plans for accreditation under the Basin Plan.
The committee recommends that the MDBA provide a clear explanation of how 'localism' is to be implemented under the Basin Plan.
Implementation of the Basin Plan and associated reforms is a cooperative endeavour involving the Australian Government (including the MDBA) and Basin state governments in consultation with the Basin community. The MDBA is committed to ensuring that local communities are engaged in the management of their part of the river system. Such opportunities include input to the northern Basin review, the SDL adjustment mechanism (through Basin states), the constraints management strategy (now via Basin states), the development of Basin annual watering priorities, and future reviews of the Basin Plan. Each year, the MDBA publishes information on how local knowledge and expertise has been applied by respective governments in its annual reports on the effectiveness of the Basin Plan.
In addition, the CEWH has established a ‘good neighbour policy’ which guides the management of Commonwealth environmental water and articulates the approach to localism in environmental water management. This approach involves working with local communities and interested stakeholders to design and implement watering actions and listening in order to understand people’s issues and concerns. The policy focuses on collaboration, transparency and continual improvement.
The MDBA is building on connections with communities across the Basin through partnerships with six local organisations to host Regional Engagement Officers. The Regional Engagement Officers assist the MDBA to engage more effectively with Basin communities.
The Regional Engagement Officers also work collaboratively with the Commonwealth Environmental Water Office to reduce duplication and increase cooperation between government agencies. The six partner organisations are:
- Greater Shepparton City Council – Shepparton, Victoria
- Leeton Shire Council – Leeton, NSW
- RDA Darling Downs and South West Qld – St George, Queensland
- North East Catchment Management Authority – Wodonga, Victoria
- SA MDB Natural Resource Management Board – Murray Bridge, South Australia, and
- Wentworth Shire Council – Wentworth, NSW
The MDBA increased its regional presence and links with Basin communities by opening offices in Toowoomba, Queensland; Albury-Wodonga, on the NSW and Victorian border; and Adelaide, South Australia. Regionally based staff assist the MDBA to improve information exchange with communities, and give communities a better understanding of the MDBA’s work.
The committee recommends that the government develop and publish a detailed policy for agricultural productivity, environmental and water resource R&D in the Murray-Darling Basin. This policy should reflect a greater priority in this area and incorporate the specific research areas identified in recommendations throughout this report.
Agreed in principle.
The Australian Government agrees that research and development (R&D) plays a major role in agricultural productivity and is important for growth and improvement in the profitability and sustainability of Australian agriculture, including irrigated agriculture in the Basin.
The Australian Government has prepared the National Water Use in Agriculture Research, Development and Extension (RD&E) strategy, with an updated version launched in December 2015. The strategy aims, through research and development and extension, to support farm water productivity whilst enhancing environmental and social sustainability.
The national approach to rural R&D was endorsed by all States and the Northern Territory in the National Primary Industries Research, Development and Extension Framework. With the Basin contributing approximately 40 per cent of the national income derived from agricultural production, the Basin regions’ needs are strongly reflected in RD&E priorities and direction.
The Australian Government has been investing in environmental water research through the $10 million Murray-Darling Basin Environmental Water Knowledge and Research Project, which seeks to improve the knowledge available to support the evolving needs of environmental water managers. This project, administered by the Commonwealth Environmental Water Office, will improve understanding of the complex ecological systems in which environmental water is managed and will inform the application of environmental water into the future.
That the Government commission the Australian Bureau of Agricultural and Resource Economics and Sciences to undertake a cost-benefit analysis of potential water-efficient crops (including non-paddy rice) in the Murray-Darling Basin.
Agreed in principle.
The Australian Government continues to work with rural industries to constantly re‑evaluate existing, and identify new R&D priorities.
There is a significant research focus at both Commonwealth and State government levels on water use efficiency, including water efficient crops. Further to this, State government extension officers, private consultants, farm business management advisers and agronomists continue to provide advice on water efficient crops and water efficiency technologies to farmers.
ABARES has produced industry-specific reports for irrigated dairy, wine grapes, cotton, horticulture (excluding wine grapes) and rice. These industry-specific reports contain information on: trends in farm financial performance (e.g. farm cash income, rate of return); proportions of farms trading water (both temporary water and permanent entitlements); trends in water use (including areas irrigated and water application rates by crop type); and use of irrigation technologies (including by crop type). These (and previous) reports also include some discussion of key factors that drive farm business decision-making, and therefore influence changes in the size and types of crop/livestock enterprises, irrigation technologies used and water use. The reports are available on the Department of Agriculture and Water Resources website.
The Australian Government considers it important that R&D priorities are evaluated on an ongoing basis and that future investment directions are industry driven to ensure that limited resources are allocated to best address industry requirements.
The committee recommends that the Government commission research into innovative agricultural soil use and farming practices that will improve agricultural productivity and water efficiency in the Murray-Darling Basin.
The National Soil RD&E Strategy released in 2014 directs the Australian Government’s investment in rural R&D including investment directed towards agricultural soil use research, sustainable farming practices and water use efficiency. This research aims to improve understanding of a number of soil related functions including soil chemical balances that will lead to increased productivity through better soil management and water use efficiency.
The committee recommends that the Government prioritise R&D into water infrastructure to meet the needs of farming communities, agricultural production, and the environmental health of the Murray-Darling Basin.
Agreed in principle.
The Water Use in Agriculture Strategy developed under the National Primary Industries Research, Development and Extension Framework is addressing all levels of water use and water management. It includes a focus on irrigation water infrastructure delivery and management that will complement the Australian Government’s investment through the Sustainable Rural Water Use and Infrastructure Program (SRWUIP) in key rural water use, management and efficiency projects in the Basin.
Additional comments by Senator Xenophon - Recommendations
The MDBA conduct urgent modelling of a number of figures above the 2750 GL/y figure, up to 4000 GL/y. This modelling must be publicly released with both a technical and non-technical explanation and conducted in a timely manner.
Agreed in part.
In 2011 the MDBA modelled the scenarios of 2,800 GL and 3,200 GL with and without the present operating constraints. The results of this modelling were published by the MDBA in The proposed ‘environmentally sustainable level of take’ for surface water of the Murray–Darling Basin: Method and outcomes (MDBA 2011), and Hydrologic modelling of the relaxation of operational constraints in the southern connected system: Methods and results (MDBA 2012), which are both available on the MDBA website.
Urgent modelling be undertaken to establish the comparative efficiencies of irrigation communities in the Murray-Darling Basin to ensure fair treatment of irrigators, particularly with respect to allocating funds for water efficiency projects.
The Australian Government does not believe such modelling is required given the way water efficiency programs have been managed since September 2013.
The SRWUIP is a national program that invests in rural water use, management and efficiency. Water savings achieved through this program contribute substantially towards ‘bridging the gap’ to the SDLs under the Basin Plan and also return water savings to irrigators and regional communities. The focus of irrigation modernisation investment has been on improving the efficiency of off-farm delivery systems and on-farm irrigation systems, and on returning a share of the water savings to the environment. Comparing average efficiency of different regions is not an appropriate guide for where investments should be made. Across the Basin individual farm efficiencies (and delivery system efficiency) can vary significantly within districts and within broader regions.
Programs funded through SRWUIP, such as the On-Farm Irrigation Efficiency Program, are assessed on a competitive grants model basis against merit criteria outlined in the program guidelines to ensure the best applications are selected for funding. The aim is to achieve the greatest gain in efficiencies for the total dollars invested in eligible activities, thereby minimising any potential need for water purchase to ‘bridge the gap’ to the SDLs under the Basin Plan.
A large proportion of SRWUIP funding is committed to State Priority Projects (SPPs) under the 2008 Murray-Darling Basin Intergovernmental Agreement. Whilst the projects that are to be funded under the SPPs are usually determined by the relevant State, the Commonwealth’s investment principles for this funding are that:
- projects must be able to secure a long-term sustainable future for irrigation communities;
- projects must deliver substantial and lasting returns of water to the environment to secure real improvements in river health; and
- projects must be value for money in the context of the first two tests.
Irrigators must receive recognition for their past water efficiencies. In the absence of any prior recognition for past water-saving efforts, the guidelines for the Sustainable Rural Water Use and Infrastructure Program and other similar programs should be amended to allow irrigators to apply for funding for research and development as well as for emerging technologies projects.
The Australian Government is committed to research and development within the irrigation industry and has been involved for a number of years in funding of institutions such as the Cooperative Research Centre (CRC) for Cotton Catchment Communities, and previously the CRC for Sustainable Irrigation Futures. Further, the well-established system of agriculture research funding provided by the rural research and development corporations is a primary source of research and development funds for the sector.
Assessment guidelines for programs funded under the SRWUIP are not restrictive regarding the type of technology proposed for farm level water saving projects. Applicants can propose the technology that is best suited to their business enterprise, be it a well-established technology or one which has recently emerged, provided the proponent is prepared to return a portion of the water savings to the Commonwealth in return for the investment.
The MDBA urgently provide evidence that the current market-based buyback approach will not distort the water and commodity market. In absence of any available evidence, the MDBA conduct urgent modelling on the impact the market-based buyback approach will have on those who have not accessed funds under the Federal Government’s $5.8 billion Sustainable Rural Water Use and Infrastructure Program and other similar programs.
Agreed in part.
The Australian Government’s approach to water recovery in recent years has been to prioritise investment in productivity-enhancing water infrastructure and to cap surface water purchases at 1,500 gigalitres.
The government will review the need for any future water recovery, following the northern Basin review and the operation of the SDL adjustment mechanism.
Elk Management For Montana Landowners
by James E. Knight, MSU Extension Wildlife Specialist (retired)
A Rocky Mountain elk (Cervus elaphus) is an impressive animal. Mature bulls average 700 pounds while cow elk weigh in at about 500 pounds. The majestic antlers of a bull elk can weigh more than 40 pounds. Elk calves are born in late May or June after a gestation period of about 250 days or approximately eight months. A newborn calf weighs almost 30 pounds and is usually a single. The birth of twins occurs less than one percent of the time.
Cow elk can be productive breeders from the ages of 2 to 14, and sometimes beyond this age. Yearling cows do not usually breed, but when they do the calf survival rate is lower than in older cows. A cow hides her newborn calf for 2 to 3 weeks as the calf eats, sleeps and gains strength. During this period, the calf avoids predators by remaining absolutely motionless, even when danger is just a step away. This strategy is quite effective due to the calf’s mottled coloration and its lack of scent. Odors from the birthing area can attract predators so the cow tries to remove them by consuming the placenta and birth membranes, as well as the dirt and vegetation that was soaked by birth fluids. The cow also ingests the urine and feces produced by the calf. After a few weeks, the cow and calf join other mother-child pairs, forming large herds on summer ranges. During this time, bulls are solitary or live in small groups, usually spending their time eating and loafing on windy or breezy knolls or ridges where they can protect their growing antlers and minimize bothersome flies.
In August, antlers finish growing and the bulls begin thrashing them against trees to remove the velvet. The bulls also begin sparring, and by late August dominance is being established. By the time bugling and harem formation begins, a bull’s priority is to keep less dominant bulls away from his harem of cows. The continuous attempts of younger or less dominant bulls to steal cows create a constant preoccupation for the “herd” or dominant bulls. There is no doubt that the “satellite” bulls breed some cows when the dominant bull is away chasing intruders or rounding up drifting cows.
In the Rockies, the peak of the rut, or breeding season, is in early October. Because the calving period is usually spread over approximately 22 days in late spring, it follows that almost all cows are bred within a three-week period around early October. During the rut, cows and calves continue feeding to build their bodies’ condition for the upcoming demands of winter.
By early fall, calves could survive independent of their mothers but they continue to stay with the herd. Although the bull seems to control the herd during the rut, it is an older cow that decides when and where the herd goes to avoid real or perceived danger. By mid-October, most of the rut activity declines and bulls begin to drift off and become solitary. During this period, bulls must regain condition lost during the rut and put on fat reserves for the coming winter. The depth and condition of snow seem to trigger the elk migration to winter areas.
Deep, settled or crusting snow conditions make it difficult for elk to get to grass, and the cows, calves and younger bulls move toward traditional wintering areas. Many older bulls spend the winter in areas where snow is in excess of four feet and at elevations much higher than the cow herds. This is apparently a strategy to avoid predation.
Elk have evolved in the presence of predators, such as wolves, that use a culling strategy to detect the prey easiest to kill. If bulls, weakened by the rigors of the rut, are seen by predators as being less active, slow or lingering, they will become the predators’ preferred targets. Mature bulls use three strategies during winter to avoid predators like wolves and, to a lesser degree, coyotes and mountain lions:
- They stay away from cow herds, which attract predators.
- They avoid areas where predators are likely to be, such as open valleys.
- They favor deep snow and other conditions, which are difficult for and have little attraction to predators.
Indeed, the more condition a bull has lost during the rut, the more it is to his advantage to stay away from larger herds, where predators watch and wait, even if it means depending on already depleted fat reserves in poor foraging situations. As winter snow recedes and spring vegetation greens, elk usually move to higher elevations. In many situations, calving occurs in the upper reaches of winter range. Here, brush and shrubs provide the habitat needed to conceal calves during and after birth.
Limiting factors are influences that determine whether a wildlife population increases, decreases or remains stable. It is important to understand that there is seldom one factor that, by itself, causes a reduction or an increase in an elk population. It is usually the interaction of several factors that determine the fate of a population. For example, predation may seem to be a factor causing an elk population to decline when in fact restricted winter habitat, deep snow or the lack of alternate prey may be what allows predation to have a major impact.
Traditionally, we have looked at the concept of food, water, cover and space as the primary components that determine how suitable a habitat is for wildlife. While this is true, it oversimplifies our understanding of how various factors affect habitat.
Several other factors may not be as important on their own, but when they are combined with the four primary habitat components, the value of the habitat may be immediately enhanced or reduced. For example, other land uses can greatly impact elk use of suitable range.
Although some elk herds have become accustomed to high levels of human activity, elk will generally choose areas with less disturbance by humans. When repeatedly disturbed, elk tend to avoid even quality habitat if they have an alternative. Urban sprawl in prime elk winter ranges has caused elk to overly concentrate on the remaining areas.
Both forage quantity and quality are important limiting factors for elk. The availability of various types of forage under various kinds of conditions is equally important. Elk need forbs, as well as grasses, to achieve a nutritional level that allows them to grow and develop well. If snow cover or ice eliminates the availability of grass, the presence of woody browse becomes very important. An elk herd that does not have woody browse available and is dependent on grass will not do well during a winter with deep snow cover or freeze-thaw conditions.
The availability of browse and grass is especially important to mature bulls wintering in marginal ranges away from main elk herds. The availability of small forest openings, grassy hillsides, exposed knobs and ground shrub or aspen growth will determine the survival of bulls stressed by the rigors of the rut as well as by severe winter conditions. The primary factor that limits the size of an elk herd is the winter food supply. Few landowners have enough land to provide year-round range for elk. This is especially true over a several-year period when drought or extreme winter weather causes elk to use areas beyond their normal range.
By looking at the entire “landscape,” the landowner will better determine which forage needs can be best satisfied on their property and which are available on adjacent lands. Equally important to forage quantity is forage quality. There may be an abundance of grass available, but elk still will not do well if there are no forbs. Just because forage is present on elk range does not mean the forage will provide an adequate ration to maintain elk. There must be enough forage, and it must be of good quality.
Winter weather can be a limiting factor for an elk population even when habitat is good. Severe cold weather and very deep snow can reduce available forage. This is especially true when sudden, extreme snowfalls prevent elk from moving through passes to lower elevations. Weather can also indirectly affect elk by making them more vulnerable to predators. Deep or crusting snow will allow wolves, coyotes or mountain lions to more easily catch elk. A severe winter will also weaken elk, making them less wary, slower and less able to run great distances.
Disease is seldom a population- limiting factor in elk herds, but it can significantly affect small portions of populations. Normally, elk are healthy and able to withstand parasites and disease but occasionally they contract some sicknesses. Lethargy, limping or weakness resulting from some diseases make elk more vulnerable to predation.
- Arthritis in elk is usually caused by bacterial infections or injury. The obvious sign is joint swelling, which is sometimes accompanied by pus. Arthritic joint swelling is common in elk but only occasionally does it lead to a fatal condition in itself.
- Brucellosis is a bacterial disease spread from bison to elk in Montana, Wyoming and Idaho. Fortunately, this disease is currently restricted to the elk herds near Yellowstone National Park and is not present in other parts of the West. Characteristics of brucellosis are abortion and infertility. Brucellosis is contagious and spread by infected animals that shed the bacterial organisms for several years, especially through fetal membranes (including afterbirth), uterine secretions and milk. Brucellosis is likely spread in elk through contact with the aborted fetuses of bison or contact with bison fetal membranes. Other than at the winter feed grounds in Jackson Hole, Wyoming, elk have a low likelihood of coming in contact with aborted fetuses or fetal membranes of other elk because cow elk go off to be by themselves when calving. Bison, however, calve while staying within the herd. This makes contact with bison fetal materials more likely among members of a bison herd. It is interesting to note that in South Dakota’s Wind Cave National Park, bison and elk were infected with brucellosis at a level similar to bison and elk in Yellowstone National Park. However, when brucellosis was eliminated from bison at Wind Cave through a program that removed bison testing positive for the disease, brucellosis disappeared from the elk herd. It is very likely that brucellosis could be eliminated from the northern Yellowstone elk herds if such a program were initiated there. Political and social objections will likely prevent this from happening.
- Another problem elk may encounter is necrotic stomatitis. This produces a variety of diseases in elk, including foot rot. The disease may cause pus-producing pneumonia or abscess formation in almost any body organ or joint cavity. The bacterial organism causes tissue destruction and many infected animals die. The disease is most common in elk living in poor ranges during winter, when wounds in the mouth are caused by abrasive woody vegetation, stems of hay, or seeds of some grasses. Because elk regurgitate and chew their cud, these wounds become infected by bacteria that occur as a normal part of the digestive tract. Most well-nourished elk have antibodies against the organism and will recover.
- Tuberculosis is another serious disease which has been reported in elk living in captivity or under semi-wild conditions. Fortunately, when deer or elk are infected, the disease progresses rapidly and the animal dies. Tuberculosis is probably self-limiting in free-roaming deer and elk.
- Elk, like other ruminants, are susceptible to bluetongue and epizootic hemorrhagic disease (EHD). Both are transmitted by biting midges. These viral diseases only occur in summer and fall. They disappear with a killing frost. Although deadly in deer, there are no reports of widespread mortality in elk herds.
- Chronic wasting disease (CWD) is a rare but possible disease elk may contract.
- Parasites are very common in elk. In healthy animals the infestations are not serious. When elk are weak or suffering from other ailments, the effect of parasites can further weaken the animals. There are two types of parasites: external and internal. Mites, ticks, flies, mosquitoes and lice are examples of external parasites. Internal parasites include the liver fluke (a concern only because it affects whether humans can consume the liver), tapeworms (including one that migrates to the lungs, causing cysts), nematodes and roundworms.
Elk have always had to cope with predation. In general, a healthy elk population can withstand normal predation. Elk predators include bears, wolves and mountain lions. Black and grizzly bears can be a major cause of calf mortality. A three-year study in Idaho used radios to track newborn calves to determine mortality factors. About half of the calves were killed during the first three weeks of the study, mainly by black bears. When the cows and calves rejoined the larger herds, the predation stopped. Once elk pass the calf stage, the major predators are wolves and mountain lions. However, the low density of lions and their preference for deer as prey make them a problem only in localized areas.
Currently, wolves have the potential to be major elk predators. Long ago, elk evolved with wolves and developed predator-avoidance strategies that normally allowed them to co-exist with wolves. However, the unnatural densities of wolves resulting from their reintroduction to Yellowstone National Park are having significant impacts on the elk herd in the Yellowstone area. In some years, elk herds in some drainages of south-central Montana had no calves surviving at the end of winter. Fortunately, when other herds have good survival rates, the overall elk population can be maintained. This example, however, does indicate the kind of pressure wolves can place on elk population recruitment.
When young elk fall to predators at a high rate, the consequences for the population are magnified because those young animals are counted on for reproduction for many years to come. Wolves use a culling strategy when they hunt. They trot around the edges of, and sometimes within, an elk herd, trying to identify the easiest target. If they detect an elk that is somehow different, they will focus on that animal until they catch it or it escapes. The elk they select is usually young, but sometimes an older elk that is prime and healthy is targeted for some known or unknown reason. Wolves injure many elk. They bite and even drag down elk that eventually escape back to the main herd. Later, these sore or limping elk are the ones identified as weak or sick and are targeted by wolves looking to make a kill. The best defenses elk have against wolves are attempting to outrun them, escaping into water and slashing with their forelimbs. After repeated run-ins with wolves, elk eventually exhibit behavior and strategies that make them less vulnerable. These include forming smaller herds and dispersing to less-crowded areas so wolves must travel farther and work harder to locate them. In the future, elk will benefit if wolves regain some of the natural social behaviors they currently lack due to their forced reintroduction.
Although it is unnatural, multiple females are currently breeding within wolf packs. Regaining natural behaviors would limit the number of breeding individuals to one pair within a given pack; it would also increase the wolves' tendency to maintain greater distance between packs, which would reduce the number of wolves in a given area. Until these factors are in place, wolves will have an increasing and unnatural impact on elk in the West. The impact of wolves was demonstrated by the results of a study carried out in Jasper and Banff national parks. The study determined that packs consisting of five to six wolves killed three elk every two weeks, or a little more than one elk per wolf per month. If this rate of predation occurs in the northern Rocky Mountain states, the 650 wolves in the estimated 43 packs in this area kill about 700 elk per month, or about 8,500 elk per year. If wolf numbers exceed five wolves per 1,000 elk, elk reproduction will be suppressed and elk herds will decline.
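Those figures hang together arithmetically. As a quick check (the per-wolf rate is rounded; the five-to-six-wolf pack size is taken from the study above):

3 elk per pack every 2 weeks ≈ 6.5 elk per pack per month
6.5 elk ÷ 5 to 6 wolves ≈ 1.1 to 1.3 elk per wolf per month
650 wolves × roughly 1.1 elk per wolf per month ≈ 700 elk per month
700 elk per month × 12 months ≈ 8,500 elk per year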
Elk depend on their habitat for nourishment and production. The quality of an elk herd directly reflects the quality of its habitat. Competition among animals for scarce food may make it difficult for elk to make use of a high quality habitat. It is important to understand that competition occurs only when the commodity is limited. The mere presence of other animals does not mean competition is occurring; but when other animals, both wild and domestic, are trying to get the same, scarce resource, none of the animals have the benefit of quality habitat.
Elk are more selective feeders than cattle but less selective than deer. Cattle have broad, flat muzzles that allow them to clip large swaths of grass, while deer have pointed muzzles that allow them to pick selected forage. An elk muzzle falls in between. Elk eat grass, but they will select forbs if they are available. Montana studies have determined that elk summer diet is made up of 30 percent grasses and 64 percent forbs. One study of Montana elk showed a summer diet made up of 100 percent forbs. When forbs were not available, elk winter diet consisted of 84 percent grass and 16 percent shrubs, while elk fall diet consisted of 74 percent grass and 26 percent shrubs. On average, 6 to 9 percent of elk diet throughout the year consists of shrubs.
Elk are primarily grazers and secondarily browsers. Unlike most ruminant grazers, such as cattle, elk have nutritional needs that require higher quality food than they can obtain through non-selective grazing on grass or grasslike forage. Forbs are the diet component that best allows elk to address their nutritional needs. Elk are ruminants. They have a four-chambered stomach through which food passes during various stages of digestion. The first chamber, the rumen, contains great quantities of bacteria and protozoa (microflora) that reduce plant materials to usable nutrients. The microflora are very specialized; some break down one plant species while others break down another. These microflora are such specialists that if an elk changes its diet drastically, the new food may not be digested until the population of appropriate microflora builds up. That is why artificially fed elk are sometimes found dead on feed grounds even though their stomachs are full.
Protein is an important nutrient for animals. A lack of protein will negatively affect how cells develop for body maintenance, growth, reproduction and lactation. In ruminants like elk, the microflora in the rumen use nitrogen compounds to create the protein to meet the body’s needs. Crude protein (approximately 6.25 x nitrogen) is normally used to describe the quality of the diet rather than the nitrogen content. Rumen microorganisms (the microflora) must have enough nitrogen to properly digest carbohydrates and fats. Elk need 6 to 7 percent crude protein in their diet for maintenance, 13 to 16 percent for growth and as much as 20 percent to maximize weight gain. Forbs are higher in crude protein than other kinds of forage.
Elk expend energy to digest food, to move, grow and reproduce. They expend more energy during cold temperatures as they try to stay warm. To maintain their body condition, one day's worth of energy must be derived from one day's worth of food. When an elk does not eat enough food, such as during the rut or severe winter weather, most of the energy must come from body fat. Energy expenditure is measured in kilocalories. As an example, one ounce of sugar produces about 100 kilocalories. An elk cow needs 6,035 kilocalories each day during winter, 6,585 kilocalories during spring, 6,850 kilocalories during summer and 6,452 kilocalories during fall. This amount of energy is expended for routine activities, but elk will require more energy during migrations, gestation, lactation and when maintaining body heat during winter. Herbaceous vegetation (grass and forbs) provides more energy than browse and shrubs. Elk can get 1,300 kilocalories per pound (dry weight) from typical mid-July forage. Elk eat about 2 percent of their body weight per day, so an average cow weighing 500 pounds eats nearly 10 pounds of dry feed each day, which provides 13,000 kilocalories.
High energy is needed during gestation and lactation. Gestation requires an additional 800 kilocalories per day during the final days. During lactation, a cow needs an additional 4,000 kilocalories per day to produce milk. During winter there are low reproductive demands, and energy expenditure for activities is at its lowest point. However, energy expended to maintain body temperature is very high. At 32 degrees F, a cow elk loses 5,342 kilocalories to heat loss each day, while a calf loses 2,771 and a bull loses 7,227. At zero degrees F, the expenditure is 6,128 kilocalories for cows, 3,184 for calves and 8,283 for bulls. Standing in zero degrees F with a wind blowing at 14 miles per hour, elk expend nearly twice the energy they would at the same temperature with no wind. During extreme weather, more energy can be expended through activity and heat loss than can be acquired by foraging.
During these times it is more efficient for elk to bed down in shelter and live off their body reserves. It is easy to see the importance of energy in an elk diet. When energy expenditure is greater than energy intake, stored fat must be used. For every 5,000 kilocalories of energy an elk gets from stored fat, one pound of body weight is lost. When fat reserves are depleted, elk lose weight even faster because energy then must come from protein (muscle). Protein contains only 60 percent as much energy as fat. Weight losses of one to one-and-one-half pounds per day are common during winter. High-energy foods, such as perennial herbaceous forage, will best satisfy the energy needs of elk.
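The arithmetic behind those winter losses is straightforward. Assuming, for illustration, a daily shortfall of 5,000 to 7,500 kilocalories (a plausible gap between winter expenditure and restricted winter intake; the shortfall figure itself is not from the studies above):

5,000 kilocalories ÷ 5,000 kilocalories per pound of fat = 1 pound lost per day
7,500 kilocalories ÷ 5,000 kilocalories per pound of fat = 1.5 pounds lost per day

This matches the one to one-and-one-half pounds per day described above.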
Nutritional deficiencies encountered by elk may be traced to energy, nitrogen or minerals, but not usually to vitamins. Because they are ruminants, elk have no need for dietary vitamin C. Vitamin C is produced in tissue. Vitamin E is obtained through consumption of green forage and storage of the vitamin. Vitamin D is available in the body and is activated by the sun. Vitamin A, which is stored in fat, is plentiful through green vegetation in summer and browse in winter. All B complex vitamins and vitamin K are synthesized in the rumen of elk.
How Elk Meet Their Vitamin Requirements
- Vitamin A: Stored in fat and liver. Plentiful in green vegetation and browse.
- Vitamin B: B complex vitamins are synthesized in the rumen by the microbes.
- Vitamin C: Not needed in the diet because it is synthesized in tissue.
- Vitamin D: Present in muscle and fat and activated by the sun.
- Vitamin E: Obtained in green forage and stored in the liver.
- Vitamin K: Synthesized in the rumen by the microbes.
Numerous minerals are necessary for elk to grow, develop and metabolize well. Several of them are discussed below.
- Both phosphorus and calcium are important for strong bones and teeth. Phosphorus is also important for reproduction, red blood cells and transporting nutrients throughout the body. Nutritional problems arise when high calcium levels combine with low phosphorus levels. Phosphorus levels should be about 0.23 percent of the diet. Calcium should be no more than five times the phosphorus level or a phosphorus deficiency can occur. In some parts of the Rocky Mountains, phosphorus may be lacking in range vegetation either seasonally or year round. Analysis of vegetation is required to determine if a phosphorus deficiency exists. Your county extension agent can show you how to do this. Forbs are an important source of calcium and phosphorus in the elk’s diet.
- Elk also need some sodium in their diet. Among other things, sodium affects the regulation of pH and plays a role in the transmission of nerve impulses. Elk may use salt blocks, natural salt licks or drink brackish water to meet their sodium needs. Most vegetation is low in sodium, but elk attraction to salt is usually a non-essential luxury.
- Selenium is often thought to enhance antlers in elk. However, selenium is required at very low dietary levels, and at high levels it can be toxic.
- Other minerals such as potassium, chlorine, magnesium, sulphur, iron, and iodine are very important but found at adequate levels in common range plants. Trace minerals such as copper, cobalt, zinc and manganese are also reportedly found at adequate levels in vegetation.
Elk prefer habitats that are close to water. Studies have shown that 80 percent of elk summer habitat is within .25 to .5 mile of water. Lactating cows are especially dependent on a water source. During winter, elk satisfy their water requirement by consuming snow when open water is not available.
Elk Habitat Requirements
The needs of elk vary with conditions and the seasons. Typically, mountainous terrain with alpine meadows and lush valleys is considered ideal elk habitat. We also know there is excellent elk habitat in the breaks of the Missouri River and in the rolling foothills of many parts of the West. The best way to determine what type of elk habitat may be lacking on your land is to understand how each habitat component is used and then to apply this information to your land. It cannot be over-emphasized that management of elk habitat requires looking beyond your own land. A "landscape" vision is needed to understand what is available and what is needed. In some parts of the Rocky Mountains, elk remain in the same area year round because all their habitat needs are met.
However, long movements often occur to address seasonal needs. In general, ideal elk habitat consists of 40 percent cover and 60 percent forage areas. Elk prefer moderately steep, south-facing slopes during winter because of the warmer temperatures and reduced snow pack. Forest stands interspersed with grassy openings provide food, thermal cover and travel lanes.
Components of Foraging Habitat
Elk diets vary according to the season. Consuming high-energy foods is the best way for elk to store reserves and minimize the need to deplete those reserves. Elk prefer the foraging areas that are most attractive at a given time of year. In winter, this may be a south-facing slope; in summer, it may be shade or a breeze that deters insects. The notion of attractiveness can also include seclusion, protection from wind or a combination of factors.
- Spring forage includes early-greening grasses and forbs that are highly palatable, succulent and nutritionally rich. Elk need a low fiber/high protein diet composed largely of grasses, sedges and early forbs. Green-up occurs first on south- and west-facing slopes, so elk tend to occupy these the most. Elk move to higher elevations following the growth of new, young forage to maximize their nutritional plane. In this way, they can best replenish body reserves and satisfy increased nutritional demands during gestation, lactation and antler growth.
- During summer, elk diet is composed of 60 to 100 percent forbs, if they are available. Preferred forbs include dandelion, geranium, asters, clovers and milkvetches. As forbs dry in late summer, elk utilize more grasses and shrubs.
- Fall begins a period when herbaceous (leafy) vegetation contains reduced protein but is still a good source of energy. Grass averages 73 percent of the fall diet and elk begin to use more shrubs.
- Grasses may make up as much as 84 percent of an elk's diet in winter. Elk do best on winter ranges where herbaceous vegetation is available rather than range that contains a lot of browse. This is, of course, not true if deep snow prevents access to the herbaceous forage. During times of deep snow, elk will seek out herbaceous forage on south- and west-facing slopes and wind-swept ridge tops. These areas often have shallow, dry soils and, although production is limited, forage quality is usually better than on adjacent sites with deeper soils. These plants usually have more protein and are more palatable. Browse plants are used more during winter than any other time of the year. Quaking aspen, mountain maple, serviceberry, ceanothus, chokecherry, red-osier dogwood, mountain mahogany, willow and winterfat are choice browse species. Choice grass species include rough and Idaho fescue, bluebunch and western wheatgrass and Sandberg bluegrass. West of the Continental Divide, browse is the primary winter food; grasses tend to be the primary winter food in areas east of the divide.
Cover is important to elk for security or escape and for protection from extreme weather. Elk use security cover most during calving and periods of disturbance, such as hunting season. Once calves are old enough to move with the herd, elk spend more time in the open. In areas that have significant hunting pressure, elk spend most of their time in, or near, large blocks of escape cover away from roads. A very important type of cover for elk is thermal cover. During summer, it provides shade; during winter, thermal cover reduces heat loss and wind velocities. A cover of dense conifers reduces heat loss, particularly on very cold nights with clear skies.
Before undertaking any enhancement of elk habitat on your land, it is important to determine what kind of habitat is provided in adjacent areas. For example, it would be unnecessary for you to develop forage when ample year-round supplies are available nearby. After studying the habitat needs of elk, comparing those needs to what is available on and near your land and understanding the limitations of your location, you'll be prepared to undertake worthwhile habitat enhancement. If you are at a high elevation where winter snow is excessive, it will do little good to create winter forage because elk migrate to lower elevations.
However, you might consider enhancing fall or spring habitat in order to help elk enter winter in better condition and recover from winter more rapidly. Some landowners will have the opportunity to improve winter elk range. For these people, it is possible to minimize winter deaths and maximize calving success by enhancing the production of high-energy foods. This must be done while ensuring that adequate thermal cover is available. If winter snows are not deep enough to prevent elk from using south- and west-facing slopes or wind-swept ridges, landowners can try to increase herbaceous forage production in these areas. Perennial vegetation is preferred because it cures with a higher protein content and is more dependable than annuals.
Fertilizing with nitrogen and phosphorus will generally increase productivity, protein and other nutrient content and palatability. Unfortunately, it will also reduce fiber content. If fertilizing is practical, it will result in greater quantities of high-energy, high-protein forage such as forbs. Be cautious when using fertilizer because undesirable plants may also benefit.
Prescribed burning can provide many of the benefits of fertilizing, usually with less expense. Burned areas result in increased yields of highly nutritious forage that greens up earlier in the spring, but the area will have reduced cover.
Prescribed livestock grazing is another way to enhance elk habitat. Removing decadent grasses stimulates new growth and allows forbs to establish. The new growth of forage that appears after cattle grazing is more palatable and more nutritious than ungrazed forage. It may be necessary to erect temporary electric fences to force cattle to use old grass stands, if this is practical for your herd. Studies in Montana have shown that old Idaho fescue stands must be 75 percent grazed by cattle for forbs to have increased production and improved palatability.
You will have to observe the vegetation on your land to decide how many years’ rest from grazing is necessary to maintain the benefits. Intensive grazing followed by one or two years rest seems to provide the best results for elk forage. The purpose of prescribed grazing is to set the grass stand back to an earlier stage of development, in a manner similar to what the great herds of bison did long ago.
Another way for landowners to impact the elk on their land is by harvesting timber. Elk habitat can be enhanced by including aspen management in the timber plan. Young to middle-aged stands of aspen interspersed with grassy openings and conifer clumps provide excellent foraging, loafing and thermal cover for summer, fall and early winter. Sprouting and regrowth of aspen can be stimulated by clear-cutting, bulldozing or burning five- to 20-acre stands of existing aspen. Look at the natural boundaries of the aspen clone to develop irregular edges rather than squares or rectangles. Do this on a 20- to 30-year cycle to provide continuous availability of this habitat type.
If you have several aspen stands, treat them in rotation to make sure that early growth stages are always available. Landowners should also consider clear-cutting patches to stimulate the growth of brush species and to create permanent forest openings for grass and forb production. If you have bull elk wintering on your property, you can enhance their survival by creating numerous half-acre grassy openings within dense timber stands.
Planting Food Plots
Food plots are seldom a practical way to enhance elk habitat. The large herds they may attract will quickly deplete the crop, and a lack of natural forage growing in the area could lead to winter hardship. Mowing, flailing, burning or grazing existing vegetation and then fertilizing is a more realistic way to enhance large areas. A phosphorus fertilizer will favor the forb component over the grass component of the stand. Be cautious when using fertilizer because undesirable plants may also benefit.
Chaining (dragging a large anchor chain between two tractors) or bulldozing sagebrush or other shrubs that have become too plentiful can create openings where grass and forbs can grow. Openings should be five to 20 acres and irregular in shape. If you plan to reseed, use a grass-forb mixture. Clovers, alfalfa, small burnet and orchard grass are examples. Check with your local county extension agent for recommended varieties and seed sources in your area. Using herbicides can also create grassy openings in sagebrush or other dense shrub communities. Be sure to use herbicides that will not impact the forb component of the forage. Again, five- to 20-acre openings with irregular shapes are ideal. Check with your county extension agent for recommendations on herbicides, rates, timing and follow-up treatments.
When we talk about collections, we usually think about the List, Map, and Set data structures and their common implementations. For Java's HashMap, the operations that matter most are put(), get(), remove(), and containsKey(). Each is usually O(1): with a decent hash function, a key maps straight to its bucket, and the hash itself is computed in constant time. That said, in the worst case, Java takes O(n) time for searching, insertion, and deletion.

The worst case comes from collisions, that is, distinct keys landing in the same bucket. HashMap resolves collisions by chaining: all entries that hash to one bucket are stored together. Before Java 8, a bucket was a linked list, so if every key collided, a lookup had to walk the entire list, which is O(n). Since Java 8, a bucket that accumulates more than eight entries (in a sufficiently large table) is converted into a balanced red-black tree, capping the worst-case cost of get, put, and remove at O(log n).
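A minimal sketch of those core operations (the keys and values are invented for the example; the comments give the average-case costs discussed above):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapBasics {
    public static void main(String[] args) {
        // Default initial capacity is 16 buckets, default load factor 0.75.
        Map<String, Integer> map = new HashMap<>();

        map.put("Key1", 1);   // O(1) on average
        map.put("Key2", 2);
        map.put("Key3", 3);

        System.out.println(map.get("Key2"));         // O(1) on average -> 2
        System.out.println(map.containsKey("Key3")); // O(1) on average -> true

        map.remove("Key1");                          // O(1) on average

        // size() is O(1): HashMap maintains an internal size field.
        System.out.println(map.size());              // 2

        map.clear();                                 // removes all mappings
        System.out.println(map.isEmpty());           // true
    }
}
```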
Two refinements to that picture. First, hashing a key is only constant time if hashCode() itself is; for a key such as a String of length k, computing the hash costs O(k), so a HashMap<K, V> lookup is O(k) amortised, and O(k + log n) in the worst case on Java 8. Second, capacity matters. A HashMap starts with 16 buckets and a load factor of 0.75; when the number of entries exceeds capacity times load factor, the map grows by doubling its bucket array and rehashing every entry. A single put that triggers a resize is O(n), but spread across all insertions the amortised cost per put stays O(1). Because the map maintains an internal size field, size() is O(1).

In summary, the time complexity to store and retrieve data from a HashMap is O(1) in the best case; it can be O(n) in the worst case, and after the changes made in Java 8 the worst case is at most O(log n). (C++'s std::unordered_map behaves similarly in the best case, with O(1) searches, but retains O(n) worst-case chains.) All of this assumes a reasonable hashCode; a pathological one funnels every entry into a single bucket.
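A deliberately broken key type makes that degradation concrete. This is a contrived sketch: BadKey is invented for the illustration, and it implements Comparable so that Java 8's tree bins can actually order the colliding keys:

```java
import java.util.HashMap;
import java.util.Map;

public class BadHashDemo {
    // Every instance reports the same hash, so every entry collides.
    static final class BadKey implements Comparable<BadKey> {
        final int id;
        BadKey(int id) { this.id = id; }

        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }

        // Constant hash: lookups degrade from O(1) average to
        // O(log n) on Java 8+ (and O(n) on older JVMs).
        @Override public int hashCode() { return 42; }

        @Override public int compareTo(BadKey other) {
            return Integer.compare(id, other.id);
        }
    }

    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            map.put(new BadKey(i), i); // all 100,000 entries share one bucket
        }
        // Still correct, just slower than a well-hashed key.
        System.out.println(map.get(new BadKey(99_999))); // 99999
    }
}
```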
The hashCode() and equals() methods have a major role in how HashMap works internally in Java, because each and every operation the map provides uses these methods to produce its results: hashCode() selects the bucket, and equals() picks out the right entry within it. This is also why immutable keys are the safe choice. If you change a field of a key object after insertion and that field feeds into hashCode(), the entry is effectively lost, because subsequent lookups probe the wrong bucket.

A few more characteristics are worth knowing. HashMap allows one null key and multiple null values, guarantees no iteration order, and is not thread-safe; the legacy Hashtable, by contrast, synchronizes every method and permits no null keys or values. HashMap, TreeMap and LinkedHashMap all implement the java.util.Map interface, but with different trade-offs: TreeMap keeps keys sorted and performs get/put/remove in O(log n), while LinkedHashMap preserves insertion order at the same O(1) average cost as HashMap. The putAll() method copies all of the mappings from one map into another; values() returns a live view of the stored values rather than a copy; and iterating over a HashMap takes time proportional to its capacity plus its size, so an oversized initial capacity slows iteration down.

Constant-time lookups are what make HashMap so effective in practice. The classic illustration is the Two Sum problem: given an array and a target, find the indices of two elements that sum to the target. Comparing every pair costs O(n²); with a HashMap the steps are simple. Initialize an empty HashMap, walk the array once, and for each element check whether its complement (target minus the element) has been seen before, recording each element and its index as you go. That solves the problem in O(n) time.
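A minimal sketch of that approach, using the sample input 2 7 11 15 with target 9 (expected output: indices 0 and 1):

```java
import java.util.HashMap;
import java.util.Map;

public class TwoSum {
    // Returns the indices of two numbers that add up to target, or null if none do.
    static int[] twoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>(); // value -> index
        for (int i = 0; i < nums.length; i++) {
            Integer j = seen.get(target - nums[i]); // O(1) average lookup
            if (j != null) {
                return new int[] { j, i };
            }
            seen.put(nums[i], i); // O(1) average insert
        }
        return null;
    }

    public static void main(String[] args) {
        int[] result = twoSum(new int[] { 2, 7, 11, 15 }, 9);
        System.out.println(result[0] + " " + result[1]); // 0 1
    }
}
```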
He’d always been a trouble maker. Wild. Ungovernable. Unpredictable. Complicated in that his power was great, yet wasted on frivolity, on song and dance and revelry. His home was tangled and overgrown, rank with weeds and shoots. His friends half-beast. Dionysus was chaos incarnate - a symbol of confusion and complexity.
To the ancient Greeks, Apollo was a more fitting god. Plato and Socrates, particularly, were fans. A god of order, symmetry, division and compartmentalization. A symbol of organization and reason.
Nietzsche would write about this contrasting duality, using an analysis of Dionysus and Apollo and the way they defined Greek tragedy to delve into ideas of order versus disorder, reason versus chaos. The world of defined ideas versus the abyss.
Many cultural concepts and institutions - art, medicine, government, philosophy, and economics - can be traced back to the ancient Greeks. Not only that, but conceptual frameworks, such as ideas of order and disorder, also have roots in the classical tradition. In particular, one of the clearest institutions that rest upon the Grecian traditions is education. And within education - both what was inherited from the Greeks and many of the structures developed since ancient times - there is a prevailing emphasis on rigor, order, delineation, and compartmentalization: the stuff of Apollo's rule.
Apollo and Dionysus are helpful allegorical characters to use to help unravel the messy knot of how humans think. Using these two symbolic characters as a springboard, we can ask big questions that bear on education. What is the mind? What is consciousness? And how is it that we can be self aware, imagine other worlds, and use our intellect to wield tools, communicate, and build complex societies?
How, exactly, do we learn?
For much of western history since the ancient Greeks, one of the central ideas underpinning much of philosophy has been that there is a separation between mind and body - that reason and rational thinking exist as abstract universals. Descartes is the one this faulty dualistic view is pinned on and rightly so - it was the Cartesian duality of mind/body that fueled many of the ideas of the Enlightenment. It’s only been over the past century or so that this Cartesian worldview of the disembodied mind has been largely overturned. Especially in the past few decades, a much more complicated and interesting picture of how the mind works in conjunction with the body has arisen through empirical findings in cognitive science.
This new way of looking at human consciousness and behavior - the principles that unify our mind and body - is called embodied cognition.
Put simply, embodied cognition is the conglomeration of a number of fields of study that posit that thought - including abstract ideas, reason, and rational analysis - is not a disembodied universal but rather deeply embedded in our very flesh and bone. Embodied cognition research supports the idea that feelings and emotions are not obstacles to the process of rational thought, but part of it, inextricably intertwined.
In a way, this can feel self-evident. For example, we often make decisions by 'going with our gut.' This embodied metaphor is a way of obliquely addressing the body's role in making complex choices. And the reality is that all of our decisions are 'gut' decisions - they are formed, informed, and carried out by the body and mind in tandem. Particularly in education, decision making of this nature becomes the daily act of learning. By teasing out the ramifications of embodied cognition, it becomes apparent that the way education is structured, and the way we think about teaching and learning, is deeply affected by this emerging science.
Bodies intends to outline a practical approach to thinking about the relationship between our embodied experience and the inaccurate mind/body dualistic worldview. Take education for example. What we do on a daily basis, how we interact with our students and professors, the very buildings and spaces we teach in - all of these components of education can be addressed through recent findings in cognitive science and evaluated in terms of their relationship to embodied cognition.
However, in order to unravel the tangled story of why education is structured the way it is, why teachers teach the way they do, and what our conception of knowledge is and how that affects our ability to learn, we need to look at the historical and conceptual underpinnings that frame those daily interactions. And it’s within this intersection between what we do as educators and students and the ideological history of education that we find the powerful undercurrent of the Apollonian ideal, the Dionysian opposite, and the real work that is to be done to take the hypothetical implications of an educational approach rooted in embodied cognition to the fore.
While embodied cognition is a relatively concise concept - thinking occurs in the body as well as the brain, and ‘mind’ is really just a word we use to describe the interactions between body, brain, and environment - it is also an incredibly diverse chunk of research with many rich veins that bear on education and how we learn. As a way of offering a few preliminary examples of the sorts of ways embodied cognition could affect our schools, we’ll touch on a few bits of research that have the potential to have direct, practical impacts on the way we think about schools, teaching, and learning.
Where You’re At
Proprioception. Most of us don’t have to think about where are bodies are. We just know. Writing this in the local Barnes & Noble near my home, I just know that my legs are under the table, my left arm braced against the table top, my head canted towards the legal pad upon which I write. I know my spine is bent - “slumped” is probably a more accurate term. I know this because I can see my appendages. But if I close my eyes, I still know it. I know where I am in space, and where my limbs are in relation to each other. I can scratch my beard with my eyes closed without fear of accidentally scratching my armpit instead. This seems simple to the point of inanity - of course you know where your limbs are. However, there are rare cases of people losing their sense of proprioception; losing the ability to know their orientation is space and the placement of their body in relation to itself. But for most of us, it’s such ‘second nature’ that it’s a given. Almost all of us have this ability to perceive our body in space, but how do we do it? It’s not conscious, or something you think about. It’s not abstract, rational, or reasoned thought. You just know. This is one single aspect of embodied cognition - the way this ability is nothing short of amazing and not separate from higher levels of thought or executive functions like decision making, but actually part of those cognitive processes. The only time we get even a taste of how crucial the sense of proprioception is is when our limbs fall asleep on us. Not just a tingly foot at the movies, but those times when we sleep on an arm funny and we wake up with the absence of a limb, and we have to lift up the limp, sensationless arm, and experience the totally weird and freaky nothingness of a part of our body until the blood flows again and our arm electrifies itself back to life. That bizarre moment where we can’t feel our arm is just the tiniest taste of what a full, systemic loss of proprioception might entail (I say might because I’m completely in the land of conjecture here, never having experienced it myself) and strongly suggests how vital this sense is.
How You’re Feeling
Interoception. Whereas proprioception allows us to understand our body's orientation in our environment, interoception is our sense of what's going on inside us. When people ask a sick person, 'how're you feeling?' they're calling, in part, on the individual's ability to express interoceptive capacity. During a class I teach on our body's response to external stimuli in the form of stress, I use a tried and true meditation technique usually referred to as a 'body scan.' A body scan prompts individuals to 'check in' with various parts of their bodies and see how they're feeling. Twisted? Sore? Out of joint, heavy, or imbalanced? Bilious? Achy? Interoception is our body's ability, through a complicated communication between multiple players like our nervous, digestive, vestibular, and other systems, to assess where we're at on the inside. Of the many movement-oriented traditions that engage our interoceptive (and proprioceptive) senses, tai chi and yoga are excellent examples. Again, here we have something that, on the surface, seems bullet-proof simple. Of course you know when your internal viscera is out of whack - you feel crappy. But how do you "know" this? It isn't intellectual knowing, or rational understanding. You "know" it in an embodied way. Your interlocking systems know it together - your brain is just one of many switchboards, a communicative hub among many. When we begin to develop our interoceptive abilities, one of the things we quickly realize is that the amount of sleep we get; our blood sugar; interactions we've had with other people in the recent past; the time of day; the season; and whether we're indoors or outdoors (among many other factors) affect us deeply (this doesn't stop me from chowing Ben & Jerry's while watching Game of Thrones past midnight, however).
Dance Dance Revolution
Corporoception. I devised this term to describe a specific kind of proprioception - the embodied understanding of our body's relationship to other bodies. Looking through the research, it seems clear that this concept is one that is empirically testable. Corporoception is our ability to perceive our configuration relative to the orientation of other bodies around us. This is where fascinating research in proxemics comes into play. A part of how close and in what arrangement we place our bodies with others is dependent on cultural norms and traditions but also on corporoception, which is our body's sense of other bodies. The most obvious example is dancers who are so 'in tune' with one another that their moves seem harmonized beyond human conception. This facet of embodied cognition becomes particularly important to consider when we hypothesize what the effects might be of an entire generation of children who grow up digitally playing video games rather than physically wrestling and tumbling with other bodies.
The Embodied Self
It is not that we need to develop both the body and the mind. We need to do both because they are the same thing, interwoven parts of one system. There's nothing particularly earth-shattering about this - Waldorf and Montessori schools, to name just two, have valued 'hands-on learning' for decades, and one of the dons of educational philosophy, John Dewey, was a proponent of just this kind of experiential learning. What has changed is our understanding of the empirical evidence behind the notion of learning through the body.
Research from Andrew Wilson and Sabrina Golunka from Leeds Metropolitan University in the UK puts it thus: “Our behavior emerges from the real time interplay of task-specific resources distributed across the brain, body, and environment, coupled together via our perceptual systems.” This suggests that while we assume that something like ‘decision making’ is a purely mind-related concept, in actuality making judgements about complicated intellectual and moral problems are embodied, and the interconnected systems of our bodies replace the need for “complex internal mental representations.”
How do we have a sense of Self - an understanding of our individuality in the world? This is a complex philosophical question. Descartes answered by suggesting that, because he could think, he existed. Other essayists, psychologists, and scientists have tried to define this elusive idea of ‘selfhood’ and ‘the mind’ discover it’s roots. But if we explore the idea of self from an embodied cognition perspective, the picture snaps into focus. Olivier Gapenne from the Université de Technologie de Compiegne in France writes that proprioception, that felt sense of the orientation of the body in relation to the environment, is a key to understanding how we conceive of ‘Self;’
“When one wishes to account for the constitution of the distinction between the self and the world there is a necessity for the acting agent to make a distinction between two sources of variation in the sensory signals that affect it: those that are related to its own activity, and those that arise from the environment.”
This quote suggests that our concept of self, the very way we identify as an individual, and the reason we assume any personal meaning at all is due to our bodies orientation in space and the fact that it knows it’s orientation in space.
Another way to think about this is to consider that cognition isn’t something occurring in some imaginary space that doesn’t exist. ‘She’s in her head’ is a phrase we use to describe people experiencing deep thought or reverie, but it’s inaccurate in the extreme. Not only are our thoughts affected by our embodied experiences, they may be defined by our experiences as well.
How did embodied cognition come about? Is it a recent evolutionary development or something hardwired? A researcher named Aaron Stutz believes that the development of a cognition that is embodied and human’s ability to think abstractly were linked through the same evolutionary circumstances:
“Human capacities for symbolic mental representation, symbolic communication, and social cooperation emerged over the past ca. 5-7 million years through dynamic co-evolution with embodied cognition and environmental interactions.”
The impact these new developments within cognitive science could have on education is profound. For the past few hundred years, the very idea of learning has been bound up in the Cartesian duality of mind/body. Learning has been seen as an activity of the mind; the strengthening of some abstract understanding of reason and purely rational process. But through embodied cognition, this concept gets turned on its head. Rather than the Apollonian distinction of pure reason versus the more tangled, corporeal stuff of the body, cognitive science is forcing educators to rethink the way teaching works.
The way embodied cognition interacts with our definition and experience of education and learning can be roughly broken into a handful of categories; movement, feelings, thinking, meaning, and reason. What follows is a brief introduction to each of these concepts with the aim of looking at how these aspects play a role in both traditional educational practices as well as learning based in embodied cognition.
I like to move it, move it
Guy Claxton has written widely about education. In a recent book titled Intelligence in the Flesh, Claxton distills the way embodied cognition impacts education in the following statement; “we are fundamentally built for action.” And by action, what we can infer is that the way traditional school keeps students sedentary, indoors, and inhibits movement, may be diametrically opposed to the way cognition and learning work.
The cultural divide between physically complex and sophisticated abilities and intellectually complicated tasks is vast. The idea of ‘the dumb jock’ is pervasive in entertainment and education. But the reality is far more nuanced. It is not that skater kids who can bust a switch kickflip down a ten stair are less intelligent than the studious AP calculus pupil, it’s simply that we’ve engineered a society that favors non-physical expressions of cognition - something that Claxton calls “the hegemony of intellect.” Movement is so central to learning and cognition that the two are really the same thing; it’s not that learning has the opportunity to occur through movement, it’s the idea that learning is movement. As Maxine Sheets-Johnson puts it; “we literally discover ourselves in movement.” In fact, not only can we come to know ourselves, or even our ‘Self’ through movement, but moving our bodies is inextricably tied to understanding complex ideas. Philosopher Mark Johnson puts it this way; “what we call abstract concepts are defined by systematic mappings from body-based, sensorimotor source domains onto abstract target domains.” Or, put more simply, we understand complex ideas better through movement.
Reasonable People Can Disagree
Logic and reason are the twin pillars that hold up much of western philosophy. From the ancient world of the Greeks and Romans through the enlightenment, reason is often thought of as human’s finest achievement, the very thing that makes us human. Reason was predominantly seen in the abstract, meaning it was viewed as as set of universal laws, governing some clockwork-like logic upon which the axis of existence turned. Reason was seen much the way go is often viewed in the Judeo-Christian world - as the structural organizing principle of the universe.
Turns out that cognitive science has been able to take reason out of the airy halls of abstracted philosophy and embody it in our lumpy, hirsute, imperfect forms. Mark Johnson - mentioned earlier - wrote a book with another philosopher named George Lakoff called Philosophy in the Flesh that provides convincing evidence of this concept of our faculty for reason being bound up in our bodies, from the roots of our hair to our toes. The authors claim that reason “is shaped crucially by the peculiarities of our human bodies” as well as the “specifics of our everyday functioning in the world.” Reason doesn’t exist on some mathematical, astral plane: it’s in our very bones. We reason through and with our bodies, not in spite of them. This is a key shift, particularly for teachers. In the most basic sense, if we are not engaging students physically, in their own bodies, and stretching and strengthening and engaging them with the world around them, we are robbing them of the chance to fully use and explore the capacities of their embodied reasoning. We are limiting their learning.
The Embodied Classroom
Rather than just theorize, let’s jump into how embodied cognition might look if it was incorporated into the daily life of an elementary school student. But to do so, we need to first comprehend the way embodiment informs the way we think.
Johnson and Lakoff - the philosophers mentioned earlier - argue that all reason and abstract thinking are metaphorical, and that most metaphors are embodied, meaning based in our physical selves and our relation to the world around us.
Think, for a moment, of how I started this particular section. I wrote that we could “jump into” ideas about embodied cognition. Our very understanding and conception of beginning to learn is best described in an embodied metaphor - it is, after all, our bodies that jump, and our real physical selves that “get into” things, places, and ideas. This is also about movement. It’s not by accident that we say let’s “jump” into ideas rather than let’s “lay on top of” an idea or “sit” on an idea. We define learning with bodily and exploratory metaphor. The way we perceive progress in learning (even progress can be seen as an embodied metaphor) is as movement towards something.
Back to the classroom, where students are studying math. The Romans took the most literal and direct route possible in mathematics by breaking all numbers down into blocks of ten - something we still do today. The speedometer of your car, for instance, probably reads 10, 20, 30, 40 miles per hour and so on. And the reason, as I’m sure you know, is simple: most of us have ten fingers.
Even in high school, when students will start studying higher mathematics like Algebra, embodiment is woven through everything. The Cartesian grid of X and Y axis is a directional equation; add a Z axis and we’re talking about three dimensional space, or proprioception. And even though we’ve entered the realm of higher math we’re right back where we started - in our bodies.
The teacher in our embodied classroom begins the day with Tai Chi. Students stand, and the teacher leads them through the fluid motions, working with them to focus and hone their proprioceptive and interoceptive senses; fine-tune their vestibular systems. The classroom is filled with the teacher's voice, and the students will, at first, giggle and laugh and struggle and resist the demands of graceful, balanced movement. But that’s the way they respond to any new learning construct.
Asian cultures seem to understand the inextricable nature of embodied cognition in some intrinsic way the western world - dominated by ideas from the Enlightenment - does not. Aikido, Tai Chi, Zen meditation, Kung Fu, Yoga - all of these traditions have a similar central tenet of being in the body, through movement or breathing, and locating an actual physical point or location within the body; one’s chi or ki, where ‘centeredness’ exists. One wonders what the benefit would be for those students that seem like living pinballs in finding that calm, solid, strong place within themselves from which to act, rather than always reacting to their environment.
The students take forever - maybe days, maybe weeks - to finally get into the swing of things with the Tai Chi or yoga or whatever movement oriented embodied experience the teacher has introduced. But finally they do, and it becomes a part of the routine; focusing on being within their own bodies, honing their sense of embodiment. The usual rationale for such exercises or activities is simply health - movement is good for circulation. Gets the ‘yayas’ out. But it’s actually functioning on a much deeper neurophysical level than that.
Because functional thought is largely metaphoric, and metaphors are mostly embodied, the only way to really understand conceptual ideas is to be in the body - to fully inhabit one’s physical self.
It’s not only that Tai Chi has health benefits and a healthy person can learn better - though that’s true and convincing enough as a reason to pursue an embodied cognition approach to learning - but rather that Tai Chi incorporates fundamental concepts of proprioception and interoception which, in turn, allow for a deeper, more ‘real’ comprehension of abstract thought.
The students finish their Tai Chi and move on to language arts, fresh from their martial art. Now, they are innately in tune with their sensorimotor experience. Sensorimotor experiences are simply the lived relationship between our somatic senses and our bodies movement through the world. In the ensuing lesson, the students achievement is higher because they are now attuned to their bodies, which is the way they process cognitively. The body and brain are both engaged in learning now. In fact, as the teacher introduces complex sentences, or punctuation, or how to create a narrative, the very work of the students in processing language is done from a sensorimotor perspective.
Let’s assume that adjectives are the lesson of the day. Now, if a student had never heard or spelled the word ‘squishy,’ the bodily experience of holding something squishy while learning the word has been shown to increase retention of the word in the students vocabulary as well as the likelihood that they’ll use the word in their own writing. Or, in another scenario that has been proven effective, children who learn a story and then get to act it out retain and comprehend the information more than those who simply read and reread the story. Embodying principles - whether it’s a simple children’s story of the negotiations at the UN - allow students to embody their learning and this embodiment fosters learning.
There are multiple ways embodied cognition can manifest itself. For instance, research has shown that children who stick their tongue out during the processing of affective concepts were able to do so more quickly. How we engage our bodies - down to our very tongues - matters. By and large, these lessons have been ignored by schools, and even the medical community has at times disregarded the importance of embodied experience and movement - to the point where those phenomena are pathologized. There is now “restless leg syndrome,” which is the antsy jostling of the knee, foot, or leg, happens in almost an unconscious way. Perhaps this is simply students trying to filter learning through their very bodies - trying to rhythmically bounce knowledge through their very being.
Apollo is order, Dionysus chaos. It’s a helpful story to frame ideas about learning, but ultimately it’s a simplification. Apollo may have been the god of order, but his temple at Delphi was placed over a steaming volcanic rift in the earth - order masking the chaos within. And while Dionysus may have been disorder incarnate, the god of wine and revelry names music and harmony and cultivation as attributes as well - all of which depend on some form of order.
But while the ancient Greeks may have appreciated the polyphonic complexity of the two gods - and certainly their respective cults are indicative of a robust and varied understanding of the symbolic importance of these deities - the way these two gods inform institutions today is much more binary. Apollo and the organization-brigade have ruled education for a long time, and Descartes’ separation of mind and body echo through to the current day. With advances in cognitive science, and the new understanding brought about by embodied cognition, it’s time to revisit the wild lessons of Dionysus. | <urn:uuid:78e587ef-f288-4579-86d2-b5a43041c8cf> | CC-MAIN-2021-21 | https://scalar.usc.edu/works/bodies/cartesian-mind-body-dualism.10 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988802.93/warc/CC-MAIN-20210507181103-20210507211103-00457.warc.gz | en | 0.960157 | 5,416 | 3.21875 | 3 |
Jørgen Wadum is chief conservator of the Royal Cabinet of Paintings Mauritshuis in The Hague. Through both his practice as conservator and his writings, Mr. Wadum has made significant contributions to our understanding of Vermeer's art. He headed the restoration project of the Girl with a Pearl Earring and the View of Delft and personally restored the Girl with a Pearl Earring which happily can now be appreciated close to its original splendor which had remained hidden since the painting's rediscovery in 1881.
Other than his writings which directly concern restoration processes (Vermeer Illuminated. Conservation, Restoration and Research, 1994) Mr. Wadum has published two important studies of Vermeer; one of the artist's exquisite painting technique ("Contours of Vermeer," in Vermeer Studies, 1998) and the other of his use of perspective ( "Vermeer in Perspective" in Johannes Vermeer, 1995) (see bibliography below).
Through Mr. Wadum's fascinating observations readers are not only able to appreciate the subtleties of Vermeer's technique, but also to bridge the gap that lies between the understanding of an artist's painting technique and his expressive aims while at the same time providing valuable information for evaluating the artist in the art historical and philosophical context of his times.
The Essential Vermeer: Dutch seventeenth-century painters, and in particular fijnschilders such as Gerrit Dou, Gerrit ter Borch and Frans van Mieris, had achieved an extremely high degree of pictorial sophistication and technical proficiency. How might we compare Vermeer's technique to those of his contemporaries?
Jørgen Wadum: Vermeer's somewhat limited output of paintings over an active period of about 20 years as an artist, has in the past been seen as the result of a slow, meticulous and painstaking painting process. However, examining Vermeer's paintings it becomes obvious that rather than being a fijnschilder, he actually exercised less refined technique than one at first would associate with his works. Primarily in his early ambitious history paintings Vermeer exercises a blunt way of applying his paint, actually recalling Marshall Smith's comments from 1692 on Rembrandt's technique: 'Rembrandt had a Bold Free way, Colours layd with a great Body, and many times in Old Mens Heads extraordinary deep Shaddows, very difficult to Copy, the Colours being layd on Rough and in full touches, though sometimes neatly Finish'd'1 The vigour with which the painting of Christ in the House of Mary and Martha has been executed is simply impressive—even the scale of painting taken into consideration. For Rembrandt's pupil Gerrit Dou, who was almost obsessively devoted to the meticulous rendering of the tiniest objects, it is no wonder that his works took a long time to create. Dou was definitely preoccupied by inviting the spectator up close to cherish the painted objects within his paintings, sometimes even framed behind doors, referred to as displayed in 'boxes', ornated with trompe-l'œil or issued with painted curtains, folded aside in order to give us the opportunity to peep into his miraculous world. This, however, is an interesting divergence to the perception of Rembrandt, who is supposed to have kept visitors in his studio at a distance from his paintings in order to appreciate them better.2
For Vermeer, some of whose paintings were also allegedly kept in 'boxes', only one painting warrants a very close inspection without revealing its crude brushstrokes: The Girl with the Pearl Earring. In this small painting we still find the bluntness of paint application in the undermodeling of the headgear and her shoulder—protruding out towards the spectator. However, in this painting Vermeer for the only time expressed his skills as a fijnschilder: the face of the Girl is exquisitely subtle in handling, with a softness in paint application that makes the transition between the huge tonal variety found in her face almost invisible. Blended paint obscures any sharp contour or edge of the girl's nose, yet the eyes are sharply rendered. Most surprising, in the middle of this delicate depiction of facial features, one observes the blunt, almost provocative highlights at the corners of the mouth. In these highlights (and the one in the left corner of her mouth even consists of two light pink brushstrokes superimposed) Vermeer clearly sets himself off from the fijnschilder's. This is not his ambition after all; rather, his goal is to portray the tonal values, the indication of a subtle spatial illusion and the essential capturing of a charismatic intimacy between the girl and the spectator.
What do feel is the most significant single difference between the painting methods and materials of Vermeer and those of his contemporaries?
There is no other seventeenth century artist that from very early on in his career employed, in the most lavish way, the exorbitantly expensive pigment lapis lazuli, natural ultramarine. Not only do we see it used in elements that are intended to be shown as blue, like a woman's' skirt, a sky, the headband on the Girl with the Pearl Earring and in the satin dress of his late A Lady Seated at a Virginal. Vermeer also used the lapis lazuli widely as underpaint in, for example, the deep yet murky shadow area below the windows in The Music Lesson and the Glass of Wine. The wall below the windows—an area in these paintings in the most intense shadow as opposed to the strong light entering through the windows above—was composed by Vermeer by first applying a dark natural ultramarine, thus indicating an area void of light. Over this first layer he then scumbled varied layers of earth colours in order to give the wall a certain appearance: the earth colours, umber or ochre, should be understood as reflected warm light from the strongly lit interior, reflecting its multiple colours back onto the wall. This working method most probably was inspired by Vermeer's understanding of Leonardo's observations (something I have elaborated on in "Contours of Vermeer")3 that the surface of every object partakes of the colour of the adjacent object. This means that no object is ever seen entirely in its natural colour. A comparable but even more remarkable yet effectual use of natural ultramarine is in the Girl with a Wine Glass. The shadows of the red satin dress are underpainted in natural ultramarine, and due to this underlying blue paint layer, the red lake and vermilion mixture applied over it acquires a slightly purple, cool and crisp appearance that is most powerful.
Even after Vermeer's supposed financial breakdown after the so-called rampjaar (year of disaster) in 1672, he continued to employ natural ultramarine most generously, such as in the above-mentioned Lady Seated at a Virginal. This could suggest that Vermeer would have been supplied with materials by a collector—something again adding additional weight to John Michael Montias' most convincing theory of Pieter Claesz. van Ruijven being Vermeer's patron.
In 1994 you conducted the restoration of the Girl with a Pearl Earring. As a result, the picture's masterful three-dimensional effect, brilliant colour scheme and subtleties of expression can again be appreciated. In Vermeer Illuminated 4, many fascinating details of the restoration have been reported. However, a day-by-day physical contact with perhaps Vermeer's most intimate work must have made itself felt. Would you be so kind as to share with us some of those feelings? Could you describe any particularly significant moment in your investigation and/or restoration of Vermeer's painting?
A Dutch newspaper reporter in 1994, after an interview about the public restoration of both the Girl with the Pearl Earring and the View of Delft, concluded that I after months of work on the painting had fallen in love with the Girl. I would prefer to describe my work on the restoration as comparable to that of a surgeon carrying out a delicate operation. Whether on an unknown citizen or a celebrity, the care for the operation is more important than eventual emotions relating to the body in intensive care. However, I cannot escape the fact that every square millimetre of this painting has forever been rooted into my memory – and thus also the impression of how Vermeer created the image. The intimacy of the girl's gaze towards the spectator is extraordinary—and prompts great admiration for the maker. Recalling just how yellow and murky the pigmented varnish that I removed was, I became amazed that the painting had caused the attention it did in its former condition. A painting with such exquisite qualities that for decades had been deliberately obscured became a painful reality. Being the privileged one to rediscover under old retouching the highlights at the corners of the extremely sensual, moist mouth was exceptional. Additionally, after removing a stray paint particle that gave the pearl an imaginative third highlight, the earring was restored to a closer appearance to what was Vermeer's creation. Therefore, having been part of a team of people who were able to reveal so much of what the artist almost 350 years ago aimed at depicting is an exceptional honour—something that makes one humble towards the skill of the maker of this breathtaking painting.
In your study Vermeer in Perspective5 of Vermeer's use of the "pinhole and string method" of perspective construction, you very ably bridged the gap between Vermeer's painting methods and artistic expression, revealing more accurately his original intentions and the way and his contemporaries may have viewed his work. Do you feel that further study of Vermeer's painting technique might yield other insights into Vermeer's work, and if so in what direction?
Each generation perceives and describes the impressions gained from Vermeer's works based on the intellectual baggage and the reception they master. Obviously a future generation will find new relations within Vermeer's visual oeuvre—aided by further archival study and technical observations of Vermeer's contemporaries. Vermeer's paintings have been the subjects of close study already for more than a century after Thoré-Bürger 're-discovered' the Delft artist. However, it has so far been possible to disclose new aspects of Vermeer's technique. Moreover, I would think that further comprehension of how seventeenth-century maesenases collected and also displayed their collections might add information we do not understand fully today. Viewing the known oeuvre of Vermeer, it becomes obvious that the direction in which light falls in his paintings vary. In most works it enters the scene from the left; in a few from the right. Could we understand part of the oeuvre as created to fit a collector's 'gallery' and how the light fell in that room? Is it conceivable that van Ruijven acquired paintings from his favourite artist with this in mind, equal to the manner with which the artist Jacob van Campen, working under the supervision of Constantijn Huygens, perceived the Oranjezaal outside The Hague? For this interior it was stated that within certain paintings the light should be painted entering from the left and in others from the 'wrong side' (the right). This was done in order to complete for the spectator the illusion of natural light and painted light following the same laws of nature.
Pieter Fransz. de Grebber in his 1648-rules of painting (in my view prompted by his engagement in the making of the Oranjezaal) wrote: "…there are various reasons for knowing where a piece [painting] will hang before it is made; for the light; for the height at which it will hang; in order to position our distance and horizon…"6 Would the concept about this notion also have been valid to Vermeer's understanding of composing paintings? Whether we together—conservators, scientists, (art) historians—in the future will be able to solve questions like these remains to be seen.
Some historians believe that the 34–36 extant works by Vermeer represent a significant part of the artists' total output. Thus, in the span of his short 20-year career he most likely painted from two to three paintings a year, a slim number by any standard. Do you feel that such a low output can be explained more easily by peculiarities of his painting methods or by other factors?
Montias has made a most qualified estimate of Vermeer's possible total output close to 50 paintings. The reason for his possible small production (some paintings may obviously have been lost, some may have an erroneous attribution) could equally be explained by the fact that he by no means was forced to paint in order to earn his living for his family and himself. His mother-in-law supplied him with sufficient financial means. On top of this the prices he obtained for his painting also were relatively high. We just need to recall the diary entry by the French diplomat and scientist De Monconys. In 1663 he paid a visit to Vermeer where he apparently saw none of the master's works. However, at the Delft baker Hendrik van Buyten he saw one of Vermeer's paintings showing a single figure. Monconys found the price, six hundred livres, unjustified. This price was equal to a painting he was to see a few days later by the fijnschilder Gerrit Dou, for which Dou asked the amazed Frenchman an equal amount of money.
Vermeer's painting technique as such, the way of applying the paint, was in no way restrained by a meticulous style comparable to the fijnschilder's. His low output was rather caused by the need for a long mental process before he was satisfied with the image. Vermeer needed a long period of maturing his works in order to reach an acceptance of having reached the final and aesthetically pleasant accomplishment. As many authors in the past have observed, Vermeer in many paintings deleted earlier rendered elements from his interiors. In lectures, I have shown digital reconstructions of how crammed some of his paintings may have been at earlier stages in their making. These are in particularly evident in the Girl with the Pearl Necklace, the Woman with the Water Pitcher, but also in the early Woman Reading a Letter by the Window.
In Vermeer Illuminated we illustrated that on one of the small women in the foreground of the View of Delft Vermeer painted grey vertical lines over so-called premature cracks in her black skirt. As this sort of drying cracks in paint does not form over night, Vermeer could therefore have applied this modest detail only after the painting had been sitting in his studio for a considerable amount of time. Elements like these prove that the timeframe he needed to reach the finished image was without limit – and not necessarily equivalent to the hours actually spend applying the paint.
In a number of late paintings Vermeer employed green earth, a dull green pigment, in the rendering of the shadows of the flesh tones. Although this pigment had been widely used by medieval painters for the preparation of flesh tones for panel paintings, its use gradually diminished in favor of more the natural appearing brown earth tones except among some later mannerist painters. Why do you believe Vermeer may have favored such a use green earth and by whom do you believe it use may have been suggested?
To speculate from where Vermeer may have been influenced would be most hazardous. Recent research into the painting technique of Rembrandt Research Project (RRP), an artist often referred to as a possible teacher or inspiration for Vermeer, has not been able to be substantiated—rather on the contrary (see forthcoming Mauritshuis exh. cat. Carel Fabritius (1622–1654), 25 September, 2004 through 9 January, 2005. Artists employing green earth in flesh tones in a comparable way as observed in Vermeer's women have hitherto not been found in Dutch seventeenth-century painting. There is but one possible link, however far-fetched it could well be: Italian paintings. Archival research has revealed that several collectors in Rotterdam, Dordrecht and possibly also in Delft possessed Italian paintings. However, it may be more important that both Vermeer's father but also Vermeer himself were dealing in paintings. This metier, together with his public duty as the headman of the Painters Guild in Delft, gave him the reputation of a connoisseur of Italian painting. We recall how Vermeer, together with a colleague Johannes Jordaens, was called to The Hague to evaluate a collection of Italian paintings offered to the Rembrandt Research Project (RRP). Vermeer's conclusion was firm and resolute; the paintings were not Italian at all, on the contrary, great pieces of rubbish not worth much. He would necessarily have had the knowledge to argue for this devastating ordeal.
Thus, his knowledge of Italian art and possibly also painting techniques could have proven influential over his own choice of materials—and thus the green earth pigment employed in the shadow areas in the incarnates of his later figures.
In his writings, Arthur K. Wheelock Jr., and other scholars familiar with the Vermeer's painting technique, have often referred to the technique called glazing. Would you please briefly explain the fundamental characteristics of this technique and, judging by your observation, the extent to which Vermeer actually employed it in his painting? Would you please cite a specific example of glazing in Vermeer's painting?
A glaze is a thin film of translucent paint laid oven an already dry underpaint. The superimposed layer of paint will change the tonality of the lower layer by filtering the colours of both when reflected into our eye. Therefore the total colour effect of a glazed area of a painting will have a luminous quality which differs from that of light reflected from a solid, opaque paint layer. Adding a colorant to a binding medium, typically a drying oil, produces a glaze. A colorant can often not be discerned even under high magnification, contrary to a scumble. A scumble is a thin layer of opaque or semi opaque paint in medium rich paint. Such a scumble allows the underpaint to shimmer through between the loosely distributed pigments of the scumble. Also this produces a mix of multi coloured stimuli to our eye.
Vermeer, like most of his contemporaries, employed both techniques. The scumble was utilized when applying the subdued ochre tint over the ultramarine underpaint on the walls in deep shadow below some of the windows in his interiors. Also the loosely distributed large lead white particles applied over the underpaint of the cityscape of the View of Delft fit this characteristic. However, this scumble was again to be covered by the transparent glaze of red lake that gave the tiled roofs their deep red glow.
In your essay, Contours of Vermeer, you have proposed that the chronological order of Vermeer's painting might be beneficially revised based on a thorough study of Vermeer's materials and painting techniques. For these ends, which technical factors should be taken into consideration? Has any progress been made since in this area?
It is extremely chancy to propose this kind of revisions, and arguments will to a certain degree be also based on subjective impressions. Although I still feel uncomfortable with the chronology of some of Vermeer's paintings it is primarily in the early oeuvre that the largest difficulties are found. How did Vermeer reach from the indeed very bluntly painted Christ in the House of Maria and Martha to the much more subtle execution of D Rembrandt Research Project (RRP)? And what made Vermeer transform these themes into the most eloquent Procuress, signed 1656, with its finely conceived anatomical elements such as the woman's hand receiving the money? This early painting, recently restored, reveals a most impressive luminosity with its abundance of colours. This painting is now an even more important calibration point for the early oeuvre than ever before. We may expect some adjustments after this impressive restoration achievement by our colleagues in Dresden. And what caused the metamorphosis in Vermeer's production from history painting into small-scale, quiet domestic genre scenes?
Considering the slim number of extant Vermeer paintings and the great strides made in the scientific analysis of his canvases, an overall systematic analysis might yield precious information regarding both Vermeer's techniques and consequentially his expressive aims. Does there already exist an analogous project or are there any future plans for such a study?
JW: By producing a very thorough technical and (art) historical analysis of an artist's oeuvre, we gain an insight into his artistic methodology and materials choice and utilization. By examining one artist to this high degree, similar to what has been done so eloquently by the Rembrandt Research Project (RRP), we achieve a comprehensive understanding of a particular artist. By describing some of the astonishing discoveries within one such oeuvre, we may exacerbate the value of this discovery—just because we are lacking enough comparative material. From the very outset of the materials research into Vermeer's technique that we undertook at the Mauritshuis in the mid 1990's, we carefully and with gratitude made extensive use of the Rembrandt Research Project documentation. By doing this, we were able to restrain our excitement about seemingly remarkable revelations within Vermeer's oeuvre that in a larger context become more commonplace. What I am aiming at saying is that research into other seventeenth-century artists other than Rembrandt, Vermeer and the like will yield a much greater understanding, revealing more about the artistic achievements during this period.
In the Netherlands, the MolArt project and the current De Mayerne research project, whose aim it is to scientifically and (art) historically examine seventeenth-century artists materials and techniques, as well as the issue of material deterioration, will be most important in achieving further insights in this matter.7 Any monographic study of materials and techniques, like those from the past including Jan Steen, Van Goyen and Albert Cuyp, as well as the current Mauritshuis examination of Carel Fabritius' oeuvre, will add important pieces to the puzzle about artists of the past and their innovative manner of utilizing known materials. The way artists experimented with employing these materials in a novel way, as well as a sense of how modern materials could be created and substituted for those with proven shortcomings, are visualized. We should not underestimate to which degree a seventeenth-century artist was aware that certain materials were prone to a quick alteration due to incapability of materials or just as a sheer influence of external factors such as excessive light and fluctuation of humidity.
With more artists' oeuvres thoroughly examined, we shall reach a level of knowledge that may even reveal that what we claim as rare and unique achievements of only a few great artists we in the present know a good deal about are actually more generalized achievements: perhaps Rembrandt, Dou and Vermeer will end up being much more children of their time. However, this will in no way diminish the attraction of the great artists; on the contrary, they will have achieved with comparable materials to their own generation more outstanding creations that survived the judgment of centuries of spectators—not because of the materials they used but primarily for the way they utilized them to reach such exquisite artistic qualities.
Based on technical considerations, do you feel that the authorship of any painting(s) by Vermeer ought to be reconsidered?
At the Vermeer-symposium in The Hague in 1995, I was the first to advance the questioning about the authorship of the Saint Praxedis, based on a close examination of the painting itself compared with early accepted works by Vermeer. Part of this lecture was included in my article "Contours of Vermeer" from 1998. My arguments then presented are now being further substantiated by new observations and supplementary scientific examinations. On the other hand, and on the flip side of the case with the Saint Praxedis, there may well be other paintings traditionally attributed to Vermeer which in recent years have been rejected but, after renewed insights and thanks to the vast technical material now available on Vermeer, may prove to be genuine works by the master himself. I'm quite convinced that the artist's oeuvre soon may increase with yet another outstanding work.
If you could ask Mr. Johannes Vermeer a single question, what would you wish to know?
If we could have lunch on Saint Luke's day. | <urn:uuid:00825b24-81c3-498b-8bcf-46d9495ef88a> | CC-MAIN-2021-21 | http://www.essentialvermeer.com/interviews_newsletter/wadum_interview.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00456.warc.gz | en | 0.968656 | 5,113 | 2.75 | 3 |
The Future of China
The vast majority of the material we present is about China's past, not its present or future, but it is the consideration of China's future that holds everybody's attention. Will China be the dominant nation within a few decades? Will China develop military ambitions and build an overseas empire? Here we muse upon these questions by looking at China's long history and traditions in an attempt to suggest some answers.
China as number one nation
When Martin Jacques chose 'When China Rules the World' for the title of his book he immediately received reactions of shock and incredulity. How could a country that had been part of the Third World just fifty years ago soon become the top dog? First, to take issue with the title: as far as history is concerned it is a question of 'When China rules the world again', because China has been the foremost nation on earth for much of its history. For two thousand years China has had the highest population, the most foreign trade, the largest cities, and the most creative artistic and wisest philosophic traditions. It has only been since 1840, the start of China's century of humiliation, that she lost her dominance and was relegated to the lower leagues; it was only in 1850 that London overtook Beijing as the world's largest city. From China's uniquely long perspective she has suffered no more than a temporary blip, and under new management will be back at the top of the table within thirty to fifty years.
Martin Jacques also makes much of the phrase 'civilization state' rather than 'nation state' with reference to China. This reflects China's unique history of cultural cohesion. One and a half billion people consider themselves part of the huge Chinese family in a much stronger sense than Americans or Europeans. All Han Chinese claim descent from the founding ancestor, the Yellow Emperor. There is a strong feeling of 'Chineseness' that separates the people from other nationalities. China is far more racially homogeneous than most countries, with an ancient written language understood by all. An example shows how deep this belief in a separate identity goes: Tim Clissold recounts how he came across an old woman who believed that anybody born of Chinese descent would naturally speak Chinese, as if it were a genetic trait. These considerations make China a continental rather than a regional power and put China in a different category to any other nation on earth.
Already by 1909, Sir Henry Arthur Blake, the British governor of Hong Kong, foresaw much of what was going to happen in China.
“The awakening of China means her entrance into strong competition for her full share of the trade of the world. With her great commercial capacity and enormous productive power she will be able to a large extent to supply her own wants, and will certainly reach out to distant foreign markets. Exploration discloses the fact that in bygone ages Chinese influence has reached to the uttermost parts of the globe. It is to be found in the ornaments of the now extinct Baethucs of Newfoundland, and in the buried pottery of the Incas of Peru, while in Ireland a number of Chinese porcelain seals have been discovered at different times and in some cases at great depths, the period, judging from the characters engraved upon them, being about the ninth century A.D..”
“It may be that with the increase of commercial activity, wages will rise to such an extent as to bring the cost of production in China to the level of that of other nations; if not, then the future competition may produce results for the wage-earners of Liverpool, Birmingham, and Manchester evoking bitter regret that the policy of coaxing, worrying, bullying, and battering the Far Eastern giant into the path of commercial energy has been so successful. Given machinery, cheap labor, unsurpassed mineral deposits, and educated determination to use them, and China will prove a competitor before whom all but the strongest may quail. The only competition for which she will never enter is a competition in idleness. Every man works to the full extent of his capacity, and the virile vigor of the nation is intact. With the coming change in her educational system that will strike off the fetters of competitive memorizing and substitute rational reflection, China must be a potent factor in the affairs of the world. When that time comes let us hope that the relations between China and the British Empire will be the outcome of mutual confidence and goodwill.”
'China' by Sir Henry Arthur Blake, pp. 120–126.
The amazingly rapid industrialization of China since the 1980s has astounded many, but it should be remembered that her neighbor Japan had a head start. Following the Meiji Restoration in 1868, Japan soon proved that an Asian culture could quickly adopt Western methods and industrialize rapidly. By 1895 Japan had defeated China in the First Sino-Japanese War, one of China's worst humiliations.
China at the time held the contrary position, stubbornly maintaining a distance from Western industrialized culture. The Imperial view was that the possession of mass-produced 'things' would not enhance people's lives; China had a stable society where everything necessary for a harmonious life was readily to hand. Britain's attempts to open up China to international markets in the Opium Wars were seen as just the aggressive actions of barbarian merchant adventurers, little more than the pirates that had harried China's coast for centuries. Another factor was that China had a huge and poverty-stricken population that kept labor so cheap that industrial development was uneconomic: it was cheaper to do a job manually than to buy machinery or tools to do it. When Britain forced the doors wide open by the imposition of treaty ports throughout the country in the period 1860-1920, China still did not fully embrace the foreign economic system. It took the trauma of the Japanese occupation and the civil war with the Nationalists for China to be convinced it must take on the Western development model.
Many Western commentators see the Mao era as a totally disastrous episode of madness; this is simply not the case. Mao re-united the country and took the first necessary steps towards development: building the roads, railways, power stations and reservoirs that form the underlying infrastructure. Without them there would have been no platform on which to develop industry. Perhaps more importantly, Mao developed a national culture of possibility, changing the mindset from gloom and oppression to optimism. Without the enthusiasm to develop China as a modern nation, nothing much would have happened. The lessons learned in the Republican period 1912-1937 were taken to heart: even though foreign investment had poured in and there was rapid industrialization in Manchuria and the ports along the coastline, the will to make it all succeed was lacking; there was no national aspiration to make the 'foreign' system work.
Since 1980 the conversion to a modern industrialized nation has been meteoric, far more rapid than the industrialization of Europe or America. The transformation was stimulated by the rise of the Asian Tigers (South Korea, Taiwan, Hong Kong and Singapore) in the period 1960-90. Foreign investment has poured into China, fueling development with promises of subsidies and reduced tariffs. China's large cities are transformed every few years, with street maps struggling to keep up with developments. Parallels can be drawn with the development of America: in both cases a vast continental area was ripe for industrialization, free movement of labor allowed an oversupply of cheap workers to build up new industries, and a huge internal market for manufactured goods soon developed. There is a lot of catching up to be done: Americans consume fourteen times as much energy as the Chinese, and America has one car for every two citizens while China has only one per nine people.
The period 1980-97 proved very difficult. Many industries were ripe for investment and expansion but they were tied down by poor industrial relations and bureaucratic interference. Joint ventures with foreign venture capitalists proved disastrous. Many factories still had a tied workforce, and lay-offs would almost inevitably lead to destitution, as there was no free movement of labor. Since the opening of stock markets in Shanghai and Shenzhen the investment has been mainly internal, and so there has been less friction over 'foreign' management and ownership. China's development is following a ripple effect: the first provinces to be industrialized, such as Guangdong, have now moved towards service industries, while under-developed provinces further inland provide the cheap labor for new factories.
China's Foreign policy
How will China use her newfound dominance in the region? Will China be content to stay within its own borders? Will she follow the trajectory of other empires and spread her dominion far and wide? Most commentators think that China has no appetite for foreign adventures. However, there are neighboring countries that have cause for concern. Taiwan was for centuries a province of China and its forced separation remains a festering sore. When China was strongest her dominion stretched out to neighboring Myanmar, Korea and Vietnam and deep into Central Asia. It is likely that a dominant, wealthy China will strongly influence policy in the region, but there are no suggestions that she wishes to formally widen her borders. China is building up trading blocs of nations in which she will be the dominant force in cultural and financial terms. By the sheer size of her economy China will dwarf her neighbors in a process of unification rather than conquest. The Chinese currency (the renminbi) and language (Mandarin) are likely to become the regional standards for anyone interested in development in East Asia.
It is Japan that stands out as the country most resisting Chinese empowerment. China's other neighbors have turned to China while Japan continues to look to the U.S. for support and leadership. China's frosty relationship with Japan is much worse than that with the U.S. If the U.S. should withdraw its many military bases there, forcing Japan to rearm, tensions are only likely to worsen.
In terms of China's internal stability, Western pressure for the foundation of a separate nation of Tibet is unlikely to succeed. Tibetan cultural influence spreads not only over the modern province of Tibet but into Qinghai, Xinjiang, Gansu and western Sichuan, and so accounts for about a third of China's land area. There is no clear border along which a new nation of Tibet could be formed. The 'Free Tibet' movement is based on a romantic re-writing of history: the rule of the Buddhist lamas was both brutal and corrupt, not some Utopian Shangri-La smashed by Chinese aggression. The huge investment by Beijing in this faraway province continues to enhance the standard of living of ordinary Tibetans. However, China continues to maintain a 'parental' attitude towards Tibetans; at heart they are not seen as equals.
For centuries past, and certainly today, a much more likely province to challenge Chinese rule is Xinjiang. There are two main reasons why this huge, remote province is more of a problem. Firstly, unlike Tibet, Xinjiang is rich in natural resources and so could form a wealthy independent state. Secondly, Xinjiang has nations on its borders with close cultural affinity. It used to be called 'East Turkestan', reflecting its position as part of Muslim Central Asia. The native peoples are Caucasian and have never mixed with the Asian people of China.
Turning away from Asia, will China form an imperial system like the Europeans or Americans? The lessons of empire show that a dominant power will create an empire even if that is not its intention. The pattern of control follows a standard trajectory. A group of entrepreneurs sets up business in a foreign country. The colony prospers, creating friction with the local inhabitants. The mother country feels it has to intervene to protect its far-away citizens. Then a small tactical military expedition turns into a permanent presence to defend those citizens and their economic assets. A backlash by the local inhabitants, in acts of terrorism and sabotage, escalates the conflict so that the foreign power is obliged to extend its campaign far into the country's interior. The military campaign, although successful, becomes permanent rather than temporary in order to handle potential future emergencies. This pattern of colonialism shows how insidious it can be. A nation does not start with a motive of 'carving up' new territory for itself; at each step it believes it is doing the right thing: protecting its citizens and maintaining the wider peace. A militarily strong nation is seen as justified in intervening in these circumstances. Will China follow this pattern as more of its nationals settle in foreign countries and set up businesses?
Many believe that China will instead reinvent the 'tributary system', in which neighboring nations, in return for submission to Chinese policy, are given access to Chinese markets. This is in contrast to the Westphalian system, in which nations are considered equal and independent. It is the sheer scale of China that makes it impossible to consider other nations her equals. Whether she likes it or not, China is the major power in eastern Asia. She is the dominant force in the Shanghai Cooperation Organization, which is a little like a NATO for East Asia; the member states vow to respect and support each other's independence. Following the model of the U.S., international organizations are likely to be set up in China so that she can influence them, just as the U.S. heavily influences the U.N. Another development will come from the power of the Chinese currency, the renminbi. It became a reserve currency in November 2015 and will soon become the main 'common' currency of East Asia, just as the Mexican dollar, British pound and U.S. dollar used to be. Deference by neighbors as new 'tributary' states can be clearly seen when China reacts to the maltreatment of its citizens in Malaysia and other nations.
One unique feature of China is that it has never had allies that are equals. She has allies such as North Korea and Pakistan, but not allies in the same sense as the NATO nations. There is a deep-seated concept of China as not being in the same league as other nations, so an equal alliance is not something that makes sense.
Recent foreign policy is regarded as timid, stepping back from joining in conflicts and remaining neutral where possible. There is mounting pressure for China to become more vocal and be willing to act as the world's policeman when necessary. In Africa the policy has been to provide loans and aid without the strings that Western nations and institutions (the World Bank, the IMF) attach to their deals. This makes the loans much more attractive to African states. However, involvement in Africa has not been wholly welcomed: the new cheap goods have undercut local industries, and China continues to employ its own nationals rather than local Africans. For thousands of years China has known that strategy must project both soft and hard power (王道 wáng dào and 霸道 bà dào), using soft power to keep allies loyal and hard power to deal with barbarians and revolutionaries.
China has by far the largest armed forces in the world: 2,285,000 military personnel in 2011, against 1,429,995 for the U.S. Expenditure, however, is much lower, and because of China's huge population military spending per head is a modest $106 per capita, below the world average. As a nuclear power China is ranked 4th, with around 250 nuclear missiles, behind Russia, the U.S. and France.
The trend in the development of the services is towards higher-technology systems and broader capability. For instance, China deployed her first aircraft carrier, the Liaoning (a refurbished Russian vessel), in 2012, and another carrier is under construction. The building of a modern navy is a new and significant development for China, which has always been a land-based civilization. However, the U.S. still dominates militarily, and China's only answer would be asymmetric warfare - for example using anti-satellite missiles. The United States' use of military power has been significant: with 800 military bases overseas it projects control without the need for a formal empire of nations pledging allegiance.
When considering the government of China, once again it is the long hand of history that has to be taken into account. Readers may think that surely the coming of Mao Zedong in 1949 totally smashed the ancient Imperial administrative system of government; however, some attributes resolutely clung on. This is not hard to fathom: although the people at the top changed, the myriad administrators below remained and simply adapted to the new regime. For centuries the passport to wealth and influence was as an educated government official, not as a merchant. Ten years ago if you mentioned Confucius with reference to government you might have got a questioning smile. Nowadays he is back as the revered figure whose views on wise and benign government are constantly quoted. In retrospect the Mao era was not a break in Confucian guiding philosophy; the essence remained in spite of the campaign against the 'four olds'. Mao's role was very much Imperial in tone, imposing wide-scale reforms and seeking the advice of only a few trusted colleagues. Government continues to be paternalistic, with devotion to the leader just as prevalent as it had been to the dynastic Emperors. The President remains a paternal figure, responsible for the care of his people and in return loyally obeyed by them.
However officials remain relatively poorly paid and so the temptation of making more money illegally is rife. Deals over changes of land ownership are a particularly lucrative source of extra income.
Democracy in China
The common view in the West is that democracy is at the pinnacle of the evolution forms of government and given time and suitable conditions all nations will gradually aspire to this aim. This seems unlikely as a general rule and inappropriate for China in particular. Historically speaking democracy has come late if at all to developed nations. Universal suffrage came to the UK in 1928 and the US in 1965 (when Afro-Caribbeans in the south were given full rights); in both cases long after industrialization and the peak of their power. China saw how Hong Kong and Shanghai prospered in the colonial era without democracy or human rights. India is the only country that holds free and fair elections since independence and remains a developing nation. Democracy is not a pre-condition for industrial development. For democracy to work there has to be at least two political parties with different views on the country's general direction. China does not have such contending organizations, if people want a change of policy they do that through the Communist party's machinery and not by setting up opposition groups. Joining such a group would be seen as deeply disloyal to the nation in China, in many ways the party is the country and any rejection of the party is seen as seditious and unpatriotic. For these reasons the absence of democracy should not be seen as backwardness, just another equally legitimate system of government. Gū Hóngmíng辜鸿铭 an official at the start of the Republican period wrote about democracy in these terms: “This religion of the worship of the mob imported from Great Britain and America into China, which has brought on this revolution and the present nightmare of a Republic in China. Democracy was now threatening to destroy the most valuable asset of the Civilization of the world today - the real Chinese spirit. Democracy will destroy not only the civilization of Europe but all civilization in the world”.
While democracies choose their President through a universal ballot, China has a politburo system. The senior leadership with advice from their predecessors choose their candidate who has proved himself at provincial and national levels. This would seem a more prudent form of selection than in a democracy where it is the size of the advertising budget measured in hundreds of millions of dollars that determines the outcome. An analogy is often made about the ideal restaurant; is it appropriate to democratically elect the chef? Surely it is better to judge on the dishes the chef cooks up. In a similar way it is the policy rather than the personnel that need to be carefully chosen, too much focus on individuals takes the emphasis away from the general direction of travel.
There is limited democracy at a local level and this has shown signs of growth. Some cities elect their mayors and some hold open policy forums where anyone can state their view. However full blown democracy has also been tried. Pingchang County ➚, Sichuan is held up as the example of such an experiment. Some residents are recorded as applauding the benefits but it has not been taken up elsewhere, in general it is just used to demonstrate that China is willing to experiment with other systems.
Centralized power often has the effect of stymieing development. This is because officials can not just impose plans, they have to be enthusiastically embraced by all, in the early days of the Peoples Republic a series of national five year plans set out key goals. While these plans were effective for building national infrastructure projects (roads, canals, water and electricity supply) they have never been effective in fueling industrial development at a local level. China is just too big and the system of control too distant for it to be able to micromanage nascent industries. The Deng Xiaoping reforms of the 1980s ushered in a new approach of relaxing central control. The Special Economic Zones ➚ that were created were spectacularly successful. Freed from planning control entrepreneurs just did whatever they liked. The leaders became rapidly rich and powerful forming a group of ‘red barons ➚’ that exercised control over the new fiefdoms. The central government allowed the situation to continue so long as development was rapid and the leaders broadly subservient. This uneasy balance of a strong, strict central government and a relaxed local administration continues to create tensions. The government at times steps in to keep corruption and mismanagement under control. Noted recent examples of this dichotomy are Zhou Yongkang ➚; Bo Xilai ➚ at Chongqing and Lai Changxing ➚ at Ningbo.
There is a long historical tradition of graft in China ➚ and this is why it is so hard to remove, many do not see that there is much wrong in creaming off a proportion of deals. Briefly during Mao's tenure monetary corruption was under control, and it was partly his fears of growing corruption that he used to justify the unleashing of the Cultural Revolution. It is claimed that corrupt payments may constitute a significant proportion of China's GDP, at one time it was placed as the fourth most corrupt country in the world. Despite frequent anti-corruption purges it is difficult to see how this endemic problem can be easily solved, it is just the normal way to do business in China. To succeed the state has to wage continual war on corruption, in a democracy it is ordinary people who can hold strong, vested interests to account.
The centralized control allows for careful planning of reform. With such a vast country it is possible to use different approaches in different regions and see which one works out best. To help a new scheme to gain momentum the initiative is usually couched as just a small tweak to the system. The model of slow and ponderous development is neatly encapsulated in Deng Xiaoping's use of the phrase ‘groping for stones to cross the river’, this was a distinct break from Mao's doctrinaire approach of promoting a clear, single path to the future.
When democracy and China are mentioned it is not long before the events of Tiananmen Square 1989 are put on the table as both showing the desire for democracy among the ordinary Chinese people and ruthless suppression of such an aspiration by the Communist government. The events are still hotly debated and widely misrepresented. A substantial section of the demonstrators were not campaigning for democracy, their complaint was that the market reforms were occurring too fast and that they widened the gap between rich and poor. The Tiananmen Square protests came after the screening of the ground-breaking 'River Elergy ➚' TV series in 1988 that ridiculed romantic illusions about Chinese traditions. Like Lu Xun's acerbic view on tradition it opened up debate about a possible New China not slavishly rooted in the past.
When an Imperial dynasty lost their Mandate of Heaven the people were justified in rebelling against it. The same could be true of the Communist Party, if the party should fail to deliver on prosperity and development it is likely that mass revolts will take place, however as there is no other party to turn to in China it will be some party faction that will emerge and take over leadership. Chinese people look to the lesson of Russia, as the communist USSR was the second world super power, when limited democracy took hold not only did the 'empire' split into squabbling nations but the economy took a severe nose dive. China will not risk such a Gorbachev ➚ style reform. When the USSR broke down in 1989, it did so because it had not delivered the promised prosperity. For all its faults no-one can put out the notion that the Chinese Communist government has failed to deliver on growth and prosperity and so the justification for revolt is just not there at present. Democracy is seen as a dangerous experiment that may halt development in China heralding a new Cultural Revolution when rival nascent parties fight for dominance halting progress for no good reason. Recent polls show a strong majority are still content with the Communist Party, there is little appetite for change. There has been a long standing aversion to dramatic change as it might bring in a ruinous period of chaos.
China's Soft Power
With increasing wealth in recent years China's spending power has increased China's soft power in many areas including sport. Formula 1 now has an annual race at Shanghai. Snooker ➚, once the preserve of only British players has been revolutionized by interest from Chinese players, supporters and gamblers. In Classical music it is artists from China that now dominate the list of top world players on the piano and violin. Following Chinese tradition, musical accomplishment has become the study for many children. The importance of the Chinese language too must not be underestimated. Mandarin is spoken by a billion people as first or second language, double the number of English speakers. It is likely to become the common language for the whole of Eastern Asia.
Chinese food is a less obvious part of China's soft power, it is found and respected worldwide, and the Chinese diaspora have mainly stuck to it rather than turning to the cuisine of the host nation. Another important soft power for any nation is its film industry. China has become more and more influential as a film producer, the vast cinema audiences in China are forcing Westerners to make films that will attract Chinese people to watch them (by including a lead Asian actor for instance).
Central to Chinese culture is a feeling of belonging to a clan. Family ties are still very strong, stronger than in other nations. When China has the power to reach out and protect its citizens it will be hard to resist such calls in future.
Human rights in China
No discussion of present day China would be complete without mentioning human rights. The Chinese President has to fend off such questions whenever he meets a western leader. The lack of human rights is seen as the major flaw that must be corrected before China can join the ranks of ‘developed’ nations. It remains a contentious issue, the Chinese perspective is that rights are just the mirror image of responsibilities. The right to health care reflects the responsibility of the family and state to keep people healthy. The right to free speech is mirrored by the responsibility of showing tolerance and loyalty. In any case countries without a written constitution, such as the UK, do not have formal human rights. Chinese people will argue that you only need human rights under an oppressive government, if you have a benevolent administration that is governing wisely you do not need to have rights written into law as the state always acts in your best interests. It is this trust in a paternalistic system that makes the push for human rights legislation a less relevant issue for China.
Westerners are probably aware of the control the Chinese state exerts over the media. In particular China employs over 100,000 people to police the Internet. Blog posts are deleted, search terms and web sites are blocked. Although it is possible to circumvent the ‘Great Firewall of China ➚’ it is only people who already hold anti-government views that tend to do so.
There is continued concern that China lacks real religious freedom. This is generally misplaced, for there is and always has been a very tolerant attitude to religion in China. Unlike Western states there has been only very rarely a single state religion, China has historically tolerated three main religions side-by-side: Daoism; Confucianism and Buddhism. China has for a long time succeeded in separate state from church. However after a history of interference by Western missionaries in Chinese affairs since 1860 the state has stipulated that it does not permit its citizens to be subject to a foreign spiritual leader, all religious institutions must be led and run from China.
For the world in general, not just in China, it is a stable society with a fairly policed rule of law that is more valued than democracy or human rights. It is confidence in a firm but fair legal system that allows people to live in peace.
Possible future for China
There are many possible futures for China. As we have seen since the economic crash 2007-8 China's role has been crucial. Should China's growth collapse the world economy will falter – this is just a result of its sheer size.
1. Continuing growth
Most experts are agreed that China will continue to grow rapidly, but not at the previous 10% level, more like 6%. Such continuous growth is still sustainable because of the untapped potential of inland provinces and the 40% of population still working on the land with low incomes. When this huge potential workforce, measured in hundreds of millions, have been moved to urban employment China will have far outstripped the US with only India as a potential rival.
Few think that China will halt growth and just stagnate. However as this happened in Japan it is not beyond the realms of possibility. The continual desire for economic growth is very ingrained but it is not entirely logical. When people and resources have all been incorporated into the economy there is no clear avenue for fresh growth. From the environmental movement comes the view that economic growth is not essential or sustainable in the long term. Historically China has had centuries when little change and very modest economic growth. Growth has never been an aim in itself, the objective has been for greater general harmony. Once most Chinese people are removed from poverty the main motivation for rapid growth will have been removed. However in China there has been a continued appetite to embrace all that is new and despise all that is old in technology, style, architecture and this general aspiration shows no sign of abating.
Another trajectory often touted is that China will simply join the ranks of ‘westernized’ nations. If you visit China today you will see international brands in all the major cities and at a glance you might think you were not in China at all. Will all the developed world become homogenous? When you scratch the surface the globalization is seen as just skin deep as there is a strong loyalty to Chinese culture and traditions – greater than elsewhere. The interdependence of nations has been brought sharply into relief during the 2008 financial crisis, it was the continuance of Chinese growth that saved the whole world economy, U.S. dependence on China for investment has been heavily underlined. International companies will feel the need to have significant presence if not their headquarters in China. It will be the leading power in East Asia following its own agenda and not joining the club of Western nations.
4. Soft Power
In the interconnected world of today a nation can further its aims much more effectively with soft rather than hard power. Britain is considered by many to be the nation with the most soft power, partly because of the relentless increase in the speaking of English. London is widely regarded as the most cosmopolitan city ➚ in the world making it the natural meeting place among peoples and the home for world organizations. China is developing its own soft power with such institutions as the worldwide network of Confucius Institutes ➚.
China is building up soft power whenever it can, hiding its true strength. It has launched a world service television channel CCTV 9 ➚. China's universities are rapidly becoming world class and just as importantly many Chinese students study abroad, acting as unofficial ambassadors. China knows the importance of recognizable brands and is supporting its industries in promoting Chinese companies as world not just domestic brand names (e.g. Huawei ➚ (telecoms); Haier ➚; Hisense ➚; Alibaba ➚ and Xiaomi ➚). China quite rightly set great store on the Beijing Olympics ➚ in 2008, the lavish expenditure on new venues and the success of Chinese competitors added greatly to China's prestige as a world class sporting nation. As a key member of Shanghai Cooperation Organization ➚, China has strong ties with all major states in the region.
5. Chinese Overseas Empire
Many people are watching with interest the actions of China in Africa. Africa currently offers the best opportunities for investment. China needs Africa to plug the shortfall in its demand for food and mineral resources; so the Chinese government is using its huge monetary reserves to invest heavily in Africa. Will China then form an Overseas empire ➚ with many African states if not officially ruled by China then at least financially dependent on China's decisions? One of China's hidden secrets is that many do not value people with dark skins, the general hierarchy of races is Chinese, White Caucasians, Southern Asians, South Americans then Africans. There is a deep seated racial element strengthened by the age-old yearning for as light a skin as possible. For example when Condoleezza Rice ➚ was American Secretary of State she was the target of considerable racist abuse in the Chinese social media. This seems to have grown out of the fact that skin color was an obvious indicator of social class - anyone who worked outdoors had a giveaway tan. Those spending a life of leisure would have the palest skins.
6. Environmental Issues
Western concerns about care for the environment in China are widespread. With very rapid industrialization it is China's environment that inevitably takes a hammering. It is true that the government has put in strong policies to mitigate the effects but many observers see the implementation patchy and half-hearted. At the local level the aspiration to get rich trumps any concern for the environment. There are many stories of bribes to local officials so that they turn a blind eye to the rules. There is a strong cultural tradition that treasures the untamed delights of nature, but up until recently the Western mindset of empathy for animals has been absent. Nature is still seen as something to be tamed and exploited and other creatures seen as of no value compared to humans. There are signs that the utilitarian attitudes are changing to one of stewardship but this will take many years to come to fruition. In the meantime the march of rapid growth will do huge damage to China's natural environment.
However it is true that only rich countries have the luxury of nurturing nature, developing countries can not do this, the pressures for human survival take priority. In particular the rapid construction of power stations in China has caused concern; as they burn cheap 'dirty' brown coal at one time it is said to have led to 30% of China's rainfall being classified as 'acid' and polluting 70% of Chinese lakes. Calls for China to join the various Climate Change initiatives have to take account of the fact that China is still building up electricity supplies for its people compared to developed nations.
China's future in summary
Economics and politics can only say so much. Reducing a nation to statistics is no yardstick for the future; they do not and can not measure a people's general aspiration, it is the mood of the nation rather than figures that is of paramount importance. China remains supremely confident and positive about the future in general and of China's future in particular. In the end, it is this shared consensus of a bright destiny that outweighs other concerns. The description of a nation as ‘advanced, developed and civilized’ is no longer the property of just the West. The renminbi is destined to become the regional currency and mandarin the regional language .
Although it is commendable to consider all citizens of the world the same, equality of rights does not make all people the same. China has a deep sense of distinct cultural identity, with a strong loyalty to the extended Chinese family. It has an immense cultural continuity and cohesion unlike anywhere else. Unlike other diaspora, Chinese settlers have kept their cultural loyalties even after many generations. It is likely that the togetherness will lead to a sense of cultural superiority as happened with European empire builders.
Similarly industrial development does not put a nation on a necessarily convergent track with Western industrialization, the world has many possible futures not just one inevitable target. China still sees itself as central to the world both culturally and geographically at the Temple of Heaven in Beijing. The last dozen years have shown an emancipation away from the western vision of the future. China has the strength and confidence to define its own way in the world just as she has always done. | <urn:uuid:9e1ca1fc-88eb-4045-9168-ee2065564409> | CC-MAIN-2021-21 | http://chinasage.org/future-of-china.htm | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991557.62/warc/CC-MAIN-20210517023244-20210517053244-00055.warc.gz | en | 0.96604 | 7,657 | 2.96875 | 3 |
Death metal
- Stylistic origins: thrash metal, early black metal
- Cultural origins: mid-1980s, United States (particularly Florida)
- Typical instruments: vocals, electric guitar, bass guitar, drums
- Mainstream popularity: underground in the 1980s; gradual rise, peaking at small to moderate in the early 1990s; increasing diversity and legitimacy since the 2000s
- Subgenres: melodic death metal, technical death metal
- Fusion genres: deathcore, blackened death metal, death/doom, deathgrind, death 'n' roll
- Regional scenes: Florida, New York, Sweden, United Kingdom, Brazil, Japan, Poland
- Other topics: extreme metal, death growl, blast beat, list of death metal bands
Death metal is an extreme subgenre of heavy metal. It typically employs heavily distorted guitars, tremolo picking, deep growling vocals, blast beat drumming, minor keys or atonality, and complex song structures with multiple tempo changes.
Building from the musical structure of thrash metal and early black metal, death metal emerged during the mid-1980s. Metal acts such as Slayer, Kreator, Celtic Frost, and Venom were important influences on the crafting of the genre. Possessed and Death, along with bands such as Obituary, Carcass, Deicide and Morbid Angel, are often considered pioneers of the genre. In the late 1980s and early 1990s, death metal gained more media attention as niche record labels like Combat, Earache and Roadrunner began to sign death metal bands at a rapid rate. Since then, death metal has diversified, spawning a variety of subgenres.
Emergence and early history
English heavy metal band Venom, from Newcastle, crystallized the elements of what later became known as thrash metal, death metal and black metal with their 1981 album Welcome to Hell. Their dark, blistering sound, harsh vocals, and macabre, proudly Satanic imagery proved a major inspiration for extreme metal bands. Another highly influential band, Slayer, formed in 1981. Although a thrash metal act, Slayer played music more violent than that of their thrash contemporaries Metallica, Megadeth and Exodus, and their breakneck speed and instrumental prowess, combined with lyrics about death, violence, war and Satanism, won them a rabid cult following. According to Allmusic, Slayer's third album Reign in Blood "inspired the entire death metal genre" and left a deep mark on the bands that would lead it.
Possessed, a band formed in the San Francisco Bay Area in 1983, was credited by Allmusic with "connecting the dots" between thrash metal and death metal on their 1985 debut album, Seven Churches. While often described as Slayer-influenced, current and former members of the band have actually cited Venom and Motörhead, as well as early work by Exodus, as the main influences on their sound. Although the group released only two studio albums in their formative years, they have been described by both music journalists and musicians as either "monumental" in developing the death metal style or as the first death metal band. Earache Records noted that "....the likes of Trey Azagthoth and Morbid Angel based what they were doing in their formative years on the Possessed blueprint laid down on the legendary Seven Churches recording. Possessed arguably did more to further the cause of 'Death Metal' than any of the early acts on the scene back in the mid-late 80's."
Around the same time as the dawn of Possessed, a second influential metal band formed in Florida: Death. Originally called Mantas, Death was formed in 1983 by Chuck Schuldiner, Kam Lee, and Rick Rozz. In 1984 they released their first demo, Death by Metal, followed by several more. The tapes circulated through the tape-trader world, quickly establishing the band's name. With guitarist Schuldiner adopting vocal duties, the band made a major impact on the scene. Fast minor-key riffs and solos, complemented by rapid drumming, created a style that would catch on in tape-trading circles. Allmusic's Eduardo Rivadavia describes Schuldiner as "widely recognized as the Father of Death Metal". Death's 1987 debut release, Scream Bloody Gore, has been described by About.com's Chad Bowar as the "evolution from thrash metal to death metal" and by the San Francisco Chronicle as "the first true death metal record".
Along with Possessed and Death, other pioneers of death metal in the United States include Autopsy, Necrophagia, Master, Morbid Angel, Massacre, Atheist, Post Mortem, Obituary and Deicide.
By 1989, many bands had been signed by eager record labels wanting to cash in on the subgenre, including Florida's Obituary, Morbid Angel and Deicide. Death metal bands from Florida are often labeled collectively as "Florida death metal". The genre spread to Sweden in the late 1980s, flourishing with pioneers such as Carnage, God Macabre, Entombed, Dismember and Unleashed. In the early 1990s, the typically melodic "Gothenburg metal" style emerged, with bands such as Dark Tranquillity, At the Gates, and In Flames.
Following the original death metal innovators, new subgenres began to emerge by the end of the decade. British band Napalm Death became increasingly associated with death metal, in particular on 1990's Harmony Corruption. The album displays aggressive and fairly technical guitar riffing, complex rhythms, a sophisticated growling vocal delivery by Mark "Barney" Greenway, and socially aware lyrical subjects, leading to the creation of the "grindcore" subgenre. Other bands contributing significantly to this early movement include Britain's Bolt Thrower and Carcass, and New York's Suffocation.
Death's fourth album, Human (1991), exemplified the emerging modern death metal sound. Founder Schuldiner helped push the boundaries of uncompromising speed and technical virtuosity, mixing technical and intricate rhythm guitar work with complex arrangements and emotive guitar solos. Other examples from 1991 are Carcass's Necroticism – Descanting the Insalubrious, Suffocation's Effigy of the Forgotten and Entombed's Clandestine. By this point, all of the genre's defining characteristics were present: abrupt tempo and count changes, occasionally extremely fast drumming, morbid lyrics and growled vocal delivery.
Earache Records, Relativity Records and Roadrunner Records became the genre's most important labels, with Earache releasing albums by Carcass, Napalm Death, Morbid Angel, and Entombed, and Roadrunner releasing albums by Obituary and Pestilence. Although not initially death metal labels, they became the genre's flagship imprints in the early 1990s. Other labels formed alongside them, such as Nuclear Blast, Century Media, and Peaceville; many went on to achieve success in other genres of metal throughout the 1990s.
In September 1990, Death's manager Eric Greif held one of the first North American death metal festivals, Day of Death, in the Milwaukee suburb of Waukesha, Wisconsin; it featured 26 bands, including Autopsy, Broken Hope, Hellwitch, Obliveon, Revenant, Viogression, Immolation, Atheist, and Cynic.
Death metal's popularity reached its initial peak in 1992–93, with bands such as Morbid Angel, Cannibal Corpse and Obituary enjoying mild commercial success. However, the genre as a whole never broke into the mainstream. Its mounting popularity may have been partly responsible for a strong rivalry between the Norwegian black metal and Swedish death metal scenes; Fenriz of Darkthrone has noted that Norwegian black metal musicians were "fed up with the whole death metal scene" at the time. Death metal diversified in the 1990s, spawning a rich variety of subgenres that still command a large underground following today.
Instrumentation
The setup most frequently used within the death metal genre is two guitarists, a bass player, a vocalist and a drummer, often using "double bass blast beats". Although this is the standard setup, bands occasionally incorporate other instruments such as electronic keyboards.
The genre is often identified by fast, highly distorted and down-tuned guitars, played with techniques such as palm muting and tremolo picking. The percussion is usually aggressive and powerful; blast beats, double bass and exceedingly fast drum patterns frequently add to the genre's complexity.
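As a rough illustration of why blast-beat drumming sounds so dense, the sketch below lays out one bar of a "traditional" blast beat, in which kick and snare alternate on successive 16th notes, and computes the resulting strokes per second. It is a minimal sketch, not drawn from any source cited here; the 240 BPM tempo and the 16th-note grid are illustrative assumptions.

```python
# Minimal sketch of a one-bar "traditional" blast beat on a 16th-note grid.
# The tempo and grid size are illustrative assumptions, not genre rules.

BPM = 240            # assumed tempo; extreme metal commonly runs well past 200
STEPS_PER_BAR = 16   # one bar of 4/4 divided into 16th notes

# 'K' = kick, 'S' = snare, '-' = rest. Kick and snare alternate on every
# 16th note; a ride or hi-hat (not shown) typically strikes on each step too.
kick = ["K" if step % 2 == 0 else "-" for step in range(STEPS_PER_BAR)]
snare = ["S" if step % 2 == 1 else "-" for step in range(STEPS_PER_BAR)]

strokes_per_second = BPM / 60 * 4  # four 16th notes per quarter-note beat
print(" ".join(kick))
print(" ".join(snare))
print(f"{strokes_per_second:.1f} alternating strokes per second at {BPM} BPM")
```

At the assumed 240 BPM the grid works out to 16 strokes per second, which helps explain why the pattern reads as a continuous wall of sound rather than a discernible backbeat.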
Death metal is known for abrupt changes of tempo, key, and time signature. It may include chromatic chord progressions and varied song structures, rarely employing the standard verse-chorus arrangement; compositions tend to emphasize an ongoing development of themes and motifs.
Vocals and lyrics
Death metal vocals are often guttural roars, grunts, snarls, and low gurgles, colloquially known as death growls. Death growling is mistakenly thought to be a form of screaming using the lowest vocal register, known as vocal fry; vocal fry is actually a form of overtone screaming, and true death growling is created by an altogether different technique. The three major methods of harsh vocalization used in the genre (vocal fry screaming, false chord screaming, and true death growls) are often mistaken for one another. The style is sometimes referred to, tongue-in-cheek, as Cookie Monster vocals, due to the vocal similarity to the voice of the popular Sesame Street character of the same name. Although often criticized, death growls serve the aesthetic purpose of matching death metal's aggressive lyrical content. High-pitched screaming is also commonly utilized in death metal and can be heard in songs by Death, Exhumed, Dying Fetus, Cannibal Corpse, and Deicide. Death metal singers often alternate between shrieks and growls to create a contrasting effect.
The lyrical themes of death metal may invoke slasher-film-style violence, but may also extend to topics like Satanism, anti-religion, occultism, nature, mysticism, philosophy, science fiction, and politics. Although violence is explored in various other genres as well, death metal may elaborate on the details of extreme acts, including mutilation, dissection, torture, rape and necrophilia. Sociologist Keith Kahn-Harris has commented that this apparent glamorization of violence may be attributed to a "fascination" with the human body that all people share to some degree, a fascination that mixes desire and disgust. Heavy metal author Gavin Baddeley has also stated that there does seem to be a connection between "how acquainted one is with their own mortality" and "how much they crave images of death and violence" via the media. Additionally, contributing artists often defend death metal as little more than an extreme form of art and entertainment, similar to horror films in the motion picture industry. This explanation has brought such musicians under fire from activists internationally, who claim that the point is often lost on adolescents, who are left with the glamorization of such violence without social context or awareness of why such imagery is stimulating.
According to Alex Webster, bassist of Cannibal Corpse, "The gory lyrics are probably not, as much as people say, [what's keeping us] from being mainstream. Like, 'death metal would never go into the mainstream because the lyrics are too gory?' I think it's really the music, because violent entertainment is totally mainstream."
Origin of the term
The most popular theory of the subgenre's naming credits Possessed's 1984 demo, Death Metal; the title track would also be featured on the band's 1985 debut album, Seven Churches. Possessed vocalist/bassist Jeff Becerra said he coined the term in early 1983 for a high-school English class assignment. Another possible origin is a fanzine called Death Metal, started by Thomas Fischer and Martin Ain of Hellhammer and Celtic Frost; the name was later given to the 1984 compilation Death Metal released by Noise Records. The term might also derive from other recordings, such as Death's 1984 demo, Death by Metal.
Subgenres and fusion genres
The examples cited below are not necessarily exclusive to one particular style. Many bands can easily be placed in two or more of the following categories, and a band's specific categorization is often a source of contention due to personal opinion and interpretation.
- Melodic death metal: Scandinavian death metal could be considered the forerunner of "melodic death metal". Melodic death metal, sometimes referred to as "melodeath", is heavy metal music mixed with some death metal elements, such as growled vocals and the liberal use of blast beats. Songs are typically based on Iron Maiden-esque guitar harmonies and melodies with typically higher-pitched growls, as opposed to traditional death metal's brutal riffs and much lower death grunts. Carcass is sometimes credited with releasing the first melodic death metal album with 1993's Heartwork, although Swedish bands In Flames, Dark Tranquillity, and At the Gates are usually mentioned as the main pioneers of the genre and of the Gothenburg metal sound.
- Technical death metal: Technical death metal and "progressive death metal" are related terms that refer to bands distinguished by the complexity of their music. Common traits are dynamic song structures, uncommon time signatures, atypical rhythms and unusual harmonies and melodies. Bands described as technical death metal or progressive death metal usually fuse common death metal aesthetics with elements of progressive rock, jazz or classical music. While the term technical death metal is sometimes used to describe bands that focus on speed and extremity as well as complexity, the line between progressive and technical death metal is thin. "Tech death" and "prog death", for short, are terms commonly applied to such bands as Cryptopsy, Edge of Sanity, Opeth, Origin and Sadist. Cynic, Atheist, Pestilence and Gorguts are examples of bands noted for creating jazz-influenced death metal. Necrophagist and Spawn of Possession are known for a classical music-influenced death metal style. Death metal pioneers Death also refined their style in a more progressive direction in their final years. The Polish band Decapitated gained recognition as one of Europe's primary modern technical death metal acts.
- Deathcore: With the rise in popularity of metalcore, some of its traits have been incorporated into death metal. Bands such as Suicide Silence, Salt the Wound and the early works from Job for a Cowboy combine metalcore with death metal influences. Characteristics of death metal, such as fast drumming (including blast beats), down-tuned guitars, tremolo picking and growled vocals, are combined with screamed vocals, melodic riffs and breakdowns.
- Death/doom: Death/doom is a style that combines the slow tempos and melancholic atmosphere of doom metal with the deep growling vocals and double-kick drumming of death metal. The style emerged during the late 1980s and gained a certain amount of popularity during the 1990s. It was pioneered by bands such as Autopsy, Winter, Asphyx, Disembowelment, Paradise Lost, and My Dying Bride.
- Goregrind and deathgrind: This style mixes the intensity, speed, and brevity of grindcore with the complexity of death metal. It differs from death metal in that guitar solos are often a rarity, shrieked vocals are more prominent as the main vocal style (though death growls are still utilized, and some deathgrind bands favor them), and songs are generally shorter, usually between one and three minutes. The style differs from grindcore in its more technical approach and less evident hardcore punk influence and aesthetics. Some notable examples of deathgrind are Brujeria, Cattle Decapitation, Cephalic Carnage, Pig Destroyer, Circle of Dead Children and Rotten Sound.
- Blackened death metal: A style that combines death metal and black metal, often adopting the thematic characteristics of black metal as well: Satanism and occultism are common topics and images. The style was influenced by bands such as Sarcófago, Blasphemy, Beherit and Impaled Nazarene, and in the mid-1990s it was developed further by bands such as Belphegor, Behemoth, Akercocke, Zyklon, Septic Flesh, and Sacramentum.
Other fusions and subgenres
There are other heavy metal subgenres that have come from fusions between death metal and non-metal genres, such as the fusion of death metal and jazz; Atheist and Cynic are two examples. The former went as far as to include jazz-style drum solos on albums, and the latter incorporated elements of jazz fusion. Nile have also incorporated Egyptian music and Middle Eastern themes into their work, while Alchemist have incorporated psychedelia along with Aboriginal music. Some groups, such as Nightfall, Septic Flesh, and Eternal Tears of Sorrow, have incorporated keyboards and symphonic elements, creating a fusion of symphonic metal and death metal sometimes referred to as symphonic death metal.
- ^ "Death Metal/Black Metal". Allmusic. http://www.allmusic.com/explore/style/d384. Retrieved 2008-07-04. "Death Metal grew out of the thrash metal in the late '80s."
- ^ a b c d e Dunn, Sam (Director) (August 5, 2005). Metal: A Headbanger's Journey (motion picture). Canada: Dunn, Sam. http://imdb.com/title/tt0478209/.
- ^ Joel McIver Extreme Metal, 2000, Omnibus Press pg.14 ISBN 88-7333-005-3
- ^ The greatest metal band for Mtv
- ^ Joel McIver Extreme Metal, 2000, Omnibus Press pg.100 ISBN 88-7333-005-3
- ^ Joel McIver Extreme Metal, 2000, Omnibus Press pg.55 ISBN 88-7333-005-3
- ^ Rivadavia, E. Possessed: Biography, allmusic, (accessed August 13, 2008)
- ^ allmusic ((( Death > Biography )))
- ^ Metal Rules Interview with Chuck Schuldiner
- ^ The Best Of NAMM 2008: Jimmy Page, Satriani Models Among The Highlights | News @ Ultimate-Guitar.Com
- ^ Morbid Angel page @ Allmusic "Formed in 1984 in Florida, Morbid Angel (along with Death) would also help spearhead an eventual death metal movement in their home state"
- ^ Is Metal Still Alive? WATT Magazine, Written by: Robert Heeg, Published: April 1993
- ^ Silver Dragon Records "During the 1990s death metal diversified influencing many subgenres"
- ^ Venom – Welcome to Hell review @ Allmusic "Make no mistake: Welcome to Hell, more than any other album, crystallized the elements of what later became known as thrash, death, black, and virtually every other form of extreme metal"
- ^ Venom band page @ Allmusic "Venom developed a dark, blistering sound which paved the way for the subsequent rise of thrash music; similarly, their macabre, proudly Satanic image proved a major inspiration for the legions of black metal bands"
- ^ a b Into The Lungs of Hell Metal Hammer magazine, Written by: Enrico de Paola, Translated by: Vincenzo Chioccarelli, Published: March 2000 ""
- ^ Slayer band page @ Allmusic
- ^ Huey, Steve. "Reign in Blood – Slayer". Allmusicguide.com. http://www.allmusic.com/album/r18220. Retrieved 2007-01-05.
- ^ John Peel,, Albert Mudrian (2004). Choosing Death: The Improbable History of Death Metal and Grindcore. Feral House. ISBN 193259504X.
- ^ Scaruffi, Piero (October 15, 2003). A History of Rock Music: 1951-2000 (page 277). iUniverse. ISBN 0595295657.
- ^ Possessed – Seven Churches review @ Allmusic
- ^ Possessed band page @ Allmusic
- ^ POSSESSED interview - Jeff Becerra
- ^ POSSESSED interview - Brian Montana
- ^ Purcell, Natalie J. (2003). Death Metal music: the passion and politics of a subculture (page 54). McFarland & Company. ISBN 0786415851.
- ^ McIver, Joel (2008). The Bloody Reign of Slayer. Omnibus Press. ISBN 1847721095.
- ^ Ekeroth, Daniel (2008). Swedish Death Metal (page 12). Bazillion Points. ISBN 9780979616310.
- ^ John Peel, Albert Mudrian (2004). Choosing Death: The Improbable History of Death Metal and Grindcore (page 70). Feral House. ISBN 193259504X.
- ^ Earache.com Jeff Becerra interview
- ^ Death band page
- ^ Purcell, Natalie J. (2003). "3". Death Metal Music: The Passion and Politics of a Subculture. McFarland & Company. pp. 54. ISBN 0786415851. http://books.google.com/books?id=6ZErQs5hCUQC. Retrieved June 2007.
- ^ Death biography, allmusic
- ^ About.com
- ^ Aldis, N. & Sherry, J. Heavy metal Thunder, 2006, San Francisco: Chronicle ISBN 0-8118-5353-5
- ^ about.com: "Post Mortem offered my first real exposure ever to death metal, arriving before standards like Death’s Scream Bloody Gore in 1987 and Autopsy’s Severed Survival in 1989"
- ^ Boston Herald: "Boston isn’t known as a death-metal hotbed, but if the city could claim one pioneer band in the genre, it was Post Mortem"
- ^ Boston Globe:"helped pioneer the underground subgenre of death metal"
- ^ Empty Words, where there are dozens of reviews along this line
- ^ 'Death Metal Special: Dealers in Death' Terrorizer #151
- ^ Biography, Official Atheist site, accessed December 10, 2008
- ^ Zebub, Bill (2007). Black Metal: A Documentary.
- ^ Purcell, N. Death Metal music: the passion and politics of a subculture, at 9, McFarland, 2003 (retrieved October 28, 2010)
- ^ Kahn-Harris, K. Extreme metal: music and culture on the edge, at 32, Berg Publishers, 2007 (retrieved October 28, 2010)
- ^ Marsicano, D. Melodic Death Metal, About.com (retrieved October 27, 2010)
- ^ FretJam Guitar Lessons, "How to Play Death Metal Guitar"
- ^ Interview with Samuel Deschaine, Death Metal Vocal Instructor 2011
- ^ Melissa Cross, The Zen Of Screaming
- ^ "Cookie Monster Vocals". about.com. http://heavymetal.about.com/od/glossary/g/gl_cookiemonste.htm. Retrieved January 21, 2006. . See further examples of this usage at "The cookie monster vocal explained". rocknerd. Archived from the original on February 18, 2006. http://web.archive.org/web/20060218034831/http://rocknerd.org/article.pl?sid=04/07/15/1626209. Retrieved January 21, 2006.
- ^ Sharpe-Young, Garry. Death Metal, ISBN 0-9582684-4-4
- ^ Moynihan, Michael, and Dirik Søderlind (1998). Lords of Chaos (2nd ed.). Feral House. ISBN 0-922915-94-6, p. 27
- ^ Purcell, Natalie J. (2003). "3". Death Metal Music: The Passion and Politics of a Subculture. McFarland & Company. pp. 39–42. ISBN 0786415851. http://books.google.com/books?id=6ZErQs5hCUQC. Retrieved June 2007.
- ^ Wikihow: How to Appreciate Death Metal
- ^ Khan-Harris, Keith. Extreme Metal: Music and Culture on the Edge. Oxford: Berg, 2006. ISBN 978-1-84520-399-3
- ^ Baddeley, Gavin. Raising Hell!: The Book of Satan and Rock 'n' Roll
- ^ Alex Webster (Cannibal Corpse) interview
- ^ Purcell, Natalie J. (2003). "4". Death Metal Music: The Passion and Politics of a Subculture. McFarland & Company. pp. 53. ISBN 0786415851. http://books.google.com/books?id=6ZErQs5hCUQC. Retrieved June 2007. "Meanwhile, in 1983, the term was co-coined by some American teens who formed the band Possessed and labeled their demo "Death Metal"."
- ^ Ekeroth, Daniel (2008). Swedish Death Metal (page 11). Bazillion Points. ISBN 9780979616310.
- ^ Purcell, Natalie J. (2003). "3". Death Metal Music: The Passion and Politics of a Subculture. McFarland & Company. pp. 53. ISBN 0786415851. http://books.google.com/books?id=6ZErQs5hCUQC. Retrieved June 2007. "The term "Death Metal" emerged when Thomas Fischer and Martin Ain, a pair of Swiss Venom fans in the band Hellhammer (later Celtic Frost), started a fanzine called "Death Metal". Later, their record label German Noise Records used the "Death Metal" name for a compilation featuring Hellhammer"
- ^ Hellhammer biography"Karl from Noise is planning to call the LP Black Mass but it is Tom who talks him out of it and proposes Death Metal which actually is the name of the underground mag Tom used to run"
- ^ THE DEATH OF DEATH Martelgang Magazine, Written by: Anton de Wit, Published: January 2002, "Yet it's almost unthinkable that the term wasn't inspired by the band name Death or their first demo, Death by Metal from 1984."
- ^ Eduardo Rivadavia. "Decapitated Biography". Allmusic. http://www.allmusic.com/artist/p420031. Retrieved 2010-02-07.
- ^ "Decapitated's New Lineup Performs Live For First Time; Photos Available - Feb. 3, 2010". Blabbermouth.net. http://www.roadrunnerrecords.com/blabbermouth.net/news.aspx?mode=Article&newsitemID=134476. Retrieved 2010-02-07.
- ^ a b 'Doom Metal Special:Doom/Death' Terrorizer #142
- ^ a b c d e Purcell, Nathalie J. (2003). Death Metal Music: The Passion and Politics of a Subculture. McFarland & Company. pp. 23. ISBN 0786415851. http://books.google.com/books?id=6ZErQs5hCUQC. Retrieved April 2008.
- ^ Rivadavia, Eduardo. "Aborted". Allmusic. http://www.allmusic.com/artist/p568178. Retrieved 2009-06-10.
- ^ "The Locust, Cattle Decapitation, Daughters", Pop and Rock Listings, The New York Times, April 13, 2007. Access date: August 6, 2008.
- ^ Bryan Reed, The Daily Tar Heel, July 19, 2007. Access date: August 6, 2008.
- ^ Henderson, Alex. "Ninewinged Serpent review". Allmusic. http://www.allmusic.com/album/r1241205. Retrieved 2009-05-03.
- ^ Bowar, Chad. "Venganza review". About.com. http://heavymetal.about.com/od/reviews/gr/hacavitz.htm. Retrieved 2009-05-03.
- Ekeroth, Daniel (2008). Swedish Death Metal. Bazillion Points Books. ISBN 978-0-9796163-1-0
- Albert Mudrian, Choosing Death: The Improbable History of Death Metal & Grindcore (Feral House) ISBN 978-1-932595-04-8
- Kahn-Harris, Keith 'Extreme Metal: Music and Culture on the Edge' Berg, http://soulremnants.com, ISBN 1-84520-399-2
- Purcell, Natalie J. 'Death Metal Music: The Passion and Politics of a Subculture' McFarland & Company, ISBN 0-7864-1585-1
- Ian Christe. Sound of the Beast: The Complete Headbanging History of Heavy Metal. (New York, NY. Harper Collins, 2003) ISBN 978-0-380-81127-4
- Harrell, Jack. "The Poetics of Destruction: Death Metal Rock." Popular Music and Society. Spring 1995. Republished, April, 1996 in the Social Issues Resources Series (SIRS) database.
Heavy metal SubgenresAlternative metal · Avant-garde metal · Black metal · Christian metal · Crust punk · Death metal · Djent · Doom metal · Drone metal · Extreme metal · Folk metal · Funk metal · Glam metal · Gothic metal · Grindcore · Groove metal · Industrial metal · Metalcore · Neo-classical metal · Nintendocore · Nu metal · Post-metal · Power metal · Progressive metal · Rap metal · Sludge metal · Speed metal · Stoner metal · Symphonic metal · Thrash metal · Traditional heavy metal · Viking metal Notable scenes Culture Death metal Sub-genres Fusion genre Related articles Extreme metal Genres Sub-genres Fusion genres Notable scenes
Wikimedia Foundation. 2010. | <urn:uuid:1af85504-6d6d-4b6a-af07-68d3d485ca15> | CC-MAIN-2021-21 | https://en-academic.com/dic.nsf/enwiki/4754 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00377.warc.gz | en | 0.889346 | 6,403 | 2.75 | 3 |
Bangor, Maine
Modern Bangor was established by European Americans in the mid-1800s, based on the lumber and shipbuilding industries. Because the city was located on the Penobscot River, logs could be floated downstream from the North Maine Woods and processed at the city's water-powered sawmills, then shipped from Bangor's port to the Atlantic Ocean 30 miles downstream, and from there to any port in the world. With their wealth, the lumber barons built elaborate Greek Revival and Victorian mansions and the 31-foot (9.4 m) statue of Paul Bunyan. Today, Bangor's economy is based on services and retail, healthcare, and education.
Founded as Condeskeag Plantation, Bangor was incorporated as a New England town in 1791, after the end of the American Revolutionary War. There are more than 20 communities worldwide named Bangor, of which 15 are in the United States and named after Bangor, Maine by settlers in other areas. The reason for the choice of name is disputed but it was likely either from the eponymous Welsh hymn or from either of two places of that name in the British Isles: Bangor, Gwynedd, Wales, and Bangor, County Down, Northern Ireland, from where many immigrants traveled to the United States. The final syllable is pronounced gor, not ger. In 2015, local public officials, journalists, doctors, policemen, photographers, restaurateurs, TV personalities and Grammy-winning composers came together to record the YouTube video How To Say Bangor.
Bangor has a port of entry at Bangor International Airport, also home to the Bangor Air National Guard Base. Historically, Bangor was an important stopover on the great circle air route between the East Coast of the United States and Europe.
Bangor has a humid continental climate, with cold, snowy winters, and warm summers.
The Penobscot have inhabited the area around present-day Bangor for at least 11,000 years and still occupy tribal land on the nearby Penobscot Indian Island Reservation. They practiced some agriculture, but less than peoples in southern New England where the climate is milder, and subsisted on what they could hunt and gather. Contact with Europeans was not uncommon during the 1500s because the fur trade was lucrative and the Penobscot were willing to trade pelts for European goods. The site was visited by Portuguese explorer Estêvão Gomes in 1524 and by Samuel de Champlain in 1605. The Society of Jesus established a mission on Penobscot Bay in 1609, which was then part of the French colony of Acadia, and the valley remained contested between France and the Kingdom of Great Britain into the 1750s, making it one of the last regions to become part of New England.
In 1769 Jacob Buswell founded a settlement at the site. By 1772, there were 12 families, along with a sawmill, store, and school, and in 1787 the population was 567. In September 1787 a petition, signed by 19 residents, was sent to the General Court of the Commonwealth of Massachusetts requesting that this designated area be named "Sunbury". On the back of it was written, "To the care of Dr. Cony, Hallowell". This petition was rejected before 6 October 1788, as the town referred to itself as Penobscot River, west side.
In 1779, the rebel Penobscot Expedition fled up the Penobscot River and ten of its ships were scuttled by the British fleet at Bangor. The ships remained there until the late 1950s, when construction of the Joshua Chamberlain Bridge disturbed the site. Six cannons were removed from the riverbed; five of which are on display throughout the region (one was thrown back into the river by area residents angered that the archeological site was destroyed for the bridge construction).
In 1790, the Kenduskeag Plantation could no longer defer incorporation from the General Court of Massachusetts, and in June Rev. Seth Noble hand-delivered the petition for incorporation to the State House in Boston. He left the town's name blank so he could obtain tentative approval, then wrote in Bangor, the name of a popular hymn known to be a favorite of Governor John Hancock. The incorporation was granted on 25 February 1791, signed by Hancock.
During the War of 1812, Bangor and Hampden were looted by the British.
Maine was part of the Commonwealth of Massachusetts until 1820, when it voted to secede from the state and was admitted to the Union under the Missouri Compromise as the 23rd state.
Bangor was near the lands disputed during the Aroostook War, a boundary dispute with Britain in 1838–39. The passion of the Aroostook War signaled the increasing role lumbering and logging were playing in the Maine economy, particularly in the central and eastern sections of the state. Bangor arose as a lumbering boom-town in the 1830s, and a potential demographic and political rival to Portland, Maine. For a time, Bangor was the largest lumber port in the world, and the site of furious land speculation that extended up the Penobscot River valley and beyond.
In 1861, after the outbreak of the American Civil War, a Unionist mob attacked and ransacked the offices of the Democratic newspaper the Bangor Daily Union, throwing the presses and other materials into the street and burning them. Editor Marcellus Emery escaped unharmed but did not resume publishing until after the end of the war.
During the American Civil War, the locally mustered 2nd Maine Volunteer Infantry Regiment was the first to march out of Maine in 1861. It played a prominent part in the First Battle of Bull Run. The 1st Maine Heavy Artillery Regiment, mustered in Bangor and commanded by a local merchant, lost more men than any other Union regiment in the war (especially in the Second Battle of Petersburg, 1864). The 20th Maine Infantry Regiment held Little Round Top in the Battle of Gettysburg. A bridge connecting Bangor with Brewer is named for Chamberlain, the regiment's leader, who was one of eight Civil War soldiers from Penobscot County towns to receive the Congressional Medal of Honor. Bangor's Charles A. Boutelle accepted the surrender of the Confederate fleet after the Battle of Mobile Bay. A Bangor residential street is named for him. A number of Bangor ships were captured on the high seas by Confederate raiders in the Civil War, including the Delphine, James Littlefield, Mary E. Thompson and Golden Rocket.
The Penobscot River's North Maine Woods drainage basin above Bangor was unattractive for farm settlement but well suited to lumbering. Winter snow allowed logs to be dragged from the woods by horse teams; carried to the Penobscot or its tributaries, log drives in the snowmelt brought them to waterfall-powered sawmills upriver from Bangor. The sawn lumber was then shipped from the city's docks, Bangor being at the head of tide (between the rapids and the ocean), to points anywhere in the world. Shipbuilding also developed, and Bangor capitalists owned most of the forests. The main markets for Bangor lumber were the East Coast cities. Much was also shipped to the Caribbean and, during the Gold Rush, to California via Cape Horn, before sawmills could be established in the West. Bangorians subsequently helped transplant the Maine culture of lumbering to the Pacific Northwest and participated directly in the Gold Rush themselves. Bangor, Washington; Bangor, California; and Little Bangor, Nevada, are legacies of this contact.
By 1860, Bangor was the world's largest lumber port, with 150 sawmills operating along the river. The city shipped over 150 million board feet of lumber a year, much of it in Bangor-built and Bangor-owned ships; in 1860 alone, 3,300 lumber ships passed its docks.
Many of the lumber barons built elaborate Greek Revival and Victorian houses that still stand in the Broadway Historic District. Bangor has many substantial old churches, and shade trees. The city was so beautiful it was called "The Queen City of the East." The shorter Queen City appellation is still used by some local clubs, organizations, events and businesses.
In addition to shipping lumber, 19th-century Bangor was the leading producer of moccasins, shipping over 100,000 pairs a year by the 1880s. Exports also included bricks, leather, and even ice (which was cut and stored in winter, then shipped to Boston, and even China, the West Indies and South America).
Bangor had certain disadvantages compared to other East Coast ports, including its rival Portland, Maine. Being on a northern river, its port froze during the winter, and it could not take the largest ocean-going ships. The comparative lack of settlement in the forested hinterland also gave it a comparatively small home market.
The first ocean-going iron-hulled steamship in the U.S., The Bangor, was built by the Harlan and Hollingsworth firm of Wilmington, Delaware, in 1844 and was intended to carry passengers between Bangor and Boston. On her second voyage, in 1845, she burned to the waterline off Castine. She was rebuilt at Bath and returned briefly to her earlier route, but was soon purchased by the U.S. government for use in the Mexican–American War.
Bangor continued to prosper as the pulp and paper industry replaced lumbering, and railroads replaced shipping. Local capitalists also invested in a train route to Aroostook County in northern Maine (the Bangor and Aroostook Railroad), opening that area to settlement.
Bangor's Hinkley & Egery Ironworks (later Union Ironworks) was a local center for invention in the 19th and early 20th centuries. A new type of steam engine built there, named the "Endeavor", won a Gold Medal at the New York Crystal Palace Exhibition of the American Institute in 1856. The firm won a diploma for a shingle-making machine the following year. In the 1920s, Union Iron Works engineer Don A. Sargent invented the first automotive snow plow. Sargent patented the device and the firm manufactured it for a national market.
Bangor is located at 44°48′13″N 68°46′13″W (44.803, −68.770). According to the United States Census Bureau, the city has a total area of 34.59 square miles (89.59 km2), of which 34.26 square miles (88.73 km2) is land and 0.33 square miles (0.85 km2) is water.
A potential advantage that has always eluded exploitation is the city's location between the port city of Halifax, Nova Scotia, and the rest of Canada (as well as New York). As early as the 1870s, the city promoted a Halifax-to-New York railroad, via Bangor, as the quickest connection between North America and Europe (when combined with steamship service between Britain and Halifax). A European and North American Railway was actually opened through Bangor, with President Ulysses S. Grant officiating at the inauguration, but commerce never lived up to the potential. More recent attempts to capture traffic between Halifax and Montreal by constructing an East–West Highway through Maine have also come to naught. Most overland traffic between the two parts of Canada continues to travel north of Maine rather than across it.
Bangor is bordered on the north by Glenburn, on the east by Orono and Veazie, on the southeast by Brewer, on the southwest by Hampden and on the west by Hermon.
1856: A large fire destroyed at least 10 downtown businesses and 8 houses, as well as the sheriff's office.
1869: The West Market Square fire, from which arose the Phoenix Block (the present Charles Inn). The fire destroyed 10 business blocks and cut off telegraphic communication.
1872: Another large downtown fire, on Main St., killed 1 and injured 7. The Adams-Pickering Block (architect George W. Orff) replaced the burned section.
In the Great Fire of 1911, a blaze that started in a hay shed spread to the surrounding downtown buildings and burned through the night into the next day. When the damage was tallied, Bangor had lost its high school, post office and custom house, public library, telephone and telegraph companies, banks, two fire stations, nearly a hundred businesses, six churches, a synagogue and 285 private residences over a total of 55 acres. The area was rebuilt, and in the process became a showplace for a diverse range of architectural styles, including Mansard, Beaux Arts, Greek Revival and Colonial Revival, and is listed on the National Register of Historic Places as the Great Fire of 1911 Historic District.
1914: The Bangor Opera House burned down, and two firemen were killed by a collapsing wall. A third was badly injured, and three others less seriously.
The destruction of downtown landmarks such as the old city hall and train station in the late 1960s Urban Renewal Program is now considered to have been a huge planning mistake. It ushered in a decline of the city center that was accelerated by the construction of the Bangor Mall in 1978 and subsequent big-box stores on the city's outskirts. Downtown Bangor began to recover in the 1990s, with bookstores, cafe/restaurants, galleries, and museums filling once-vacant storefronts. The recent re-development of the city's waterfront has also helped re-focus cultural life in the historic center.
Bangor is on the banks of the Penobscot River, close enough to the Atlantic Ocean to be influenced by tides. Upstream, the Penobscot River drainage basin occupies 8,570 square miles in northeastern Maine. Flooding is most often caused by a combination of precipitation and snowmelt. Ice jams can exacerbate high flow conditions and cause acute localized flooding. Conditions favorable for flooding typically occur during the spring months.
In 1807 an ice jam formed below Bangor Village, raising the water 10 to 12 feet above the normal high-water mark, and in 1887 a freshet covered the Maine Central Railroad Company rails between Bangor and Vanceboro to a depth of several feet. Bangor's worst ice jam floods occurred in 1846 and 1902. Both resulted from mid-December freshets that cleared the upper river of ice, followed by cold that produced large volumes of frazil ice, or slush, which was carried by high flows and formed a major ice jam in the lower river. In March of both years, a dynamic breakup of ice ran into the jam and flooded downtown Bangor. Though no lives were lost and the city recovered quickly, the 1846 and 1902 ice jam floods were economically devastating, according to the Army Corps analysis. Both floods occurred with multiple dams in place and little to no ice-breaking in the lower river. The United States Coast Guard began icebreaker operations on the Penobscot in the 1940s, preventing the formation of frozen ice jams during the winter and providing an unobstructed path for ice-out in the spring. Long-term temperature records show a gradual warming since 1894, which may have reduced the ice jam flood potential at Bangor.
In the Groundhog Day gale of 1976, a storm surge went up the Penobscot, flooding Bangor for three hours. At 11:15 am, waters began rising on the river and within 15 minutes had risen a total of 3.7 meters (12 ft), flooding downtown. About 200 cars were submerged and office workers were stranded until the waters receded. There were no reported deaths during this unusual flash flood.
Bangor has a humid continental climate (Köppen Dfb), with cold, snowy winters and warm summers, and is located in USDA hardiness zone 5a. The monthly daily average temperature ranges from 17.0 °F (−8.3 °C) in January to 68.5 °F (20.3 °C) in July. On average, there are 21 nights annually that drop to 0 °F (−18 °C) or below, and 57 days when the temperature stays below freezing, including 49 days from December through February. There is an average of 5.3 days annually with highs at or above 90 °F (32 °C); the most recent year without such a reading was 2014. Extreme temperatures range from −32 °F (−36 °C) on February 10, 1948, to 104 °F (40 °C) on August 19, 1935.
The average first freeze of the season occurs on October 7 and the last on May 7, resulting in a freeze-free season of 152 days; the corresponding dates for measurable snowfall, i.e. at least 0.1 in (0.25 cm), are November 23 and April 4. The average seasonal snowfall for Bangor is approximately 66 inches (170 cm), while seasonal snowfall has ranged from 22.2 inches (56 cm) in 1979–80 to 181.9 inches (4.62 m) in 1962–63; the record snowiest month was February 1969 with 58.0 inches (147 cm), while the most snow in one calendar day was 30.0 inches (76 cm) on December 14, 1927. Measurable snow occurs in May in about one-fourth of all years, while it has occurred just once (1991) in September. A snow depth of at least 3 in (7.6 cm) is on average seen 66 days per winter, including 54 days from January to March, when the snowpack is typically most reliable.
As of 2008, Bangor was the third most populous city in Maine, as it had been for more than a century. As of 2012, the estimated population of the Bangor Metropolitan Area (which includes Penobscot County) was 153,746, indicating a slight growth rate since 2000, almost all of it accounted for by Bangor. As of 2007, Metro Bangor had a higher percentage of people with high school degrees than the national average (85% compared to 76.5%) and a slightly higher share of graduate degree holders (7.55% compared to 7.16%). It also had a much higher number of physicians per capita (291 vs. 170), because of the presence of two large hospitals.
Historically Bangor received many immigrants as it industrialized. Irish-Catholic and later Jewish immigrants eventually became established members of the community, along with many migrants from Atlantic Canada. Of 205 black citizens who lived in Bangor in 1910, over a third were originally from Canada.
As of the census of 2010, there were 33,039 people, 14,475 households, and 7,182 families residing in the city. The population density was 964.4 inhabitants per square mile (372.4/km2). There were 15,674 housing units at an average density of 457.5 per square mile (176.6/km2). The racial makeup of the city was 93.1% White, 1.7% African American, 1.2% Native American, 1.7% Asian, 0.3% from other races, and 2.0% from two or more races. Hispanic or Latino of any race were 1.5% of the population.
There were 14,475 households of which 24.2% had children under the age of 18 living with them, 32.8% were married couples living together, 12.6% had a female householder with no husband present, 4.2% had a male householder with no wife present, and 50.4% were non-families. 37.9% of all households were made up of individuals and 12.4% had someone living alone who was 65 years of age or older. The average household size was 2.10 and the average family size was 2.76.
The median age in the city was 36.7 years. 17.8% of residents were under the age of 18; 16% were between the ages of 18 and 24; 26% were from 25 to 44; 25.8% were from 45 to 64; and 14.4% were 65 years of age or older. The gender makeup of the city was 48.2% male and 51.8% female.
Major employers in the region include:
Services and retail: Hannaford Supermarkets, Bangor Savings Bank, NexxLinx call center, Walmart.
Finance: The Bangor Savings Bank, founded in 1852, is Maine's largest independent bank; as of 2013, it had more than $2.8 billion in assets and the largest share of the 13-bank Bangor market.
Healthcare: Eastern Maine Medical Center, Acadia Hospital, St. Joseph's Healthcare, Community Health & Counseling Services.
Education: University of Maine, Husson University.
Manufacturing: General Electric.
Bangor is the largest market town, distribution center, transportation hub, and media center in a five-county area whose population tops 330,000 and which includes Penobscot, Piscataquis, Hancock, Aroostook, and Washington counties.
Bangor's City Council has approved a resolution opposing the sale of sweatshop-produced clothing in local stores.
Outdoor activities in the Bangor City Forest and other nearby parks, forests, and waterways include hiking, sailing, canoeing, hunting, fishing, skiing, and snowmobiling.
Bangor Raceway at the Bass Park Civic Center and Auditorium offers live, pari-mutuel harness racing from May through July and then briefly in the fall. Hollywood Slots, operated by Penn National Gaming, is a slot machine facility. In 2007, construction began on a $131-million casino complex in Bangor that houses, among other things, a gambling floor with about 1,000 slot machines, an off-track betting center, a seven-story hotel, and a four-level parking garage. In 2011, it was authorized to add table games.
Bangor Air National Guard Base is a United States Air National Guard base. Created in 1927 as a commercial field, it was taken over by the U.S. Army just before World War II. In 1968, the base was sold to the city of Bangor, Maine, to become Bangor International Airport but has since continued to host the 101st Air Refueling Wing, Maine Air National Guard, part of the Northeast Tanker Task Force.
In 1990, the USAF East Coast Radar System (ECRS) Operation Center was activated in Bangor with over 400 personnel. The center controlled the over-the-horizon radar's transmitter in Moscow, Maine, and its receiver in Columbia Falls, Maine. With the end of the Cold War, the facility's mission of guarding against a Soviet air attack became superfluous, and though it briefly turned its attention toward drug interdiction, the system was decommissioned in 1997 as the SSPARS installation (the successor to PAVE PAWS) at Massachusetts' Cape Cod Air Force Station fully took over.
One of the country's oldest fairs, the Bangor State Fair has been held annually for more than 150 years. Beginning on the last Friday of July, it features agricultural exhibits, rides, and live performances.
The Cross Insurance Center (which replaced the Bangor Auditorium in 2013).
Darling's Waterfront Pavilion.
The University of Maine Museum of Art and the Maine Discovery Museum, a major children's museum founded in 2001 in the former Freese's Department Store.
The Bangor Historical Society, in addition to its exhibit space, maintains the historic Thomas A. Hill House.
The Bangor Police Department has a police museum with some items dating to the 18th century.
Fire Museum at the former State Street Fire Station.
The Cole Land Transportation Museum.
The Bangor Symphony Orchestra.
The Penobscot Theatre Company.
The Collins Center for the Arts.
Many buildings and monuments are listed on the National Register of Historic Places, and the city has had a municipal Historic Preservation Commission since the early 1980s. Bangor contains many Greek Revival, Victorian, and Colonial Revival houses. Some notable architecture:
The Thomas Hill Standpipe, a shingle style structure.
The Hammond Street Congregational Church.
St. John's Catholic Church.
The Bangor House Hotel, now converted to apartments, is the only survivor among a series of "Palace Hotels" designed by Boston architect Isaiah Rogers, which were the first of their kind in the United States.
Mount Hope Cemetery, the country's second-oldest garden cemetery, designed by Charles G. Bryant.
Richard Upjohn, British-born architect and early promoter of the Gothic Revival style, received some of his first commissions in Bangor, including the Isaac Farrar House (1833), Samuel Farrar House (1836), Thomas A. Hill House (presently owned by the Bangor Historical Society), and St. John's Church (Episcopal, 1836–39).
The Bangor Public Library by Peabody and Stearns. (The library completed a multi-year renovation in late 2016, unveiling a more conventional look intended to appeal to a modern generation.)
The Eastern Maine Insane Hospital by John Calvin Stevens.
The William Arnold House of 1856, an Italianate style mansion and home to author Stephen King. Its wrought-iron fence with bat and spider web motif is King's own addition.
The bow-plate of the battleship USS Maine, whose destruction in Havana, Cuba, presaged the start of the Spanish–American War, survives on a granite memorial by Charles Eugene Tefft in Davenport Park.
Bangor has a large fiberglass-over-metal statue of mythical lumberman Paul Bunyan by Normand Martin (1959).
There are three large bronze statues in downtown Bangor by sculptor Charles Eugene Tefft of Brewer, including the Luther H. Peirce Memorial, commemorating the Penobscot River Log-Drivers; a statue of Hannibal Hamlin at Kenduskeag Mall; and an image of "Lady Victory" at Norumbega Parkway.
The abstract aluminum sculpture "Continuity of Community" (1969) on the Bangor Waterfront, formerly in West Market Square, is by the Castine sculptor Clark Battle Fitz-Gerald.
The U.S. Post Office in Bangor contains Yvonne Jacquette's 1980 three-part mural "Autumn Expansion".
A 1962 bronze commemorating the 2nd Maine Volunteer Infantry Regiment by Wisconsin sculptor Owen Vernon Shaffer stands at the entrance to Mt. Hope Cemetery.
From 2002 to 2016, Bangor was home to Little League International's Senior League World Series.
Bangor was home to two minor league baseball teams in the independent Northeast League: the Bangor Blue Ox (1996–97) and the Bangor Lumberjacks (2003–04). Even earlier, the Bangor Millionaires (1894–96) played in the New England League.
Vince McMahon promoted his first professional wrestling event in Bangor in 1979. In 1985, the WWC Universal Heavyweight Championship changed hands for the first time outside of Puerto Rico at an IWCCW show in Bangor.
The Penobscot is a salmon-fishing river; the Penobscot Salmon Club traditionally sent the first fish caught to the President of the United States. From 1999 to 2006, low fish stocks resulted in a ban on salmon fishing. Today, the wild salmon population (and the sport) is slowly recovering. The Penobscot River Restoration Project is working to help the fish population by removing some dams north of Bangor.
The Kenduskeag Stream Canoe Race, a white-water event which begins just north of Bangor in Kenduskeag, has been held since 1965.
Bangor is the county seat of Penobscot County.
Since 1931, Bangor has had a council-manager form of government. The nine-member City Council is a non-partisan body, with three councilors elected to three-year terms each year. The council members elect the Chair of the City Council, who is referred to informally as the mayor and serves in that role when there is a ceremonial need.
In 2007, Bangor was the first city in the U.S. to ban smoking in vehicles carrying passengers under the age of 18.
In 2012, Bangor's City Council passed an order in support of same-sex marriage in Maine. In 2013, the City of Bangor also signed an amicus brief to the United States Supreme Court calling for the federal Defense of Marriage Act to be struck down.
In 2008 Bangor's crime rate was the second-lowest among American metropolitan areas of comparable size. As of 2014 Bangor had the third highest rate of property crime in Maine.
The arrival of Irish immigrants from nearby Canada beginning in the 1830s, and their competition with locals for jobs, sparked a deadly sectarian riot in 1833 that lasted for days and had to be put down by militia. Realizing the need for a police force, the town incorporated as The City of Bangor in 1834. In the 1800s, sailors and loggers gave the city a reputation for roughness; their stomping grounds were known as the "Devil's Half Acre". The same name was also applied, at roughly the same time, to The Devil's Half-Acre, Pennsylvania.
Although Maine was the first "dry" state (i.e. the first to prohibit the sale of alcohol, with the passage of the "Maine law" in 1851), Bangor managed to remain "wet"; the city had 142 saloons in 1890. A look-the-other-way attitude by local police and politicians (sustained by a system of bribery in the form of ritualized fine-payments known as "The Bangor Plan") allowed Bangor to flout the nation's longest-standing state prohibition law. In 1913, the war of the "drys" (prohibitionists) on "wet" Bangor escalated when the Penobscot County Sheriff was impeached and removed by the Maine Legislature for not enforcing anti-liquor laws. His successor was asked to resign by the Governor the following year for the same reason, but refused. A third sheriff was removed by the Governor in 1918, but was promptly re-nominated by the Democratic Party. Prohibitionist Carrie Nation had been forcibly expelled from the Bangor House hotel in 1902 after causing a disturbance.
In October 1937, "public enemy" Al Brady and another member of his "Brady Gang" (Clarence Shaffer) were killed in the bloodiest shootout in Maine's history. FBI agents ambushed Brady, Shaffer, and James Dalhover on Bangor's Central Street after they had attempted to purchase a Thompson submachine gun from Dakin's Sporting Goods downtown. Brady is buried in the public section of Mount Hope Cemetery, on the north side of Mount Hope Avenue. Until recently, Brady's grave was unmarked. A group of schoolchildren erected a wooden marker over his grave in the 1990s, which was replaced by a more permanent stone in 2007.
Universities and colleges
The University of Maine (originally The Maine State College) was founded in Orono in 1868. It is part of the University of Maine System.
A vocationally-oriented University College of Bangor, associated with the University of Maine at Augusta.
Eastern Maine Community College was established in 1966 by the Maine State Legislature, under the authority of the State Board of Education, as Eastern Maine Vocational Technical Institute (EMVTI). The college moved in 1968 from temporary quarters in downtown Bangor to its present campus on Hogan Road. In 1986 the 112th Legislature created a quasi-independent system with a board of trustees to govern all six of Maine's VTIs, and in 1989 another law renamed these institutions to more accurately reflect their purpose and activities; EMVTI thus became Eastern Maine Technical College (EMTC). The name changed again in 2003, from "Technical" to "Community".
Husson University enrolls about 3,500 students a year in a variety of undergraduate and graduate programs.
Beal College is a small institution oriented toward career training.
The Bangor Theological Seminary, founded in 1814, was the only accredited graduate school of religion in northern New England. It ceased offering degrees in 2013.
The public Bangor High School. In 2013 it was named a National Silver Award winner by U.S. News & World Report's "America's Best High Schools".
The private John Bapst Memorial High School. In 2012 it was ranked in the top 20% nationally by the Washington Post High School Challenge.
Two public middle schools and one private middle school, as well as elementary schools.
Newspapers have been published in Bangor since 1815. Almost thirty dailies, weeklies, and monthlies had been launched there by the end of the American Civil War.
The Bangor Daily News was founded in the late 1800s and is one of the few remaining family-owned newspapers in the United States. The Maine Edge is also published from Bangor.
Bangor has more than a dozen radio stations and seven TV stations, including WLBZ, WABI-TV, WABI-DT2, WVII, WBGR-LD, and WFVX-LD. Maine Public Broadcasting Network, licensed to Orono, is the area's Public Broadcasting Service station. Radio stations in the city include WKIT-FM and WZON. WHSN is a non-commercial alternative rock station licensed to Bangor and run and operated by staff and students at the New England School of Communications at Husson University.
Bangor sits along interstates I-95 and I-395; U.S. highways US 1A, US 2, US Route 2A; and state routes SR 9, SR 15, SR 15 Business, SR 100, SR 202, and SR 222. Three major bridges connect the city to neighboring Brewer: Joshua Chamberlain Bridge (carrying US 1A), Penobscot River Bridge (carrying SR 15), and the Veterans Remembrance Bridge (carrying I-395).
Daily intercity bus service from Bangor proper is provided by two companies. Concord Coach Lines connects Bangor with Augusta, Portland, several towns in Maine's midcoast region, and Boston, Massachusetts. Cyr Bus Lines provides daily service to Caribou and several northern Maine towns along I-95 and Route 1. The area is also served by Greyhound, which operates out of Dysart's Truck Stop in neighboring Hermon. West's Bus Service provides service between Bangor and Calais.
In 2011, Acadian Lines ended bus service to Saint John, New Brunswick, because of low ticket sales.
The Community Connector system offers public transportation within Bangor and to adjacent towns such as Orono. Downeast Transportation provides service to Ellsworth and Bar Harbor.
Freight service is provided by Pan Am Railways. Passenger rail service was provided most recently by the New Brunswick Southern Railway, which in 1994 discontinued its route to Saint John, New Brunswick. Rail accidents:
1869: The Black Island Railroad Bridge north of Old Town, Maine collapsed under the weight of a Bangor and Piscataquis Railroad train, killing 3 crew and injuring 7–8 others.
1871: A bridge in Hampden collapsed under the weight of a Maine Central Railroad train approaching Bangor, killing 2 and injuring 50.
1898: A Maine Central Railroad train crashed near Orono, killing 2 and fatally injuring 4. The president of the railroad and his wife were also on board in a private car, but escaped injury.
1899: The collapse of a gangway between a train and a waiting ferry at Mount Desert sent 200 members of a Bangor excursion party into the water, drowning 20.
1911: A head-on collision of two trains north of Bangor, in Grindstone, killed 15, including 5 members of the Presque Isle Brass Band.
Bangor International Airport (IATA: BGR, ICAO: KBGR) is a joint civil-military public airport on the west side of the city. It has a single runway measuring 11,439 by 200 ft (3,487 by 61 m). Bangor is the last (or first) American airport along the great circle route between the U.S. East Coast and Europe, and in the 1970s and '80s it served as a refueling stop, until the development of longer-range jets in the 1990s.
Bangor is home to two large hospitals, the Eastern Maine Medical Center and the Catholic-affiliated St. Joseph Hospital. As of 2012, the Bangor Metropolitan Statistical Area (Penobscot County) ranked in the top fifth for physicians per capita nationally (74th of 381). It is also within the top ten in the Northeast (i.e. north of Pennsylvania) and the top five in New England. In 2013 U.S. News & World Report ranked the Eastern Maine Medical Center as the second best hospital in Maine.
In 1832 a cholera epidemic in Saint John, New Brunswick (part of the Second cholera pandemic) sent as many as eight hundred poor Irish immigrants walking to Bangor. This was the beginning of Maine's first substantial Irish-Catholic community. Competition with Yankees for jobs caused a riot and resulting fire in 1833. In 1849–50 cholera reached Bangor itself, killing 20–30 within the first week; 112 had died by October 1849, and the final death toll was 161. A late outbreak of the disease in 1854 killed seventeen others. The victims in most cases were poor Irish immigrants. In 1872 a smallpox epidemic closed local schools. The Spanish flu pandemic of 1918, which was global in scope, struck over a thousand Bangoreans and killed more than a hundred; it was the worst 'natural disaster' in the city's history since the cholera epidemic of 1849.
Design Analysis of the P-47 Thunderbolt
by Nicholas Mastrangelo
Reprinted from the January 1945 issue of Industrial Aviation (author's collection)
In the creation and conception of the Thunderbolt, her designer exploited every known advantage and adopted, first, the efficient single-engine, single-fuselage layout with its least interrupted wing span, concentrated weight and reduced frontal area; secondly, the Kartveli-designed airfoil section (Republic S-3), representing a culmination of the knowledge gained by years of high speed airplane design; and lastly, its supercharging system, which occupies a considerable volume of the fuselage structure and is designed to supply 52" Hg of manifold pressure up to stratosphere levels for the 2800 cu in engine.
The 2000 hp, 18-cylinder, air-cooled engine presented a great majority of the problems and one of the first was created by the demand for the perfect supercharging duct system, one that would offer the most efficient, least interrupted air flow; this requirement was neatly solved by an "unorthodox procedure"; that is, the duct system was designed first and then around it, the fuselage. The return of many P-47s to "home base" with gaping wounds in the fuselage is sufficient evidence that structural efficiency was not sacrificed by this procedure.
Since the conventional three-bladed propeller was not adaptable for the Thunderbolt installation, the four-bladed propeller was installed; the P-47, incidentally, served as the first test stand for this propeller.
Although the four-bladed propeller was an admirable solution to the power gearing of the engine, there remained the problem of landing gear height. If minimum ground clearance for the 12 ft diameter propeller had been maintained by a conventional landing gear, its suspension would have been too far outboard and would have necessitated either fewer guns or less ammunition; if the same firing power were to be maintained, the installation of the armament would have created an inefficient, cumbersome wing structure; hence, the first telescopic landing gear. Republic-designed, the landing gear is 9" shorter when retracted and therefore allows correspondingly farther inboard suspension. In this manner, both heavy armament installation and efficient wing structure are retained.
The installation of suitable fuel quantities was another of the major problems resulting from the adoption of the single fuselage and efficient turbo duct design. Without detailing the development of the expansion of the P-47's internal fuel load, it may be said that the latest Thunderbolt, details of which are not yet released, will have the greatest combat range of any pursuit airplane.
The need for ingenuity did not end with the solution to these problems, however, for the Thunderbolt's speed and natural habitat (above 30,000 ft) presented additional challenges. Ailerons "snatched and froze"; canopies could not be opened and control loads became excessive. These and other high-speed effects, which have now lost some of their mystery, were experienced during the P-47's early high speed runs. As solutions, Republic equipped the Thunderbolt with blunt-nosed ailerons, jettisonable canopies and all-metal control surfaces, and the P-47 was the first airplane to reduce rudder pedal loads by use of balanced trim tabs.
The fuselage of the P-47 is of semi-monocoque, all-metal, stressed-skin construction, composed of transverse bulkheads and longitudinal stringers. The main or forward structure is divided into top and lower halves to station 302½, while the tail "cone" or fuselage aft section, comprising the fuselage aft of station 302½, is constructed as a unit. The upper and lower main fuselage halves are bolted at reinforcing angles built into the parting surfaces of the structure, and joggled extensions on the upper half frames are spliced by riveting to the lower half frames. Assembly of the fuselage is completed by joining the aft fuselage section to the main section at station 302½. Here, the facing frames of the forward and aft sections are riveted and bolted, while the skin extension of the aft section is butt jointed to the main section skin and riveted to the last frame of the forward section.
Major structural units of the fuselage, two wing supporting bulkheads, are contained in the main lower half section. Each of the two bulkheads is constructed around a pair of 3½" wide E section steel beams which extend the width of the bulkheads and serve as cross-ties. The wing support hinges, which are forgings of X4340 steel, extend into each end of the cross-tie beams and are secured to the ties by 3/8" bolts.
The forward, or firewall, bulkhead is faced with stainless steel sheet over alclad 24-ST sheet of .091 gauge, and the aft side is faced with similar flat sheet, reinforced by corrugated sheet and channel section, all of alclad 24-ST.
The aft bulkhead incorporates the aft wing hinges. Except for the absence of stainless steel sheet facing and corrugated sheet, the structure is similar to that of the forward bulkhead.
The lower outboard ends of both wing hinge bulkheads support trapezoidal shaped forgings; longeron components are riveted to these forgings, thus establishing the foundation for the remainder of the lower half fuselage structure. This lower longeron extends the entire length of the forward fuselage section and provides support for the remaining transverse structural elements. Stringers are located at suitable intervals from the longeron to the angle-reinforced parting surface of the lower half structure. Lower half frame segments are riveted to the longeron and stringers; flush-riveted skin of various gauges, with additional reinforcing sheets at the wing hinge fitting openings and other highly stressed areas, then completes the structure of the main lower half fuselage section.
The upper half forward fuselage section, though not as rugged as the lower half section, is constructed similarly to it. The upper half forward bulkhead has, due to the absence of the 3½" steel channel, less depth than the matching lower half and consequently is stepped back so as to present a flush surface on the aft face of the firewall. A corrugated sheet faced on either side extends to the upper engine mount cross-tie, which is an angle of 24-ST. Since the structure above the engine mount cross-tie is the low stress area of the bulkhead, it is of single sheet thickness. Trapezoidal shaped forgings, similar to those employed in the bulkheads of the lower half section, are bolted to the engine mount cross-tie to serve as the structural basis for the upper longeron.
The upper fuselage half extension of the aft wing-hinge-support-bulkhead consists of .064" flanged segments, extending to the upper longeron, spliced with .064" splice plates to both faces of the bulkhead.
Frame 180, representing the aft partition of the cockpit, unlike the wing hinge support bulkheads where the conditions are reversed, has rugged structure in the upper half section and flanged frame segment support in the lower half section. The additional ruggedness is due to the fact that this frame supports the aft armor plating.
The remaining upper-half frames aft of 180 are flanged semicircular segments tied by stringers.
The aft fuselage section, since it is constructed as a unit, employs complete frames and is tied by stringers in a fashion similar to the forward half structure. The tail wheel supporting frame is subjected to heavy landing loads and is therefore, considerably reinforced by vertical and horizontal extrusions and webs. This frame is also braced at the bottom by a box-like structure extending to frame 302½. A transverse web is riveted along the upper area of the last three frames of the aft section, forming the support structure for the empennage.
The wing of the P-47 is a full cantilever type employing 2 main spars and stressed skin and multicellular construction. It has a span of 41 ft, root chord of 109½", mean aerodynamic chord of 87.46", a 5.61 aspect ratio, an angle of incidence of +1° and a top surface dihedral of 4°. Angle of incidence and amount of dihedral are fixed.
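As a cross-check on these figures (a worked calculation, not part of the original article), the standard definition of aspect ratio ties the quoted span and aspect ratio to the wing's projected area:

\[
S = \frac{b^2}{AR} = \frac{(41\ \text{ft})^2}{5.61} \approx 300\ \text{sq ft},
\]

which agrees with the commonly quoted P-47 wing area of about 300 sq ft.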
Wing covering is butt fitted, flush riveted, stressed skin type and is reinforced by extruded angle stringers. The cut-out skin area, which includes inspection, access and maintenance doors as well as the larger cut-out areas of the landing gear wells and gun and ammunition bays, is about 16% of the total main panel area; the high percentage is indicative of the rigidity inherent in the prime structure of the wing.
Main members of the wing are the two main spars which support attachment of the wing to the fuselage and three auxiliary spars, one each supporting the aileron and flap and the other supporting the landing gear.
Main spars are constructed of E-shaped cap strips riveted to webs of varying thickness from a minimum of .032" for the outboard web of both spars to a maximum of .250" thickness for the inboard web of the forward main spar. Both main spars are reinforced at suitable intervals by extruded angles which also serve as anchors for frame installations.
Inboard ends of the main spars of each wing are fitted with a pair of wing hinges which are pinned to the mating fuselage hinges by split bushings; tapered bolts expand these bushings to a tight fit, thus securing positive attachment.
The aft auxiliary spars support the movable surfaces and are constructed of angle cap strips and webs of .072" to .025" thickness. The landing gear auxiliary spar, since it is subjected to landing loads, is of somewhat heavier construction, namely a .091" web, and is reinforced similarly to the main spars.
Flanged ribs of alclad 24-ST are secured between the spars at the angle stiffeners. Ribs vary from .051" to .032", with the exception of the root chord rib and the gun bay partitioning ribs, which are .064".
Nose and trailing edge ribs are flanged and are also of alclad 24-ST.
Ailerons of the P-47, representing about 11.4% of the total projected wing area, are Frise type, aerodynamically and dynamically balanced and are 16 in-lb overbalanced. They are hinged to steel forgings attached to the outboard auxiliary wing spar and are controlled by a system of push-pull rods; an all metal controllable trim tab is provided in the left aileron.
Flanged nose and tail ribs of 24-ST are attached in staggered fashion to a main spar, and alclad 24-ST sheet is flush riveted to spar and ribs.
Forged aluminum alloy hinges of the aileron are attached to the outboard auxiliary wing spar.
Landing flaps of the P-47, representing 13% of the total projected wing area, are NACA slotted trailing edge type. They are hydraulically operated, receiving pressure and fluid from the hydraulic system and during extension move first aft and then down and during retraction move first up and then forward; this movement, actuated by three trapezoidal linkage hinges, insures perfect positioning of the flap against the main panel, thereby maintaining the proper airfoil section. The linkage hinges are synchronized by attachment to a torque tube and the assembly is attached to the inboard auxiliary wing spar. Independent units are synchronized by hydraulic pressure. Flaps are pinned to the flap linkage assembly hinges with standard bolts.
The double cambered external surface of the flap is alclad 24-ST riveted to flanged nose and tail ribs which attach to a spar of 24-ST in symmetrical order; additional lightened reinforcing nose ribs are provided between each pair of the flanged nose ribs.
Compressible Recovery Flaps
Late P-47 models have incorporated flaps for the purpose of aiding recovery from dives at compressibility speeds. These surfaces are operated by two electric, reversible, intermittent motors synchronized by flexible shafting. Magnetic brake and clutch assemblies are incorporated to prevent overtravel, and switches limit the flap extension to 22½° so as to hold "g's" to a safe value during pull-outs.
The compressible recovery flaps are .188" flat sheets of 24-ST and are hinged at the landing gear auxiliary spar, located just forward of the landing flaps. In the retracted position, they are flush with the lower wing surface contour.
The empennage of the Thunderbolt is a full cantilever structure with a total projected area of 81.45 sq ft.
All surfaces are metal covered and the elevators and rudder are equipped with controllable trim tabs of all metal construction.
The fin and the horizontal stabilizer assembly of the P-47 are of similar construction, both assemblies employing flanged ribs between a forward and aft spar, flanged nose ribs, and 24-ST alclad skin.
Hinges for the tail surfaces and chain-actuated worm and screw units for trim tab operation are attached to the aft spars of both assemblies.
Fin spars straddle the horizontal stabilizer assembly spars and at this junction are bolted to common splice plates thus forming a complete stabilizer unit. To install the complete stabilizer unit to the fuselage, the stabilizer forward spar is bolted to fittings on the horizontal web of the aft fuselage section and the aft spar is bolted to a plate fastened to the last frame of the fuselage.
The rudder is Handley-Page type, having static and dynamic balance. The dynamic balance coefficient is less than zero and the static balance is 25 in-lb underbalanced. The rudder trim tab provides dynamic balance as well as selective trim.
Like all other surfaces of the P-47, this control surface is covered with alclad aluminum alloy and employs a main spar and flanged ribs of 24-ST alloy.
Elevators are of Handley-Page type with a dynamic balance coefficient of zero or less and static balance of 10 in-lb underbalanced. The elevators of the Thunderbolt are manufactured singly and are assembled into a unit by splicing torque tubes extending from the inboard nose sections of the elevators.
The entire surfaces of the elevators are 24-ST alloy covered and constructed of a spar and stamped, flanged ribs; the torque tubes are secured to the first three inboard nose ribs of each elevator.
Elevators are hinged to the rear stabilizer spar and a torque-tube-pivot is provided by roller bearings staked in hinge brackets which are attached to the rear fuselage frame. The last control rod is linked to the elevator at a bracket that is part of the torque tube splice sleeve.
The power plant of the P-47 is a Pratt & Whitney R-2800, air-cooled, radial, twin row, 2000 hp engine. It is 72¾" long, 52½" in diameter and weighs more than a ton. It has a bore of 5.75", stroke of 6.00" and a displacement of 2804 cu in; the compression ratio is 6.7:1 and the propeller drive ratio .500:1.
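As a worked check (not part of the original article), the quoted displacement follows directly from the bore, stroke, and 18-cylinder layout:

\[
V = 18 \times \frac{\pi}{4} \times (5.75\ \text{in})^2 \times 6.00\ \text{in} \approx 2804\ \text{cu in}.
\]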
The engine is attached by Lord mounts and drives either electric or hydraulic controlled constant-speed 4-bladed propeller assemblies.
The NACA type cowling consists of a group of four quick detachable panels fastened to supporting rings attached to the rocker box covers of the engine. Hydraulically operated flaps for controlling exits of cooling air are provided at the upper rear section of the secondary engine cowling.
Normal fuel load is carried in two self-sealing fuel tanks fitted with baffles to minimize surge; a main tank is installed between the wing hinge supporting bulkheads and an auxiliary tank is installed directly aft of the rear wing hinge supporting bulkhead. To prevent vapor lock at high altitude, both tanks are equipped with electrically operated booster pumps which are of sufficient capacity to insure adequate fuel pressure and flow in the event of failure of a type G-9 engine-driven fuel pump.
External fuel is carried in combat or ferrying tanks attached to bomb shackles in the belly and/or wing. These tanks are pressurized by the exhaust of the vacuum pump.
Lubricating oil is carried in a hopper-type magnesium tank of 28.6 U.S. gal capacity, strapped to supports on the engine mount. A pendulum is incorporated in the tank to insure adequate lubrication for inverted flights of limited duration.
Oil temperature is regulated by two radiators mounted below the engine; surge valves permit cold oil to bypass the radiators. Each radiator has an air scoop with an outlet door controlled by an electrically operated motor; the doors operate simultaneously from the one motor.
The supercharging system of the P-47 airplane is designed to supply 52" Hg manifold pressure (considerably more for War Emergency Power) to the engine up to stratosphere levels.
The exhaust driven turbine is approximately 22 ft aft of the propeller and is supported by a ring attached to the lower longerons. The exhaust gases are collected by two rings, one each for the left and right bank of cylinders and directed to the nozzle box of the turbine through shrouded exhaust piping along either side of the airplane beneath the fuselage. Spent gas escapes through a stainless steel flight hood which extends below the fuselage.
Ram air is piped through ducts under the fuselage extending from the primary cowling to the impeller-inlet of the turbine; after supercharging, the air is scooped to the intercooler then piped along either side of the fuselage and directed to a single duct above the carburetor.
A considerable volume of the "ram" is conducted to the intercooler in order to lower the temperature of supercharged air. Electric-motor-controlled doors of the intercooler exit ducts on both sides of the fuselage vary the flow of cooling air through the intercooler.
Supercharging is controlled to maintain the manifold pressure value selected by the pilot by means of an oil operated supercharger regulator. The regulator, through linkage, varies the position of waste gates in the exhaust pipes just aft of the collector rings and thus controls the volume of exhaust gases directed to the nozzle box of the turbine. The position of a piston in the regulator is balanced by exhaust pressure and a compression spring; the spring is mechanically loaded to correspond to the desired exhaust pressure value by a supercharger lever in the cockpit. When the exhaust pressure varies from the selected value, the piston moves in the direction of the greater pressure and opens a port admitting pressurized lubricating oil to that chamber of the regulator which will effect the movement of the waste gates in the proper direction to balance the piston at the neutral position.
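The regulator is thus a simple feedback device: the piston senses the imbalance between actual exhaust pressure and the spring's set point, and the oil servo moves the waste gates until that imbalance disappears. The sketch below illustrates the principle numerically; the set point, gains, and the assumed relation between gate position and exhaust pressure are invented for illustration and are not Republic's figures.

```python
# Minimal sketch of the waste-gate feedback principle described above.
# Units and numbers are illustrative only; the control idea is from the text.
target_p = 30.0   # exhaust pressure corresponding to the pilot's lever setting
p = 20.0          # current exhaust back pressure at the collector rings
gate = 0.50       # waste-gate opening: 0 = all exhaust to turbine, 1 = all dumped

for _ in range(40):
    error = p - target_p                             # net force unbalancing the piston
    gate = min(1.0, max(0.0, gate + 0.02 * error))   # oil servo nudges the gates
    p += 0.5 * ((1.0 - gate) * 45.0 - p)             # less dumping -> higher back pressure

print(f"settled: pressure {p:.1f}, gate opening {gate:.2f}")  # ~30.0, ~0.33
```

The gate settles where the dumped-exhaust fraction makes the back pressure equal the set point, just as the oil-balanced piston comes to rest at its neutral position.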
Interconnected Engine Controls
In order to minimize the pilot's attention to engine controls, the propeller, boost, and throttle levers of the P-47 may be interconnected and moved as a single lever, with power and rpm correlated through the full range of the control quadrant. Correlation is mechanical: the propeller lever is correlated by the use of a cam, while throttle and boost levers are correlated by adjustment of conventional push-pull rods.
Controls may be disconnected by releasing a simple spring loaded clip on the throttle lever.
Water Injection Used
To meet the demands for a higher emergency rating and to safeguard the engine from detonation when operated considerably above military power, water injection has been applied to the Thunderbolt's power plant. Water is pumped from a 30 gal tank strapped to the firewall and is admitted through a water regulator by operation of a solenoid valve. Pressurized water beyond the regulator resets the carburetor mixture so that the fuel-air ratio is decreased, thereby increasing power without a corresponding rise in manifold pressure. The higher increase in power, however, is developed by high manifold pressure accomplished through a boost reset mechanism also actuated by water pressure; the reset overrides the supercharger regulator setting of the waste gates, therefore permitting the turbo to develop the higher rpms required to maintain the War Emergency Rating manifold pressure.
Main Landing Gear
The full cantilever, hydraulically-controlled main landing gear of this airplane consists of independent right and left hand units of air-oil combination shock strut assemblies and extra high pressure cast magnesium wheels of drop center rim type.
A box-like structure of four cast magnesium plates, two of which serve as trunnions, supports each shock strut assembly. The box assembly fits into a well, formed by rib 86, rib 104, and the landing gear auxiliary spar, and is supported by four bolts through each of the ribs and adjacent plates.
Before the gear starts its retraction cycle, hydraulic pressure is applied to withdraw a nitrided steel downlocking pin from a housing in the downlocking arm of the strut; a mechanically operated sequence valve is then opened to admit pressure to the landing gear retraction cylinder.
During retraction, the shock strut piston is telescoped to the bottoming position so that at the completion of the "up" cycle the gear will fit into a well which is 9" shorter than would be necessary for conventional "up-positioning" of the gear. The telescoping is accomplished by the "geometry" of the mechanism, which employs a shrinkage strut or rod; one end of this rod is attached to the shock strut piston and the other pivoted about an axis outboard of and below the landing gear pivot axis. Geometrically, the shock strut and shrinkage strut can be considered as radial elements from different radii terminating at the lower end of the shock strut piston housing and the lower end of the piston respectively. The radii (landing gear and shrinkage strut pivot axes) are spaced so that the difference in the loci at approximately 0° (landing gear down) is almost zero and the difference between loci at approximately −90° (landing gear up) is about 9".
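The two-radii description can be made concrete with a small numerical model: the strut tip must stay a fixed rod length from the shrinkage-strut pivot, so rotating the gear forces the piston to telescope. The dimensions below are assumptions chosen for illustration (picked so the shrink comes out near the 9" quoted above); only the geometric principle comes from the article.

```python
import numpy as np

# Illustrative dimensions in inches (assumed, not from the article).
L = 60.0         # extended shock-strut length with the gear down
a, b = 6.0, 3.3  # shrinkage-rod pivot: outboard (a) and below (b) the gear pivot

# Rod length is fixed by the gear-down position (strut pointing straight down).
R = np.hypot(a, L - b)

def strut_length(theta_deg):
    """Strut length forced by the fixed-length rod at gear rotation theta.

    theta = 0 is gear down (strut pointing down); theta = 90 is gear up
    (strut folded inboard). Solves |l*u(theta) - pivot| = R for l.
    """
    t = np.radians(theta_deg)
    d = -a * np.sin(t) + b * np.cos(t)   # u(theta) dotted with the pivot offset
    return d + np.sqrt(d * d - (a * a + b * b) + R * R)

for th in (0, 30, 60, 90):
    l = strut_length(th)
    print(f"{th:3d} deg: strut {l:6.2f} in, shrink {L - l:5.2f} in")
# shrink grows smoothly from 0.00 in at 0 deg to about 9.1 in at 90 deg
```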
As the piston telescopes, the air in the air-oil chamber of the shock strut is displaced by the oil and is transferred to an auxiliary air chamber above the air-oil chamber in the strut. An air valve, actuated by a push rod following a cam track above the strut, opens the auxiliary air chamber.
The four .50 cal machine guns in each wing of the P-47 are secured in the gun bays to Republic-designed mounts. Front mounts are conical-shaped and the guns are locked to these by rotating the locking ring of the gun bracket assembly; the rear-mounts are locked by simple levers which are part of the rear mount assemblies.
Ammunition of more than 350 rounds per gun may be carried in the bays just outboard of the gun bays.
Bombs and rocket tubes are supported in conventional shackles under the wing.
The pilot is protected from enemy gun fire by face hardened 3/8" armor plate located in the forward and aft ends of the cockpit. The area above the front armor plate is protected by 1½" bullet resistant glass.
All material not specifically credited is Copyright © by Randy Wilson.
All rights reserved.
Summary Report for:
19-2041.03 - Industrial Ecologists
Apply principles and processes of natural ecosystems to develop models for efficient industrial systems. Use knowledge from the physical and social sciences to maximize effective use of natural resources in the production and use of goods and services. Examine societal issues and their relationship with both technical systems and the environment.
Sample of reported job titles: Ecologist, Environmental Consultant, Environmental Protection Agency Counselor, Research Scientist, Researcher
- Identify environmental impacts caused by products, systems, or projects.
- Examine local, regional, or global use and flow of materials or energy in industrial production processes.
- Identify or develop strategies or methods to minimize the environmental impact of industrial production processes.
- Prepare technical and research reports, such as environmental impact reports, and communicate the results to individuals in industry, government, or the general public.
- Analyze changes designed to improve the environmental performance of complex systems and avoid unintended negative consequences.
- Review research literature to maintain knowledge on topics related to industrial ecology, such as physical science, technology, economy, and public policy.
- Recommend methods to protect the environment or minimize environmental damage from industrial production practices.
- Build and maintain databases of information about energy alternatives, pollutants, natural environments, industrial processes, and other information related to ecological change.
- Identify or compare the component parts or relationships between the parts of industrial, social, and natural systems.
- Redesign linear, or open-loop, systems into cyclical, or closed-loop, systems so that waste products become inputs for new processes, modeling natural ecosystems.
- Conduct environmental sustainability assessments, using material flow analysis (MFA) or substance flow analysis (SFA) techniques.
- Identify sustainable alternatives to industrial or waste-management practices.
- Review industrial practices, such as the methods and materials used in construction or production, to identify potential liabilities and environmental hazards.
- Translate the theories of industrial ecology into eco-industrial practices.
- Prepare plans to manage renewable resources.
- Examine societal issues and their relationship with both technical systems and the environment.
- Plan or conduct studies of the ecological implications of historic or projected changes in industrial processes or development.
- Provide industrial managers with technical materials on environmental issues, regulatory guidelines, or compliance actions.
- Carry out environmental assessments in accordance with applicable standards, regulations, or laws.
- Plan or conduct field research on topics such as industrial production, industrial ecology, population ecology, and environmental production or sustainability.
- Research sources of pollution to determine environmental impact or to develop methods of pollution abatement or control.
- Forecast future status or condition of ecosystems, based on changing industrial practices or environmental conditions.
- Perform analyses to determine how human behavior can affect, and be affected by, changes in the environment.
- Promote use of environmental management systems (EMS) to reduce waste or to improve environmentally sound use of natural resources.
- Monitor the environmental impact of development activities, pollution, or land degradation.
- Develop alternative energy investment scenarios to compare economic and environmental costs and benefits.
- Investigate the impact of changed land management or land use practices on ecosystems.
- Research environmental effects of land and water use to determine methods of improving environmental conditions or increasing outputs, such as crop yields.
- Perform environmentally extended input-output (EE I-O) analyses (a minimal numerical sketch of this technique follows this list).
- Apply new or existing research about natural ecosystems to understand economic and industrial systems in the context of the environment.
- Investigate accidents affecting the environment to assess ecological impact.
- Create complex and dynamic mathematical models of population, community, or ecological systems.
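To make the EE I-O task above concrete, here is a minimal sketch of the Leontief calculation that underlies such analyses. The two-sector coefficient matrix, final demand vector, and emission intensities are invented for illustration; only the method is standard: total output x = (I − A)⁻¹ · y and environmental burden e = F · x.

```python
import numpy as np

# Technical coefficient matrix A: inter-industry inputs per unit of output.
# Two hypothetical sectors; every number here is illustrative, not real data.
A = np.array([[0.10, 0.30],
              [0.25, 0.05]])

# Final demand for each sector's output, in monetary units.
y = np.array([100.0, 50.0])

# Leontief inverse: total (direct + indirect) output needed to satisfy y.
x = np.linalg.solve(np.eye(2) - A, y)

# Environmental intensity per unit of output (e.g., kg CO2 per dollar, assumed).
F = np.array([0.8, 0.3])

print("total output by sector:", np.round(x, 1))            # ~[141.0, 89.7]
print("total embodied emissions:", round(float(F @ x), 1))  # ~139.7
```

The same pattern scales to full national tables: A and F come from published input-output accounts, and the solve step replaces an explicit matrix inversion for numerical stability.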
- Analytical or scientific software — Economic Input-Output Life Cycle Assessment EIO-LCA; PRe Consultants SimaPro; StataCorp Stata; Substance Flow Analysis STAN
- Computer aided design CAD software — Autodesk AutoCAD
- Data base user interface and query software — Online databases
- Document management software — Adobe Systems Adobe Acrobat
- Electronic mail software — Email software
- Enterprise resource planning ERP software — SAP
- Graphics or photo imaging software — Adobe Systems Adobe Illustrator; Adobe Systems Adobe Photoshop; Microsoft Visio
- Internet browser software — Web browser software
- Map creation software — ESRI ArcGIS software
- Object or component oriented development software — Python
- Office suite software — Microsoft Office
- Presentation software — Microsoft PowerPoint
- Project management software — Microsoft SharePoint
- Spreadsheet software — Microsoft Excel
- English Language — Knowledge of the structure and content of the English language including the meaning and spelling of words, rules of composition, and grammar.
- Engineering and Technology — Knowledge of the practical application of engineering science and technology. This includes applying principles, techniques, procedures, and equipment to the design and production of various goods and services.
- Mathematics — Knowledge of arithmetic, algebra, geometry, calculus, statistics, and their applications.
- Education and Training — Knowledge of principles and methods for curriculum and training design, teaching and instruction for individuals and groups, and the measurement of training effects.
- Chemistry — Knowledge of the chemical composition, structure, and properties of substances and of the chemical processes and transformations that they undergo. This includes uses of chemicals and their interactions, danger signs, production techniques, and disposal methods.
- Production and Processing — Knowledge of raw materials, production processes, quality control, costs, and other techniques for maximizing the effective manufacture and distribution of goods.
- Administration and Management — Knowledge of business and management principles involved in strategic planning, resource allocation, human resources modeling, leadership technique, production methods, and coordination of people and resources.
- Biology — Knowledge of plant and animal organisms, their tissues, cells, functions, interdependencies, and interactions with each other and the environment.
- Law and Government — Knowledge of laws, legal codes, court procedures, precedents, government regulations, executive orders, agency rules, and the democratic political process.
- Computers and Electronics — Knowledge of circuit boards, processors, chips, electronic equipment, and computer hardware and software, including applications and programming.
- Reading Comprehension — Understanding written sentences and paragraphs in work related documents.
- Critical Thinking — Using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions or approaches to problems.
- Writing — Communicating effectively in writing as appropriate for the needs of the audience.
- Active Listening — Giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate, and not interrupting at inappropriate times.
- Judgment and Decision Making — Considering the relative costs and benefits of potential actions to choose the most appropriate one.
- Speaking — Talking to others to convey information effectively.
- Complex Problem Solving — Identifying complex problems and reviewing related information to develop and evaluate options and implement solutions.
- Active Learning — Understanding the implications of new information for both current and future problem-solving and decision-making.
- Science — Using scientific rules and methods to solve problems.
- Systems Analysis — Determining how a system should work and how changes in conditions, operations, and the environment will affect outcomes.
- Systems Evaluation — Identifying measures or indicators of system performance and the actions needed to improve or correct performance, relative to the goals of the system.
- Mathematics — Using mathematics to solve problems.
- Coordination — Adjusting actions in relation to others' actions.
- Monitoring — Monitoring/Assessing performance of yourself, other individuals, or organizations to make improvements or take corrective action.
- Social Perceptiveness — Being aware of others' reactions and understanding why they react as they do.
- Instructing — Teaching others how to do something.
- Learning Strategies — Selecting and using training/instructional methods and procedures appropriate for the situation when learning or teaching new things.
- Deductive Reasoning — The ability to apply general rules to specific problems to produce answers that make sense.
- Inductive Reasoning — The ability to combine pieces of information to form general rules or conclusions (includes finding a relationship among seemingly unrelated events).
- Written Expression — The ability to communicate information and ideas in writing so others will understand.
- Oral Comprehension — The ability to listen to and understand information and ideas presented through spoken words and sentences.
- Oral Expression — The ability to communicate information and ideas in speaking so others will understand.
- Problem Sensitivity — The ability to tell when something is wrong or is likely to go wrong. It does not involve solving the problem, only recognizing there is a problem.
- Written Comprehension — The ability to read and understand information and ideas presented in writing.
- Near Vision — The ability to see details at close range (within a few feet of the observer).
- Speech Clarity — The ability to speak clearly so others can understand you.
- Information Ordering — The ability to arrange things or actions in a certain order or pattern according to a specific rule or set of rules (e.g., patterns of numbers, letters, words, pictures, mathematical operations).
- Mathematical Reasoning — The ability to choose the right mathematical methods or formulas to solve a problem.
- Speech Recognition — The ability to identify and understand the speech of another person.
- Category Flexibility — The ability to generate or use different sets of rules for combining or grouping things in different ways.
- Flexibility of Closure — The ability to identify or detect a known pattern (a figure, object, word, or sound) that is hidden in other distracting material.
- Fluency of Ideas — The ability to come up with a number of ideas about a topic (the number of ideas is important, not their quality, correctness, or creativity).
- Number Facility — The ability to add, subtract, multiply, or divide quickly and correctly.
- Originality — The ability to come up with unusual or clever ideas about a given topic or situation, or to develop creative ways to solve a problem.
- Far Vision — The ability to see details at a distance.
- Getting Information — Observing, receiving, and otherwise obtaining information from all relevant sources.
- Analyzing Data or Information — Identifying the underlying principles, reasons, or facts of information by breaking down information or data into separate parts.
- Updating and Using Relevant Knowledge — Keeping up-to-date technically and applying new knowledge to your job.
- Processing Information — Compiling, coding, categorizing, calculating, tabulating, auditing, or verifying information or data.
- Interacting With Computers — Using computers and computer systems (including hardware and software) to program, write software, set up functions, enter data, or process information.
- Making Decisions and Solving Problems — Analyzing information and evaluating results to choose the best solution and solve problems.
- Estimating the Quantifiable Characteristics of Products, Events, or Information — Estimating sizes, distances, and quantities; or determining time, costs, resources, or materials needed to perform a work activity.
- Identifying Objects, Actions, and Events — Identifying information by categorizing, estimating, recognizing differences or similarities, and detecting changes in circumstances or events.
- Interpreting the Meaning of Information for Others — Translating or explaining what information means and how it can be used.
- Thinking Creatively — Developing, designing, or creating new applications, ideas, relationships, systems, or products, including artistic contributions.
- Communicating with Supervisors, Peers, or Subordinates — Providing information to supervisors, co-workers, and subordinates by telephone, in written form, e-mail, or in person.
- Communicating with Persons Outside Organization — Communicating with people outside the organization, representing the organization to customers, the public, government, and other external sources. This information can be exchanged in person, in writing, or by telephone or e-mail.
- Provide Consultation and Advice to Others — Providing guidance and expert advice to management or other groups on technical, systems-, or process-related topics.
- Developing Objectives and Strategies — Establishing long-range objectives and specifying the strategies and actions to achieve them.
- Documenting/Recording Information — Entering, transcribing, recording, storing, or maintaining information in written or electronic/magnetic form.
- Training and Teaching Others — Identifying the educational needs of others, developing formal educational or training programs or classes, and teaching or instructing others.
- Establishing and Maintaining Interpersonal Relationships — Developing constructive and cooperative working relationships with others, and maintaining them over time.
- Evaluating Information to Determine Compliance with Standards — Using relevant information and individual judgment to determine whether events or processes comply with laws, regulations, or standards.
- Developing and Building Teams — Encouraging and building mutual trust, respect, and cooperation among team members.
- Monitor Processes, Materials, or Surroundings — Monitoring and reviewing information from materials, events, or the environment, to detect or assess problems.
- Organizing, Planning, and Prioritizing Work — Developing specific goals and plans to prioritize, organize, and accomplish your work.
- Judging the Qualities of Things, Services, or People — Assessing the value, importance, or quality of things or people.
- Coordinating the Work and Activities of Others — Getting members of a group to work together to accomplish tasks.
- Coaching and Developing Others — Identifying the developmental needs of others and coaching, mentoring, or otherwise helping others to improve their knowledge or skills.
- Guiding, Directing, and Motivating Subordinates — Providing guidance and direction to subordinates, including setting performance standards and monitoring performance.
Detailed Work Activities
- Research environmental impact of industrial or development activities.
- Develop sustainable industrial or development methods.
- Identify sustainable business practices.
- Communicate results of environmental research.
- Prepare research or technical reports on environmental issues.
- Review professional literature to maintain professional knowledge.
- Advise others about environmental management or conservation.
- Develop technical or scientific databases.
- Research impacts of environmental conservation initiatives.
- Apply knowledge or research findings to address environmental problems.
- Develop plans to manage natural or renewable resources.
- Conduct research on social issues.
- Appraise environmental impact of regulations or policies.
- Plan environmental research.
- Prepare information or documentation related to legal or regulatory matters.
- Develop mathematical models of environmental conditions.
- Promote environmental sustainability or conservation initiatives.
- Monitor environmental impacts of production or development activities.
- Develop environmental sustainability plans or projects.
- Analyze environmental data.
- Plan natural resources conservation or restoration programs.
- Conduct research of processes in natural or industrial ecosystems.
- Electronic Mail — 96% responded “Every day.”
- Freedom to Make Decisions — 67% responded “A lot of freedom.”
- Duration of Typical Work Week — 71% responded “More than 40 hours.”
- Face-to-Face Discussions — 52% responded “Every day.”
- Telephone — 48% responded “Every day.”
- Structured versus Unstructured Work — 54% responded “A lot of freedom.”
- Work With Work Group or Team — 48% responded “Extremely important.”
- Importance of Being Exact or Accurate — 42% responded “Extremely important.”
- Spend Time Sitting — 42% responded “More than half the time.”
- Letters and Memos — 35% responded “Every day.”
- Indoors, Environmentally Controlled — 43% responded “Every day.”
- Contact With Others — 38% responded “Contact with others most of the time.”
- Impact of Decisions on Co-workers or Company Results — 42% responded “Moderate results.”
- Time Pressure — 50% responded “Once a month or more but not every week.”
- Coordinate or Lead Others — 39% responded “Important.”
- Level of Competition — 35% responded “Highly competitive.”
- Responsibility for Outcomes and Results — 52% responded “Moderate responsibility.”
- Frequency of Decision Making — 35% responded “Once a year or more but not every month.”
- Title — Job Zone Five: Extensive Preparation Needed
- Education — Most of these occupations require graduate school. For example, they may require a master's degree, and some require a Ph.D., M.D., or J.D. (law degree).
- Related Experience — Extensive skill, knowledge, and experience are needed for these occupations. Many require more than five years of experience. For example, surgeons must complete four years of college and an additional five to seven years of specialized medical training to be able to do their job.
- Job Training — Employees may need some on-the-job training, but most of these occupations assume that the person will already have the required skills, knowledge, work-related experience, and/or training.
- Job Zone Examples — These occupations often involve coordinating, training, supervising, or managing the activities of others to accomplish goals. Very advanced communication and organizational skills are required. Examples include pharmacists, lawyers, astronomers, biologists, clergy, neurologists, and veterinarians.
- SVP Range — 8.0 and above
Interest code: IE
- Investigative — Investigative occupations frequently involve working with ideas, and require an extensive amount of thinking. These occupations can involve searching for facts and figuring out problems mentally.
- Enterprising — Enterprising occupations frequently involve starting up and carrying out projects. These occupations can involve leading people and making many decisions. Sometimes they require risk taking and often deal with business.
- Analytical Thinking — Job requires analyzing information and using logic to address work-related issues and problems.
- Integrity — Job requires being honest and ethical.
- Attention to Detail — Job requires being careful about detail and thorough in completing work tasks.
- Achievement/Effort — Job requires establishing and maintaining personally challenging achievement goals and exerting effort toward mastering tasks.
- Initiative — Job requires a willingness to take on responsibilities and challenges.
- Independence — Job requires developing one's own ways of doing things, guiding oneself with little or no supervision, and depending on oneself to get things done.
- Persistence — Job requires persistence in the face of obstacles.
- Cooperation — Job requires being pleasant with others on the job and displaying a good-natured, cooperative attitude.
- Adaptability/Flexibility — Job requires being open to change (positive or negative) and to considerable variety in the workplace.
- Dependability — Job requires being reliable, responsible, and dependable, and fulfilling obligations.
- Innovation — Job requires creativity and alternative thinking to develop new ideas for and answers to work-related problems.
- Leadership — Job requires a willingness to lead, take charge, and offer opinions and direction.
- Stress Tolerance — Job requires accepting criticism and dealing calmly and effectively with high stress situations.
- Self Control — Job requires maintaining composure, keeping emotions in check, controlling anger, and avoiding aggressive behavior, even in very difficult situations.
- Concern for Others — Job requires being sensitive to others' needs and feelings and being understanding and helpful on the job.
- Social Orientation — Job requires preferring to work with others rather than alone, and being personally connected with others on the job.
- Achievement — Occupations that satisfy this work value are results oriented and allow employees to use their strongest abilities, giving them a feeling of accomplishment. Corresponding needs are Ability Utilization and Achievement.
- Independence — Occupations that satisfy this work value allow employees to work on their own and make decisions. Corresponding needs are Creativity, Responsibility and Autonomy.
- Working Conditions — Occupations that satisfy this work value offer job security and good working conditions. Corresponding needs are Activity, Compensation, Independence, Security, Variety and Working Conditions.
Wages & Employment Trends
Median wage, employment, and industry data below are for Environmental Scientists and Specialists, Including Health.
- Median wages (2020) — $35.21 hourly, $73,230 annual
- Employment (2019) — 90,900 employees
- Projected growth (2019-2029) — Much faster than average (8% or higher)
- Projected job openings (2019-2029) — 8,900
- Top industries (2019)
Source: Bureau of Labor Statistics 2020 wage data and 2019-2029 employment projections. "Projected growth" represents the estimated change in total employment over the projections period (2019-2029). "Projected job openings" represent openings due to growth and replacement.
Sources of Additional Information
Disclaimer: Sources are listed to provide additional information on related jobs, specialties, and/or industries. Links to non-DOL Internet sites are provided for your convenience and do not constitute an endorsement.
- Academy of Clinical Laboratory Physicians and Scientists
- American Association for the Advancement of Science
- American Society of Civil Engineers
- Association of Environmental Engineering and Science Professors
- Ecological Society of America
- International Association of Impact Assessment
- International Input-Output Association
- International Society for Industrial Ecology
- Occupational Outlook Handbook: Environmental scientists and specialists
- Society of Environmental Toxicology and Chemistry | <urn:uuid:4fb0c351-60cb-4b77-952d-f9d0c3b4661f> | CC-MAIN-2021-21 | https://www.onetonline.org/link/summary/19-2041.03 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00574.warc.gz | en | 0.859018 | 4,643 | 2.78125 | 3 |
Paranoia refers to suspicious thoughts, worries or a strong feeling of threat. A paranoid person may feel threatened by a person or an organization and may hold false beliefs without having any proof for it. In extreme cases, paranoid thoughts can affect a person’s ability to function normally in daily life.
What Is Paranoia?
Paranoia is a thought process or an instinctive feeling that may lead to irrational beliefs or delusion. Influenced by stress, anxiety and fear, paranoid thinking can result in unreasonable suspicion and even mistrust of family, friends and peers. Paranoid individuals may hold persecutory beliefs and develop conspiracy theories about a perceived threat towards their life. They may have a strong sense of harm and believe that people are focused on observing or hurting them. A person with this condition may make false accusations about someone and may socially isolate themselves due to a general distrust of others. Paranoid thoughts are usually an indicator of a personality disorder or a mental health condition. It is also associated with psychosis, dementia and substance abuse. However, it is usually different from intense anxiety or phobias.
“Paranoia involves two key components: a person having unfounded ideas that harm will occur to them, and the idea that the harm is intended by others,” explains a 2017 study [1]. Paranoid thoughts tend to have three primary traits:
- Unfounded or exaggerated thoughts and beliefs
- Intense fear about something terrible that can happen
- A strong belief that the person is being singled out and targeted by others
MindJournal describes paranoia as “a mental state characterized by an individual’s correct observation from an incorrect or mistaken premise, leading to the development of a logically constructed, systematized and persistent series of persecutory delusions, like being maligned, poisoned or conspired against.” A paranoid state can lead to several complications if it occurs frequently or persists for a long time. Clinical paranoia is a more intense form where the person doesn’t believe that they are paranoid as they strongly believe their thoughts are right even if there is no proof.
Research shows that around 5-50% of the general population tend to have paranoid thoughts. According to a British survey, 21% of respondents believed that others were trying to hurt them, while another New York based study found that 11 percent of people thought they were being spied upon or followed. David Penn, a professor of psychology at the University of North Carolina says “People walk around with odd thoughts all the time. The question is if that translates into real behavior.”
History Of Paranoia
According to the International Network for the History of Neuropsychopharmacology (INHN) [2], the term paranoia is derived from the Greek word “para noeo”, with “para” meaning derangement or departure from the normal, and “noeo” meaning thinking. In Ancient Greece, the term was used to describe a crazy or insane person. The term was first documented in Greek plays and was used by prominent philosophers like Aristotle, Plato and Hippocrates.
Later in the 1800s, the term was used by Johann Christian Heinroth to describe a disorder of intellect with preserved emotions and volition. Paranoid thoughts were first mentioned in the DSM-III-R of the American Psychiatric Association to explain the overlapping diagnostic criteria of delusional (paranoid) disorder. Moreover, the ICD-10 of the World Health Organization mentioned the thought disorder in association with delusional disorder in 1992. A 2016 study explains, “Paranoia (from the Greek παρά and νοια) made for greater clarity in psychiatric terminology, and denoted a broad category, including both acute and chronic delusional states which were considered to be distinct from mania and melancholia, and usually not to lead to mental deterioration.”
Understanding Paranoid Thoughts
Paranoia is primarily a thought disorder characterized by fearful feelings and severe intense anxiety, suspicion, mistrust and is associated with thoughts of conspiracy, threat or persecution. One 2015 scientific analysis states that it “is a projection of the patient’s internal disturbance, which, as a consequence, is believed to be the result of the hostility of others, such as neighbours, secret agents or foreign powers,” etc. Paranoid thoughts may occur in various mental disorders, such as:
- Paranoid personality disorder
- Delusional (paranoid) disorder
- Paranoid schizophrenia
According to the Encyclopedia of Mental Health [3] (Second Edition, 2016), paranoid people often believe that certain people or groups are targeting them for intentional harm. Cognitive-perceptual biases of intentionality, referentiality and mistrust define this thought disorder. Individuals with this condition often respond with fear, hostility or guardedness to the perceived threats and vulnerabilities. It is believed that childhood abuse and heredity may play a crucial role in the formation of the condition. Some experts also believe that it may be a defensive mechanism; however, that is debatable.
Although most people tend to be a bit paranoid at times, severe forms of such thoughts can become delusions. Mental Health America (MHA) explains “Paranoia can become delusions, when irrational thoughts and beliefs become so fixed that nothing (including contrary evidence) can convince a person that what they think or feel is not true.” Delusions of persecution are closely associated with paranoid schizophrenia. The Encyclopedia of Stress [4] (Second Edition, 2007) states that contemporary psychological approaches claim that social cognition plays a crucial role. It also emphasizes that persecutory ideas are linked with mechanisms which may explain memorable experiences in an individual’s life that can make their sense of self-preservation stronger.
One 2006 study explains “Their perception of the world as a threatening place drives them to be highly alert to any evidence suggesting that they are being victimized. A constant search for proof of their victimization often leads them to misinterpret others’ comments and behaviors.” They often tend to be extremely confident about their ideas and knowledge, tend to be hypersensitive to criticism, and have trouble maintaining healthy personal and social relationships. The study adds “People who are paranoid are locked into a rigid and maladaptive pattern of thought, feeling, and behavior based on the conviction that others are out to get them.”
According to the National Health Service (NHS) [5], UK, paranoid thoughts can often be triggered by the following:
- Traumatic life experiences and adverse events, like a breakup or bullying
- Excessive stress caused by the external environment
- Alcohol and substance abuse
- Abuse, neglect, abandonment or mistreatment during childhood leading to general mistrust
- Poor health or medical conditions
- Sleep deprivation or persistent poor quality sleep
- Anxiety and depression
Is It Paranoia Or Justified Suspicion?
Suspicious thoughts don’t necessarily translate into paranoid thoughts. When there is external evidence to support a person’s suspicious beliefs and behavior, then it may be justified suspicion. This is often known as prudent paranoia and is “a form of constructive suspicion regarding the intentions and actions of people and organizations,” explains Harvard Business Review. Suspicious thoughts that are justified can indicate high emotional intelligence. “Emotional intelligence, after all, consists in large part of paying attention to what’s happening in the environment and responding to it,” adds Harvard researchers. However, if someone strongly holds on to their suspicion and mistrust even in the lack of evidence or presence of evidence proving the contrary, then it can be a paranoid thought. Justified suspicions can enable someone to be aware and cautious while paranoia can make them unnecessarily stressed, anxious and afraid of things that are not real. However, it can often be challenging to figure out whether your suspicions are justified or not, especially when your friends and family do not believe you.
Paranoid thoughts are primarily focused on what other people may think or do. As evidence can vary from personal experience to the words of a witness, different perspectives can be gained from the same evidence. This is why it is crucial to seek medical help when symptoms of paranoid thoughts become too severe. Ideas and thoughts tend to be paranoid when:
- Only the person shares these paranoid, suspicious thoughts
- Loved ones believe that their thoughts are not justified
- There is no solid evidence to back up their thoughts and ideas
- Evidence proving otherwise is available
- The person is unlikely to have enemies or be targeted by others
- They still strongly believe in their thought even after being reassured by others
- These thoughts are motivated by uncertain feelings & ambiguous events
Paranoia Vs Anxiety
A paranoid state is a form of anxious feeling or thought. Paranoid thoughts may lead to anxiety while anxiety may also cause such thoughts. Anxiety often determines what a person becomes paranoid about and the longevity of these thoughts and ideas. Most of us tend to feel anxiety almost on a daily basis as we overcome one challenge after the other. Feelings of anxiety can become severe when someone is experiencing stressful situations, like unemployment, financial problems, divorce or a breakup. Anxiety can often make a person believe that others are judging them for their failures or their lack of something. This can be a milder form of being paranoid which most of us experience at times. However, this is not a sign of a mental condition.
Clinical paranoia occurs when an individual is convinced of threat, persecution or conspiracy, even when the evidence points to the contrary. For most people, paranoid thoughts are nothing more than anxiety. However, if this anxious feeling is not caused by a specific event or experience and if such feelings persist for a long time, then it is best to consult a mental health expert as it may be a thought disorder. However, it should be noted that the symptoms of paranoia are often more intense than panic or anxiety. Moreover, these symptoms may last longer and affect the person’s ability to function in daily life.
Symptoms Of Paranoia
All of us tend to have paranoid thoughts sometimes in our lives. However, the condition can be severe in some individuals. Although the experience of paranoia may vary from person to person, there are some common symptoms associated with this pattern of distorted thinking, such as:
- Mistrust of others, even loved ones
- Feeling constantly stressed and anxious due to paranoid thoughts
- Feeling misunderstood, confused and disbelieved
- Feeling abused and persecuted
- Being doubtful or suspicious of others
- Feeling victimized even in the absence of a real threat
- Feeling detached or hostile
- Social withdrawal and isolation
- Strained social and personal relationships due to mistrust
The severity of symptoms may vary depending on the sufferer and may adversely affect their daily functioning and quality of life.
Read more to know about : Symptoms Of Paranoia
Types Of Paranoia
Paranoia can be a unique and different experience for each person. However, for a sufferer, some of the most common types of paranoia and paranoid thoughts may include thinking:
- They are being talked about behind their back
- Being watched, online or in real life, by people or organisations with malicious intent
- Others are attempting to tarnish their social reputation or exclude them
- They are at a serious risk of being physically harmed or assassinated
- They are being secretly threatened with double meanings words or hints
- Others are plotting against them or deliberately trying to make them feel bad and upset
- Others are attempting to con, steal or take their money and possessions
- Others constantly interfere with their thoughts and actions in order to manipulate them
- An organization or the government is attempting to control their mind
A paranoid person may experience such thoughts consistently or occasionally when triggered.
Read more to know about : Types Of Paranoia
Paranoid thoughts are associated with a number of psychological disorders, such as schizophrenia and delusional disorder. They can also be observed in people with different medical conditions which affect brain function, such as multiple sclerosis and Alzheimer’s disease. Paranoia is widely associated with the following mental health conditions as well:
- Anxiety disorders
- Parkinson’s disease
- Huntington’s disease
- Brain injury
- Severe trauma
- Excessive stress
Apart from these, alcohol and substance use disorder is also associated with paranoid thoughts. “Cannabis and amphetamine abuse often causes paranoid thoughts and may trigger an episode of psychosis. Other drugs such as alcohol, cocaine and ecstasy can also cause paranoia during intoxication or withdrawals,” explains Healthdirect Australia.
Causes Of Paranoia
The exact cause for paranoid behavior is not fully understood. However, it is believed that it may be caused by a combination of certain factors like mental conditions, personality disorder or drug abuse. “The cause of paranoia is a breakdown of various mental and emotional functions involving reasoning and assigned meanings. The reasons for these breakdowns are varied and uncertain,” adds Mental Health America (MHA). Certain symptoms are associated with denied, projected or repressed emotions. Paranoid thoughts can also be related to certain adverse events, experiences and relationships in the life of the person.
Here are some of the common factors that are believed to cause the onset of the thought disorder:
- Genetic factors
- Psychiatric disorders
- Substance abuse
- Chemical imbalances in the brain
- Stress & anxiety
- Abusive or traumatic experiences
- Insomnia and sleep deprivation
- Brain infection
- Head injury
- Cognitive biases
Moreover, there are also some common risk factors associated with the development of paranoia, including:
- Low self-esteem
- Memory loss
- Social repression
- Adverse childhood experiences
- Reduced brain circulation due to high blood pressure
- Impaired hearing
- Social isolation
Read more to know about : Causes Of Paranoia
Diagnosis Of Paranoia
Paranoia can often be difficult to diagnose, as the exact causes of its onset are unclear and it is common to a wide range of psychiatric disorders. Moreover, patients tend to have an extreme sense of distrust and hence are reluctant to seek treatment. The person may be suspicious of doctors and hospitals and may falsely believe that treatment is meant to harm them. However, a patient may seek treatment when symptoms become too severe to function normally in daily life and start affecting their mental and physical health adversely. To perform an effective diagnosis, a doctor may:
- Analyze medical and family history
- Conduct a medical or physical examination
- Conduct a personal interview to assess the severity of the symptoms
- Conduct psychological assessments and tests
The doctor may also conduct certain laboratory tests to determine if the symptoms are caused by other psychiatric disorders, medical conditions, medications or alcohol and drug abuse. If paranoid thoughts are caused by a psychiatric condition, then a doctor may refer the patient to a mental health professional like a psychiatrist or a psychologist for effective diagnosis and treatment.
Treatment Of Paranoia
As paranoia can often be diagnosed as a part of some other psychiatric condition, seeking medical help can be necessary and helpful. A mental health professional can suggest certain psychotherapies and medications to help a paranoid person relieve distress, focus on their reality and improve the quality of their lives. This is why patients must consult a doctor if they experience paranoid thoughts and symptoms for several days. However, as paranoid individuals tend to believe that everyone is against them and are highly suspicious of the world, they may rarely seek treatment. Loved ones can often recognize someone suffering with paranoid personality traits and may encourage them to visit a therapist.
People with paranoia are naturally afraid and cautious about interacting with individuals in authority, like doctors. But patients need to realize that doctors are there to help them get better. Although there is no specific treatment for this thought disorder, medical care can help the sufferer overcome the symptoms and live a more fruitful life. It should be noted that treatment primarily depends on the severity of the condition. A doctor will assess the patient to understand the cause of such paranoid thoughts and to identify any other underlying conditions.
Research [6] indicates that people suffering from paranoia and schizophrenia can benefit greatly when treatment and support are customized to address the specific issues of these mental health illnesses. Although there is no specific cure for paranoid thoughts, treatment can be directed at underlying causes and can help relieve the symptoms. Treatment can include the following strategies that aim to improve the patient’s mental health and quality of life:
1. Psychotherapy
Different forms of psychotherapy [7] can be recommended by mental health professionals to improve the patient’s ability to function. However, it is crucial that the therapist gains the trust of the sufferer first, as a paranoid person may distrust them. The following therapy techniques are usually used for treating this condition:
- Cognitive behavioral therapy (CBT) [8]
- Art therapy [9]
- Milieu therapy
- Psychodynamic psychotherapy
- Cognitive enhancement therapy
- Vocational training therapy
2. Medication
Doctors can prescribe antipsychotic drugs [10] and/or anti-anxiety medications for relieving symptoms in addition to therapy. Moreover, antidepressants may also be prescribed in some cases. However, it can be challenging to convince the patient to take medications, as they may mistakenly believe that the medication will harm them.
3. Hospital admission
In extreme cases, hospitalization may become necessary to stabilize the patient and manage symptoms, especially if the sufferer is prone to suicidal or homicidal thoughts.
Mental Health America (MHA) explains “It can be difficult to treat a person with paranoia since symptoms result in increased irritability, emotional guardedness, and possible hostility. Oftentimes, progress on paranoid delusions and especially delusional disorder is slow. Regardless of how slow the process, recovery and reconnection is possible.”
Read more to know about: Treatment of Paranoia
Coping Strategies Of Paranoia
Apart from seeking medical help and consulting a doctor or mental health professional, you can also practice certain self-help strategies that can increase the efficacy of the recovery process. Some of the most helpful coping techniques for paranoia may include the following:
- Stick to the treatment regime and closely follow the instructions of your doctor.
- Challenge your paranoid beliefs and delusions and question yourself about the justification of your thoughts.
- Share your thoughts and emotions openly with your loved ones and talk honestly with them.
- Make sure to stay away from caffeine, alcohol and recreational drugs.
- Follow a healthy diet, get enough sleep and exercise regularly.
- Learn relaxation techniques such as deep breathing, meditation and yoga.
- Build your relationships, reconnect with cherished loved ones and try to socialize with friends.
- Pursue your passions and hobbies and do things that you enjoy on a daily basis.
- Get adequate exposure to sunlight and spend more time in nature as it will help you to relax.
Read more to know about : Coping With Paranoia
How To Help Someone With Paranoia
If your friend, family member or a loved one is affected by this thought disorder, then here are few things you can do to help them feel better and to care for them:
- Listen to the paranoid person
- Don’t try to force your opinions or perspective on them
- Don’t judge them or tell them they are wrong
- Gently encourage them to consider other ways of looking at things
- Learn about paranoia and associated mental health conditions
- Encourage them to see a doctor
- Ensure they follow the treatment plan
Read more to know about : Helping Someone With Paranoia
Recovering And Living An Anxiety-Free Life
People with paranoia are often worried that others are targeting them and trying to hurt them in some way. This can significantly affect their mental, emotional and physical well-being. Moreover, it can also adversely affect their career, relationships and quality of life. However, with effective treatment, a paranoid person can learn to manage their symptoms and overcome the condition to live a more meaningful life without worry or anxiety.
Ongoing therapy and medications, coupled with support from loved ones and self-help strategies, can help you make great progress. If you or someone you know is paranoid, then make sure to seek professional help immediately.
References:
- Raihani, N. J., & Bell, V. (2017). Paranoia and the social representation of others: a large-scale game theory approach. Scientific reports, 7(1), 4544. https://doi.org/10.1038/s41598-017-04805-3
- Castagnini A. ‘Paranoia and its historical development (systematized delusion)’, by Eugenio Tanzi (1884). Hist Psychiatry. 2016 Jun;27(2):229-40. doi: 10.1177/0957154X16630501. PMID: 27145948.
- Paranoia. (n.d.). Avon and Wiltshire Mental Health Partnership NHS Trust – Avon and Wiltshire Mental Health Partnership NHS Trust. https://www.awp.nhs.uk/advice-support/conditions/paranoia/
- Pinkham, A. E., Harvey, P. D., & Penn, D. L. (2016). Paranoid individuals with schizophrenia show greater social cognitive bias and worse social functioning than non-paranoid individuals with schizophrenia. Schizophrenia Research: Cognition, 3, 33-38. https://doi.org/10.1016/j.scog.2015.11.002
- Ritzler BA. Paranoia–prognosis and treatment: a review. Schizophr Bull. 1981;7(4):710-28. doi: 10.1093/schbul/7.4.710. PMID: 7034193.
- Freeman, D., Dunn, G., Startup, H., Pugh, K., Cordwell, J., Mander, H., Černis, E., Wingham, G., Shirvell, K., & Kingdon, D. (2015). Effects of cognitive behaviour therapy for worry on persecutory delusions in patients with psychosis (WIT): a parallel, single-blind, randomised controlled trial with a mediation analysis. The lancet. Psychiatry, 2(4), 305–313. https://doi.org/10.1016/S2215-0366(15)00039-5
- Abbing, A., Baars, E. W., de Sonneville, L., Ponstein, A. S., & Swaab, H. (2019). The Effectiveness of Art Therapy for Anxiety in Adult Women: A Randomized Controlled Trial. Frontiers in psychology, 10, 1203. https://doi.org/10.3389/fpsyg.2019.01203
- Birkeland SF. Psychopharmacological treatment and course in paranoid personality disorder: a case series. Int Clin Psychopharmacol. 2013 Sep;28(5):283-5. doi: 10.1097/YIC.0b013e328363f676. PMID: 23820335. | <urn:uuid:665e0079-d966-4669-bf57-9487644324f7> | CC-MAIN-2021-21 | https://mindsjournal.com/topic/paranoia/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989766.27/warc/CC-MAIN-20210512162538-20210512192538-00256.warc.gz | en | 0.935055 | 4,722 | 3.640625 | 4 |
UGC NET JRF: Sample Question Papers and Memory-Based Papers with Solutions (2002)
Note: This paper contains fifty (50) multiple-choice questions, each question carrying two (2) marks.
Attempt all of them.
1.There are six villages A, B, C, D, E and F.
F is 1 km west of D
B is 1 km east of E
A is 2 km north of E
C is 1 km east of A
D is 1 km south of A
Which of these villages are in a line?
a) A, C and B b) A, D and E
c) C, Band F d) E, Band D
2. Consider the table given below. On the basis of this table, one could conclude that 'X' is

X Y Z
20 10 5
30 25 3
45 15 15/2

a) (Y + Z) b) Y/Z
c) (Y - Z) d) Y × Z
3.Four persons. A, B, C and D had fruits from an open-air fruit stall. 'A' took grapes and
pineapple; 'B' ate grapes and oranges; 'C' took orange, pineapple and apple; 'D' ate grapes,
apple and pineapple. After taking fruits, B and C were taken ill. The most likely cause of
illness of B and C is the consumption of
a) apple b) pineapple
c) grapes d) orange
4.The given histogram shows the frequency distribution of height (the number of students
in the given height range) of 30 students in a class. Which of the following statements
based on this histogram is/are correct?
[Histogram: x-axis shows height in cm, from 120 to 150 in 5 cm intervals; y-axis shows the number of students]
1. The height of most of the students is between 135 cm and 140 cm.
2. There are only two students whose heights are between 120 cm and 125 cm.
3. Fifty percent of the students have their heights between 130 cm and 140 cm.
Select the correct answer using the codes given below:
a) 1 and 2 b) 2 and 3
c) 1 and 3 d) 2 alone
5. Two one-rupee coins are placed flat on a table. One coin 'A' is rotated around the periphery of the other coin 'B' without slipping, till the original point of contact between the coins returns to its initial position. The number of rotations made by coin 'A' during this motion is
a) 2 b) 4 c) 3 d) 1
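A worked note on question 5 (the editor's reasoning, not an official key): a coin rolled without slipping around the outside of another coin turns once for the arc it rolls along and once more for the revolution itself, giving (R/r) + 1 rotations; for two identical coins that is 2, i.e. option (a). A quick check:

```python
def rotations(R, r):
    """Rotations of a coin of radius r rolled, without slipping,
    once around the outside of a fixed coin of radius R."""
    return R / r + 1

print(rotations(1.0, 1.0))  # identical coins -> 2.0
```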
6. Five persons, a professor (A), an IAS officer (B), an engineer (C), a politician (D) and a doctor (E), live in five flats. The flats are built in such a manner that one is on top of another, as one would see in a five-storey building. 'A' has to go up to meet his friend 'B'. 'E' is friendly with everyone and has to go up as frequently as he goes down. 'A's friend lives in the flat just above 'C's. From the ground floor upwards, the correct sequence of the location of the flats of these persons is
a) C, A, B, E, D b) A, C, E, B, D
c) B, C, A, E, D d) A, D, E, C, B
7. Consider the following statements regarding cars parked at a parking lot:
1. All the Maruti cars parked here are white.
2. Some of these cars have radial tyres.
3. All Maruti cars manufactured after 1986 have radial tyres.
4. All cars are not Marutis.
Which one of the following inferences can be drawn from the statements given above?
a) Only white Maruti cars with radial tyres are parked here.
b) Some white Maruti cars with radial tyres are parked here.
c) Cars other than Marutis do not have radial tyres.
d) Most of the Maruti cars parked here were manufactured before 1986.
8. The graph shown in the figure relates to sales figures in thousands of TV sets of a particular company for the period 1990-97. On the basis of this graph, which of the following inferences would be valid?
1. TV sales increased constantly from '90 to '93.
2. Sales did not improve in `93-'95.
3. There was a sharp drop in sales in `95-96
4. Sales are not likely to improve from '97 onwards.
Select the correct answer using the codes given below:
a) 1, 2, 3 and 4 b) 2 and 4 c) 1, 3 and 4 d) 1, 2 and 3
9. The monthly income of a family is Rs. 3000. 20% of it is spent on children's education. Out of the balance, 15% is spent on house rent and, from what is left, 50% is spent on provisions. Then which of the following statements would be true?
1. The amount spent on children's education is Rs. 600.
2. The amount spent on house rent is Rs. 450.
3. The amount spent on provisions is Rs. 1020
4. The family has Rs. 1020 per month for other expenses.
Select the correct answer using the codes given below:
a) 1, 2, 3 and 4 b) 1,3 and 4
c) 2 and 4 d) 1 and 3
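A quick check of the arithmetic in question 9 (the editor's working, not an official answer key):

```python
income = 3000
education = 0.20 * income                            # Rs. 600 -> statement 1 holds
rent = 0.15 * (income - education)                   # Rs. 360 -> statement 2 (Rs. 450) fails
provisions = 0.50 * (income - education - rent)      # Rs. 1020 -> statement 3 holds
remainder = income - education - rent - provisions   # Rs. 1020 -> statement 4 holds
print(education, rent, provisions, remainder)        # 600.0 360.0 1020.0 1020.0
```

Under a literal reading, statements 1, 3 and 4 hold, which points to option (b).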
10.Who is legally competent under the Indian Constitution to declare war or conclude peace?
a) The President
b) The Prime Minister
c) The Council of Ministers
d) The Parliament
11. The Road Ahead' is a book written by
a) Jyoti Basu b) L. K. Advani c) Bill Clinton d) Bill Gates
Directions for questions 12 to 19: The following questions are based on data (not reproduced here) on tea production, exports and population.
12. Which year shows the maximum percentage of export with respect to production?
a) 1992 b) 1993 c) 1996 d) 1995
13. The population of India in 1993 was
a) 800 million b) 1080 million c) 985 million d) 900 million
14. If the area under tea production was 10% less in 1994 than in 1993, then the approximate rate of increase in productivity of tea in 1994 was
a) 97.22 b) 3 c) 35 d) Cannot be determined
15. The average proportion of tea exported to the tea produced over the period is
a) 0.87 b) 0.47 c) 0.48 d) 0.66
16. What is the first half decade's average per capita availability of tea?
a) 457 gms b) 535 gms c) 446 gms d) 430 gms
17.In which year was the per capita availability of tea minimum?
a) 1996 b) 1994 c) 1991 d) None of these
18. In which year was there the minimum percentage of export with respect to production?
a) 1991 b) 1992 c) 1993 d) 1994
19. In which year did we have the maximum quantity of tea for domestic consumption?
a) 1994 b) 1991 c) 1993 d) 1996
Directions for questions 20 to 22: The following questions are based on a graph (not reproduced here) of All India Monsoon Rainfall (1990 to 1999), June to September.
20. The normal rainfall during the period 1990-1999 was experienced in the year(s)
a) 1994 b) 1993 & 1995 c) 1996-97 d) 1990
21. The year ..... witnessed the least rainfall.
a) 1991 b) 1999 c) 1992 d) 1993
22. Out of the 10 years studied, how many had above normal rainfall?
a) 3 b) 7 c) 5 d) 6
Directions for questions 23 to 27: Each of the following incomplete arguments is
followed by four sentences. One of the four completes the argument in order to justify
the conclusion. Pick that out.
23. India cannot make a rapid progress because India has a problem of population explosion.
a) No country with population explosion can make a rapid progress.
b) Only a country without population explosion can make a rapid progress.
c) Some countries with population problem cannot make a rapid progress.
d) All countries which have a problem of population explosion can make a rapid progress.
24. Man learns through experience as he has initiative by nature.
a) Some persons who take initiative by nature learn through experience.
b) All who have initiative by nature learn through experience.
c) None who has initiative by nature learns through experience.
d) Only few with initiative learn through experience.
25. We have now to fight for peace with the same courage and determination as we fought against aggression.
a) Many are fighting for peace who have fought against aggression.
b) All those who have fought against aggression should fight for peace
c) Some who are fighting for peace have fought against aggression.
d) None is fighting for peace who have fought for aggression.
26. Whom the gods love dies young.
a) Many die young who are gods.
b) Few die young who are gods.
c) Some who are loved by the gods die young.
d) All those who love the gods die young.
27. Education has produced a vast population able to read but unable to distinguish what is worth reading.
28. If the ratio of boys to girls in a class is B and the ratio of girls to boys is G, then 3 (B + G) is
a) equal to 3 b) less than 3 c) more than 3 d) less than 1/3
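For question 28 (the editor's working): B and G are reciprocals, so BG = 1 and, by the AM-GM inequality, B + G >= 2; hence 3(B + G) >= 6, which is always more than 3. A numerical spot check:

```python
from fractions import Fraction

for boys, girls in [(10, 20), (15, 15), (7, 3)]:
    B = Fraction(boys, girls)   # ratio of boys to girls
    G = Fraction(girls, boys)   # ratio of girls to boys
    assert B * G == 1
    print(boys, girls, float(3 * (B + G)))  # 7.5, 6.0, 8.28... -> always > 3
```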
29. Tea worth Rs. 126 per kg and Rs. 135 per kg are mixed with a third variety in the ratio 1 : 1 : 2. If the mixture is worth Rs. 153 per kg the price of the third variety per kg will be
a) Rs. 169.50 b) Rs. 170 c) Rs. 175.50 d) Rs. 180
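Question 29 reduces to a single equation: 1 kg at Rs. 126, 1 kg at Rs. 135 and 2 kg at Rs. x must average Rs. 153 over 4 kg (the editor's working, not an official key):

```python
x = (153 * 4 - 126 - 135) / 2  # solve (126 + 135 + 2x) / 4 = 153
print(x)                       # 175.5 -> option (c)
```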
30. The average of 11 numbers is 10.9. If the average of the first six numbers is 10.5 and that of the last six numbers is 11.4, then the middle (6th) number is
a) 11.5 b) 11.4 c) 11.3 d) 11.0
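In question 30 the middle (6th) number is counted in both six-number averages, so it equals the two partial sums minus the overall total (the editor's working):

```python
middle = 6 * 10.5 + 6 * 11.4 - 11 * 10.9  # 63 + 68.4 - 119.9
print(round(middle, 1))                   # 11.5 -> option (a)
```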
31. There are 30 students in a class. The average age of the first 10 students is 12.5 years. The average age of the next 20 students is 13.1 years. The average age of the whole class is
a) 12.5 years b) 12.7 years c) 12.8 years d) 12.9 years
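Question 31 is a weighted average of the two group averages (the editor's working):

```python
avg = (10 * 12.5 + 20 * 13.1) / 30  # weight each group by its size
print(round(avg, 1))                # 12.9 years -> option (d)
```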
32. The perimeter of one face of a cube is 20 cm. Its volume must be
a) 8000 cm³ b) 100 cm³ c) 125 cm³ d) 400 cm³
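For question 32, a face perimeter of 20 cm gives a side of 5 cm, and the volume is the cube of the side (the editor's working):

```python
side = 20 / 4     # perimeter of a square face = 4 * side
print(side ** 3)  # 125.0 cm^3 -> option (c)
```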
33. The number of revolutions made by a wheel of diameter 56 cm in covering a distance of 1.1 km is (use π = 22/7)
a) 31.25 b) 56.25 c) 625 d) 62.5
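For question 33, one revolution covers the circumference, pi times diameter, i.e. (22/7) × 56 = 176 cm (the editor's working):

```python
circumference = (22 / 7) * 56         # in cm, using pi = 22/7
print(1.1 * 100_000 / circumference)  # 1.1 km = 110000 cm -> 625.0, option (c)
```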
Directions: Read the passages below and answer the questions based on them:
The world of computer enthusiasts is in the grip of an ethical crisis. Should computer viruses be classified as a life form? Will consensus-building agencies take up the case for virus rights, protest the death penalty, demand that their clients be set apart in enclaves? None of this is beyond the bounds of probability, considering the pitch of the debates that rage on the Internet, the global computer network set up 30 years ago by the U.S. defence research establishment. A new society is coming to birth in virtual reality; one is easily seduced into forgetting that these bizarre events are taking place inside a Xerox Corporation computer at Palo Alto, California. If the science fiction of the '30s gave the world the concept of the Cyborg, a creature half human and half computer, the Internet today seems poised on the verge of the Cyborg. If a recent case is any indication, the simple etiquette which has so far governed social behaviour among Internet users will no longer suffice to administer this electronic Wild West.
That solecisms in the world's latest frontier of society have attained a real-world level of scandal is obvious from the manner in which, earlier this year, an electronic intruder broke into a conversation among female users and aimed obscene visuals at them. This raised a storm of outrage. Internet users first bombarded his electronic mail box with rebukes and then had him expelled. The issue leads into uncharted philosophical territory: in virtual space, can one determine where the body ends and the mind begins? At what point do word and image translate as act? Human society seems to possess a reverse Midas touch, contaminating every system it comes into contact with. The day is not distant when all the vicious impulses of the real world will have colonised virtuality, and another Utopia will have gone down the chute.
34. The central idea being followed in the passage is:
(a) the danger posed by viruses to Internet users
(b) the status of sanctity of computer information routes
(c) the degrading moral standards of our society
(d) the role of morality in the formation of computer information highways
35. The term chute' in the passage specifically refers to:
(a) the concept of the Cyborg
(b) the science fiction of the `30s.
(c) a creature half human and half computer of the science fiction of the 30s.
(d) none of the above.
36. The term "solecisms" is used to highlight:
(a) the basic codes of ethical conduct
(b) breach of protocol
(c) the role of virus affected information
(d) none of the above
Passage - 2
The difference between different kinds of writing lies not so much in the writing itself, but in the way we look at it (and, of course, in the way the author wished us to look at it; but we often know very little about that). Literary forms do not exist outside our own minds. When we read anything, no matter what - a description of a scientific experiment, a history book, a ballad, or a novel - in so far as we pay attention only to what things are happening one after another to something or somebody, it is a story; in so far as we read it only to learn the way in which something or someone behaves in certain circumstances, it is science; in so far as we read it only to find out what has actually happened in the past, it is history. People often ask what is the difference between poetry and prose. The only difference is in the way the writer looks at things. For instance, the novelist starts with a general idea in his mind; say, that people are always trying to escape from their responsibilities, and that escape only leaves them in a worse mess.
Then he writes a story about what happened to Mr. and Mrs. Smith. He may never say, in so many words, that they tried to escape, never mention his idea, but this idea is the force that drives the story along. The poet, on the other hand, hears people talking in his club about the sad story of Mr. and Mrs. Smith. He thinks, 'There now, that's very interesting. They are just like everybody else; trying to get around life. It's like those sailors who tried to get to India by the Northwest Passage. On they go, getting farther and farther into the ice, miles from home. Why, that's a good idea for a poem.' He writes a poem about explorers; he may never mention Mr. and Mrs. Smith at all. The novelist then goes from the general to the particular, the poet from the particular to the general, and you can see this also in the way they use words. The novelist uses words with their general meaning, and uses a whole lot of them to build up a particular effect: his character. The poet uses words with their particular meanings and puts them together to give a general effect: his ideas. Actually, of course, nearly all novels and all poems except very short ones have both ways of looking at things in them (e.g. Chaucer's Canterbury Tales is more like a novel in verse; Melville's Moby Dick is more like a poem in prose). All you can say is that one way is typical of the novelist and the other of the poet.
37. An appropriate title for this passage would be:
(a) Of Poets and Novelists (b) Of Poetry (c) Of Novels (d) Of Literature
38.According to the author,
a) Each person reads a particular piece of writing with the same motive.
b) Every person has a different motive in reading a particular piece of writing.
c) Some pieces of writing are not read by people at all.
d) None of the above.
39. One piece of writing can be distinguished from the other by.
i) the difference in the author's style of writing.
ii) the difference in the readers' view toward the writings.
iii) the way the meaning has been used.
(a) I only (b) II only
(c) III only (d) I & II
40. The essential difference in the approaches of a novelist and a poet is that,
(a) The novelist moves from particular to general.
(b) The poet moves from general to particular.
(c) The poet moves from particular to general.
(d) There is no difference; both are one and the same.
41. The novelist builds up,
(a) characters (b) ideas (c) Both (a) and (b) (d) Neither (a) nor (b)
42. The poet builds up
(a) characters (b) ideas (c) Both (a) and (b) (d) Neither (a) nor (b)
Directions for Q. 43 to 47: Choose the pair of words which best expresses the relationship similar to that expressed in the capitalised pair.
43. ADJACENT : OBJECTS
(a) modern : times
(b) gradual : degrees
(c) contemporary : events
(d) repetitive : steps
44. FACILITATE: HAMPER
(a) animate : feed
(b) conventional : naive
(c) urbane : remote
(d) birth : demise
45. DENOUNCE : CONDONE
(a) endure : imagine
(b) antithetical : supportive
(c) unnatural : noncommittal
(d) natural : committal
46. SALUBRIOUS: BANEFUL
(a) contemplate : intimidate
(b) alleviate : exacerbate
(c) probity : fallacy
(d) susceptible : desultory
47. LANDSLIDE : PEBBLE
(a) deluge : droplet
(b) beach : wave
(c) desert : oasis
(d) rain : puddle
Directions for Q. 48 to 50: Choose the ORDERED pair of statements, where the first
statement implies the second, and the two are logically consistent with the main statement.
48. If our ancestors were monkeys, we would be anthropoids today.
A. We are not anthropoids
B. Our ancestors were monkeys
C. We are anthropoids
D. Our ancestors were not monkeys
(a) DA (b) CB (c) AB (d) AD
49. Task A, if ever accomplished, can transform our lives.
A. Our lives have been transformed
B. Our lives have not been transformed
C. Task A has not been accomplished
D. Task A has been accomplished
(a) CB (b) BC (c) AC (d) AD
50. Press either of the buttons X and Y and the drink will come out.
A The drink has come out
B. Either X or Y has been pressed
C. The drink has not come out
D. Button Y has been pressed
(a) AB (b) AD (c) DA (d) DC | <urn:uuid:8e35455c-a0b0-4865-8447-16b5874bcacb> | CC-MAIN-2021-21 | https://www.currentgk.co.in/view_topic/201011/5300/ugc-net-jrf-sample-questions-papers-and.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988696.23/warc/CC-MAIN-20210505203909-20210505233909-00057.warc.gz | en | 0.922944 | 4,256 | 3.125 | 3 |
Baltimore Pike (then Nottingham Road) is the only thing that separates the Brandywine Battlefield Park from encroaching suburban sprawl. More than 300 years ago, the park's 52 acres of verdant pastures, including the Benjamin Ring House and the Gideon Gilpin House, and a section of the Brandywine River (known as a creek to locals) were witnesses to quite possibly the most influential battle of the American Revolution. The setting of the battle was then in the midst of Chester County; it now lies in Delaware County since the 1789 creation of that county. Situated at the heart of the Quaker community, the pristine countryside (ideal for raising grain and livestock) once lay in peace. On September 11, 1777, its landscape of undulating hills and wandering creeks allowed for defensive maneuvers. Thick trees shaded the movements of light infantry in open formation, along with the dense fog of the early morning on that day.
Two days earlier, the Continental Congress was notified of the intentions of General Sir William Howe, leader of the British Crown Forces, by a letter from General George Washington, commander of the Continental Army. It stated:
The enemy advanced yesterday with a seeming intention of attacking us upon our post near Newport. We waited for them the whole day; but they halted in the evening at a place called Milltown, about two miles from us. Upon reconnoitering their situation, it appeared probable that they only meant to amuse us in front, while their real intent was to march by our right, and, by suddenly passing the Brandywine and gaining the heights upon the north side of that river, get between us and Philadelphia, and cut us off from that city. To prevent this, it was judged expedient to change our position immediately. The army accordingly marched at two o’ clock this morning, and will take post this evening upon the high grounds near Chads’s Ford.
The Continental Army had received devastating blows during the failure of the previous year’s New York campaign. It was an abject group of about 13,000 soldiers, amassed only after incessant pleas from General George Washington to the states and Congress. The group was also comprised of militiamen, or untrained citizen-soldiers. They lacked American reinforcements because of disenchantment with the length of the war, news of defeats, horror stories of imprisonments, the prevalence of camp disease, as well as discontent with pay and the unfair promotions of a few. Lack of money, however, was the biggest problem. By this point in the war, infantry regiments were often given blue uniform coats, but the militia were given nothing but the permission to fight. They were not only poor in appearance, they were poor in size. Their meager appearance was juxtaposed with the noble dress of soldiers from other countries, such as French officers and other European soldiers seeking adventure in the Americas, who had aligned themselves with the cause of American independence. After hearing of Washington’s gallant victories at Trenton and Princeton, the Marquis de Lafayette, a French nobleman, insisted on volunteering for the American Army at his own expense and was appointed major-general by Congress; he would get his first taste of war with the American Continental Army. With the help of Lafayette and others, they were charged with the protection of Philadelphia at all costs, having lost New York the year before. Capturing the colonial capital would mean sudden victory for the British.
The Crown Forces were constituted of British Army regiments and troops of Hessians (German mercenaries), as well as American Loyalists (citizens loyal to the dominion of the British Crown). This armed force of 18,000 soldiers was professionally trained by their superiors, many of whom were European nobility. They had come from the British fleet of 267 ships, under British Commander-in-Chief General Sir William Howe. Having set sail a few months earlier along the Delaware Bay, much of their food had been spoiled. They had survived in a foreign country by raping the towns they encountered of their food and supplies. Cattle and other livestock were ransacked for food; horses for war. Despite having to live like petty criminals, they dressed in red coats and bearskin caps, leather caps, or tricorner caps, whether they were grenadiers, light infantry, or battalion company men. The 16th and 17th regiments of light dragoons wore red coats and leather-lined helmets. The German infantry retained the Prussian style grenadier mitre (headdress) with a brass front plate and wore blue coats. They were well-dressed and well-trained.
While traveling along the Delaware River, Howe received word that their path was blocked by a small American naval force’s fortifications, so he changed their course to the Chesapeake Bay with a destination of Elk Ferry, some 30 miles southwest of Philadelphia. At the same time, Washington and his men, who had been in the Watchung Mountains near Morristown, New Jersey since their victory at the Battle of Princeton, marched south to Wilmington, Delaware. They arrived on the August 25, the same day Howe landed at Elk Ferry. Landing without opposition, Howe used this time to let his tired, seasick men recover from their journey while Loyalists replenished their supplies. On September 3, the army started marching toward Philadelphia, one division commanded by Lt. Gen. Wilhelm Knyphausen and the other by Major General Charles Cornwallis. For the next five days, Washington positioned his men traveling along the White Clay Creek, west of Newport and Wilmington. He expected Howe to lead his men toward Wilmington, but Howe refused to fall into the trap of Washington’s advantageous ground. He made Washington change his defensive ground by making a feint north towards Pennsylvania, moving them to the Brandywine River at Chads’ Ford (now Chadds Ford). This game of cat and mouse was futile because Philadelphia was known to be Howe’s ultimate goal. The outcome of this battle would allow one side to properly defend the capital or the other to take it, and in doing so the fate of the Revolution would be determined. The battle itself garnered the largest movement of troops during the revolution, a reported 26,000 men.
Washington stationed his troops along the main fords on the eastern side of Brandywine Creek on the morning of September 9, in order to force a fight at Chads’ Ford between Chester County and Philadelphia. Conflicting reports came to Washington, claiming that Howe’s troops would make a frontal attack at Chads’ Ford. Therefore, General Washington stationed the center of his troops at the 150 foot-wide ford on the north side of the Brandywine. The ford, a crossing shallow yet sturdy enough to allow horses and wagons to travel over, was blocked to stop the British from traveling along the road north to Philadelphia. On the eve of the great battle,Washington continued to receive conflicting information about the whereabouts of the British Army. He and Lafayette spent the night in Benjamin Ring’s house and Gideon Gilpin’s house, respectively, log cabins on the shore of the Brandywine River, remnants of Swedish and Finnish settlers from the 1630s, which remain part of the park.
The dawn of September 11, 1777 saw the British forces begin the six-mile march from Kennett Square northeast to Chads’ Ford at 5am. Howe’s strategy, which he first used at Bunker (or Breed’s) Hill in Boston in 1775 and at the Battle of Long Island in 1776 with a disastrous outcome for the Americans, was to give the impression the attack was a frontal assault to draw the American flank out and then send troops around the flank to cause them to retreat, shift position, or be surrounded. Typical early-morning weather of fog and haze shrouded the area. The first to leave was a 496-member vanguard which consisted of Queen’s Rangers, Ferguson’s riflemen, and a squad from the 16th Light Dragoons, followed by the 1st and 2nd British Brigades, the artillery, supply wagons, and a herd of rustled livestock. Serving as the rear guard was the 71st regiment. The battle also marked the debut of a military innovation, the Ferguson breech-loading (loaded from the back) rifle on the battlefield. Used by Ferguson’s Corps of English Riflemen, “It was a short, carbine-style firearm that could be loaded and fired up to six times to per minute –twice the rate of a smoothbore musket—and it was rifled, giving it great accuracy at two hundred yards,” according to noted Brandywine historian Thomas J. McGuire. Smoothbore military muskets, or firelocks, were used by both sides. Eighteen inches of steel, triangular at the base tapering into a sharp point, struck fear into enemies faster than the bayonet itself could enter their bodies.
Lt. Gen. Wilhelm von Knyphausen, a Hessian officer, led a division 7,000 troops with the purpose of distracting Washington on the south side by marching east on Nottingham Road (now Baltimore Pike). The fog covered some of the British troops’ movements, but they were eventually seen by American scouts. While this occurred, Major General Charles Cornwallis, commander of the light infantry, led the division of 8,000 men in a circuitous route along the western side of the Brandywine. They had crossed the river by mid-afternoon at the unguarded Trimble’s Ford on the western branch. Then, after crossing Jeffries’ Ford on the eastern bank, they gained a strategic position to the north of Washington’s troops and settled near the Birmingham Friends Meeting House. Strategically, Knyphausen was ordered to simply occupy the area overlooking the Brandywine and not to attack unless provoked, so that he could be alerted by the gunfire from Cornwallis’ troops behind the American army to attack. Washington had assumed the attack would be on the Brandywine at Chads’ Ford, and Howe took full advantage of that assumption.
Although Washington continued to believe that the main force was moving to attack at Chads’ Ford, his hesitance to act on any of the reports, especially one of the local’s reports, would prove costly. Washington stationed his troops along the eastern side of the Brandywine near Chads’ Ford. Chads’ Ferry, 500 yards south of Chads’ Ford, was guarded by Gen. Nathanael Greene’s division of 1800 Virginia Continentals. He was positioned to provide support for the center of Washington’s troops. To Greene’s right, Gen. Anthony Wayne’s Division of Pennsylvanian Continentals protected the ford itself with 1600 strong. About a mile north of Chads’ Ford, Generals John Sullivan, Adam Stephen, and William Alexander (a.k.a. Lord Stirling) and their men were stationed by the Birmingham Meeting House. Unfortunately, local Loyalists informed Howe of Washington’s position, as well as his undefended fords.
When the local Quakers realized the British army’s surprise attack, a twenty year old noted, “...In a few minutes the fields were literally covered with them...Their arms and bayonets being raised shone as bright as silver, there being a clear sky and the day exceedingly warm.” It was 4pm when the British attacked. Howe was slow to attack the American troops, which gave them time to position some of their men to receive the British threat to their right flank. Stephen’s and Stirling’s divisions received the brunt of the assault, and both lost ground fast. Sullivan attacked a group of Hessian troops trying to outflank Stirling’s men near Meeting House Hill and bought some time for most of Stirling’s men to withdraw, but British fire forced Sullivan’s men to retreat.
By 4:30pm, Knyphausen’s artillery opened fire. His men, on the west bank of the Brandywine, attacked General William Maxwell’s and General Wayne’s American divisions as they crossed Chads’ Ford. Soon, they had reached the Chester County side of the Brandywine. Maxwell and Wayne’s divisions were forced to retreat and leave behind most of their cannon. General John Armstrong’s militia, never engaged in the fighting, also retreated from its positions. Further north, Greene sent Colonel Weedon’s troops to cover the road just outside the town of Dilworth (now Dilworthtown) to hold off the British long enough for the rest of the Continental Army to retreat. At this point, Washington and Greene arrived with reinforcements, having traveled the mile north, to try to hold off the British, who now occupied Meeting House Hill. The remnants of Sullivan’s, Stephen’s, and Stirling’s divisions, fewer than 3,000 American troops, stopped the pursuing British for nearly an hour. Five times the British drove Americans from the hill; five times it was retaken. Finally, only when Cornwallis invoked the full power of his entire artillery did he successfully cause them to retreat east toward Dilworth.
An hour and forty-five minutes after the battle had begun, it had ended. British Captain John André, the infamous British spy later tried and hanged for his treason with Benedict Arnold, noted in his journal, “Night and the fatigue the soldiers had undergone prevented any pursuit.” If they had pursued the American army, there would not have been a battle at Germantown. With the British no longer attacking, Weedon’s force was left to retreat. The defeated Americans marched to Chester, where most of them arrived at midnight with some stragglers arriving until morning, north to the Falls of the Schuylkill, a day’s march from Germantown. At midnight, George Washington wrote from Chester to the Continental Congress, “Sir: I am sorry to inform you, that in this day’s engagement, we have been obliged to leave the enemy masters of the field.”
The Battle of Brandywine was the only time arguably the greatest British general and the greatest American general met in combat. Both men made egregious errors; Washington incompetently left his right flank wide open, which could have caused countless unnecessary American losses had it not been for the cavalry of Sullivan, Sterling, and Stephen’s divisions, and Howe failed to attack the American right flank quickly and destroy the American army when he was given the chance. Howe’s failure to attack the American right flank quickly showed his indecision as a leader, which allowed most of the American army to escape.
Official casualty figures vary. Both sides and their counts were biased. Historian Thomas J. McGuire states that, “American estimates of British losses run as high as 2,000, based on distant observation and sketchy, unreliable reports.” 587 British casualties have been listed: 93 killed, 488 wounded, and 6 missing or unaccounted for. Hessians only suffered a loss of 40. Most accounts of American losses were only from the British side because casualty return for American losses did not survive or were never released, official or otherwise. One initial report by a British officer recorded American casualties at over 200 killed, around 750 wounded, and 400 unwounded prisoners taken. General Howe’s report to Lord Germain, the British Secretary of War, stated that Americans “had about 300 men killed, 600 wounded, and near 400 made prisoners.” One of the very few figures of casualties from the American side was reported from Major-General Nathanael Greene, whose count was between 1,200 and 1,300 men. Taking Greene’s numbers into account, the estimate of the total American loss was between 1,160 and 1,260 Americans killed or wounded in the battle. Regardless of the number of casualties, this battle left an everlasting impression on American history.
The Continental Army did not see their loss as a discouragement. Instead, they used the victory of the British at the Battle of Brandywine and the ensuing Battle of Paoli to their advantage. The unsuspecting British never thought that the Continental Army would regroup and attack the British encampment at Germantown, which they did. The attack was officially a loss for Washington and his men, but it was a moral victory, reaffirming their hope, and engendering the support of the Comte de Vergennes, the French Foreign Minister, and his government to work with them against the British. “[T]he genius and audacity shown by Washington, in thus planning and so nearly accomplishing the ruin of the British army only three weeks after the defeat at the Brandywine, produced a profound impression upon military critics in Europe,” wrote John Friske in 1891 in his book The American Revolution. For the next six months, Washington and his troops camped and trained in the hills Valley Forge throughout the dead of winter. Despite causalities of nature, the Continental Army once again attacked the British and regained control of Philadelphia.
In later days, Howard Pyle (1853-1911), “The Father of American Illustration,” inspired by the Continental Army’s heroism against insurmountable odds, preserved their bedraggled bravery in a painting, “The Nation Makers,” in 1903. It was the culmination of painting with his students for five summers at Chadds Ford. Unknowingly, he established the so-called “Brandywine School” by providing inspiration for some of America’s most outstanding illustration art, which can be seen through the works of his proteges, Frank Schoonover, Stanley Arthurs, N.C. Wyeth, W.J. Aylward, Thorton Oakley, Violet Oakley, Clifford Ashley, and Harvey Dunn.
More recently, others have tried to preserve Brandywine River’s surrounding “storied vales and hills,” the 52 acres around Chadds Ford, which became a Pennsylvania State Park in 1949 and was recognized as a National Historic Landmark on January 20, 1961 on the National Register of Historic Places as Brandywine Battlefield Park. Since then, the park has been protected from the advancements of technology, sustained by the staff’s vigor for reenactments. Brandywine Battlefield Associates portray everything from Quaker farmers to British and Continental soldiers during special events the park hosts throughout the the year. They also guide daily tours of the park and battlefield. Their mission is “to present educational programs, exhibits, tours, events and publications that broaden public understanding and appreciation of the significance of the Battle of Brandywine within the larger social, political, economic, technological and military context of the American Revolution period.” On August 14, 2009, the park closed its gates, along with three other PHMC (Pennsylvania Historical & Museum Commission) museums, because of insufficient funding in the state’s ongoing budget crisis. The Historic Site reopened on August 25th, 2009, thanks to an interim agreement between PHMC and Chadds Ford Township with the Brandywine Battlefield Associates. The park is operated with limited staff and volunteers, who believe it is their duty to “take lead responsibility for [the] interpretation of, and participate in efforts to preserve, the entire Battlefield National Historic Landmark, of which Brandywine Battlefield Park is only a small part, recognizing the significance of these ten square miles where approximately 26,000 men fought.”
For this reason - among many others - Thomas Buchanan Read (1822-1872) romanticized the Brandywine River as a “winding silver line through Chester’s storied vales and hills,” in his 1845 novel, Paul Redding: A Tale of the Brandywine. The history of Chadds Ford is just a fraction of the secrets hiding behind a shade of a blade of grass or between a rock and the reddish-clay dirt. Though much of the scenery in the Brandywine Valley has changed since the time of the Battle and even from the time of Read’s writing, visitors to the town of Chadds Ford can still hear the echoes of Washington and his beleaguered army. Even more importantly, they can see the re-enactments of the tales of the “storied vales” that shaped the American Revolution.
For schedules of events and other information on visiting Brandywine Battlefield Park, please visit http://www.thebrandywine.com/attractions/battle.html.
- “A Summer Idyll: Landscapes from the Brandywine Valley.” Brandywine River Museum. 2002. <http://www.tfaoi.com/aa/3aa/3aa320.htm>.
- Beyond Philadelphia: The American Revolution in the Pennsylvania Hinterland. Eds. John B. Frantz and William Pencak. University Park: The Pennsylvania State UP, 1998.
- Boatner, Mark Mayo. Cassell’s Biographical Dictionary of the American War of Independence, 1763-1783. London: Cassell, 1966. page 109.McGuire, Thomas J. Brandywine Battlefield: “Brandywine Battlefield Mission Statement.” Brandywine Battlefield Associates.
- Fiske, John. The American Revolution. Boston: Houghton Mifflin Company, 1891. page 332.
- “George Washington to Continental Congress, September 11, 1777.” American Memory. Ed. John C. Fitzpatrick.
- “History.” Birmingham Monthly Meeting of the Religious Society of Friends. <http://www.birminghamfriends.org/history.html>.
- “History of the Battle of Bradywine.” Brandywine Battlefield Historic Park. 1995. 4 July 1995. <http://www.ushistory.org/Brandywine/thestory.htm>.
- Jackson, John W. With the British Army in Philadelphia 1777-1778. San Rafael, CA: Presidio Press, 1979.
- MacElree, Wilmer W. “The Battle at The Ford.” Along the Western Brandywine. 2nd Ed. West Chester, PA : F.S. Hickman (printer), 1912.
- McGuire, Thomas J. The Philadelphia Campaign: Volume 1: Brandywine and the Fall of Philadelphia. Mechanicsburg, PA: Stackpole Books, 2006. page 269.
- Patterson, Richard. “What Was A Hessian?” Washington Crossing Historic Park. 11 Dec. 2008. <http://www.ushistory.org/WashingtonCrossing/history/hessian.htm>.
- Pennsylvania Trail of History Guide. Mechanicsburg, PA: Stackpole Books, 2001.
- Read, Thomas Buchanan. Paul Redding: A Tale of the Brandywine. Boston : A. Tompkins & B. B. Mussey : Redding & Co., 1845.
- Rujumba, Karamagi. “Fort Pitt Museum, Bushy Run close due to state budget crisis”. Pittsburgh Post-Gazette.14 Aug. 2009. <http://www.post-gazette.com/pg/09226/990898-100.stm>.
- Smith, Karen A. “Battle of Brandywine: A Brief Summary.” History Online. 9 Dec. 2008.
- Smith, Samuel S. “The Battle of Brandywine.” The American Revolution. 1976.
- “The Battle of Brandywine: September 11, 1777 at Brandywine, Pennsylvania.” The American Revolutionary War. 2009. <http://www.revolutionarywar.n2genealogy.com/battles/770911.html>.
- “War Comes to Revolutionary Pennsylvania.” Pennsylvania: A History of the Commonwealth. Eds. Randall M. Miller & William Pencak. University Park: The Pennsylvania State UP, 2002. | <urn:uuid:0550e14d-94db-459d-9f51-e6c9ec89e1f4> | CC-MAIN-2021-21 | https://www.pabook.libraries.psu.edu/literary-cultural-heritage-map-pa/feature-articles/battle-brandywine-which-rebels-were-defeated | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989766.27/warc/CC-MAIN-20210512162538-20210512192538-00254.warc.gz | en | 0.959836 | 5,052 | 3 | 3 |
Jason Hamzy is an artist, educator, and co-owner/operator of Little Schoolhouse in the Woods outdoor preschool. He has been running Little Schoolhouse with his wife, Lee for four years. In that time, he has completed his teaching degree, earning a Bachelor's Interdisciplinary Studies Pre-K through 5th. In spite of all that he learned in college, Jason is amazed at how much he has learned from his wife, Lee. Her experience of more than a decade of working with children, in a Waldorf setting, and at home, has shown him what true dedication to the education of the whole child is all about. The kids call him Mr. Jay, and he hopes that he is fortunate enough to hear that name for many more years.
Now is the time to hunt Spring Ephemerals. Spring Ephemerals refers to perennial plants that emerge quickly in the spring and die back to their underground parts after a short growth and reproduction phase. In a deciduous forest, like ours, they grow before the trees have their leaves allowing sun to reach the forest floor. This is a very short amount of time, so take advantage of the timing, and head out today and for the next week or so.
Here is an excellent Ohio Spring Wildflowers downloadable field guide from the Ohio Division of Wildlife.
Please remember, these flowers only grow once a year. Please stay on the paths while out searching and do not pick them! We want them to be there for many springs to come. Take nothing but pictures and leave nothing but footprints.
How many can you find?
Here is a list of the ephemerals that we often see here at the Little Schoolhouse in the Woods or in nearby parks. Refer to the field guide for pictures to help identify them. Remember to look at the leaves, several flowers have look a likes. Happy Spring hiking!
Harbinger- of -Spring
White Trout Lily
Wild Blue Phlox ( can also come in pink)
Lesser Celandine (highly invasive, not native)
Dutchman’s – breeches ( one of Ms. Lee’s favorite 😉 )
A Note from Mr. Jay- Are you interested in a formal lesson plan for grades 1 through 3, but easily adapted for older and younger students? I use Ohio New Learning Standards and the National Geographic Learning Framework to create a formal lesson plan. This is good for any teacher whose administration requires formal lesson plans, or for alternative educators who seek academic language to support their strategies. Let me know what you think in the comments below. I love feedback!
Ms. Lee has put together this handy list for identifying local birds. It has links to the online Audubon Field Guid for each bird on the list for information and identification purposes. We love the Audubon Society of Ohio and hope that you consider becoming a member and supporting this vital organization.
If you would like an activity that is simple, fun and helps promote following instructions and perseverance while developing fine motor skills, try making a Magic Cord with your children!
This activity is excellent for teaching children how to follow instructions and develop their fine motor skills. They must hold on tightly to the ends while they are twisting them together. Watch out! The strings tend to fly out of their fingers as they practice tightly grasping with their fingers while alternating hands. You have to start over, sometimes several times. They also must grasp tightly as they pass it back to the teacher. Don’t get discouraged, but laugh and enjoy the process. Model how much fun it is to try and how the next time, it will be easier. Keep doing it until it is well twisted.
Take three or more pieces of colorful yarn at least 2 feet long, or more, depending on your preference, and tie them together at one end.
Have child hold the knotted end while you hold the loose end.
Have the child begin twisting their ends in one direction. You begin twisting your end in the opposite direction. You can count to 100 or sing a song, but twist it a lot. It will start to fold, so you’ll have to hold on tight and keep it pulled straight while you continue to twist. If you drop it, pick it up and start twisting it again.
Grab the middle of the twisted string and have the child hand you their end. You hold both ends and countdown “3, 2, 1”. Then you let goof the knotted end while holding onto the loose ends and the knotted end. The yarn twists up into a thicker string. Tie all the loose ends and the knotted end so that it won’t come undone.
Voila! A magic cord!
This is a great open-ended object for playing with or makes a cute necklace or bracelet.
NOTE: Actively supervise children so that they aren’t napping with it, putting it around their neck or putting it in their mouth or wrapping it tightly around their extremities.
If you would like a formal lesson plan for your administrators or because you’d like more details, click here: Magic Cord Lesson Plan
I recommend this lesson plan for folks in the Cincinnati region. These fossils are common in the creek beds all around southwest Ohio. In this time of school closures and physical distancing, getting outdoors in nature is the perfect way to beat the house bound blahs! Steer clear of the playgrounds and go out into the woods or creek walk.
Here are some beautiful natural places in Cincinnati you may want to explore:
I wrote this lesson as part of my National Geographic certification, and I am sharing it here with you. I hope you get outdoors with your child or students and enjoy learning a little bit about the history of this area while doing some hands on, fun, nature-based activities.
Some great places to check out would be:
These three links will take you to some ID sheets to help you sort. Laminate them and reuse them!
How A Healthy “No” Establishes Positive Boundaries
One of the skills I had to learn as an early childhood educator was how to say, “No”. At first, I was uncomfortable denying a young child anything. “Yes” is a much easier path. My impulse was to do everything and give anything to and for these cute kids. I certainly didn’t want to see them experience discomfort. It didn’t take long before I realized that children often experience discomfort because they want something to be a certain way that it cannot be. I also learned that a child needs to know their boundaries in order feel secure. Having a set of clear expectations is the foundation for healthy early childhood approaches.
Boundaries were made to be tested.
Children test the boundaries because they are searching for what is ok and what isn’t. It’s perfectly natural and a developmentally necessary process. They learn these boundaries through a series of trial and error tests. If I want these kids to be safe and I don’t want to lose my mind during transitions, I have to learn how to say, “No”. What I’ve observed is that children are grateful for limits. They know it keeps them safe and they know that a competent, caring adult is taking care of them.
“No” means “no”, not a punishment.
Saying “no” isn’t about punitive discipline or withholding necessities or privileges from children. Saying “no” is about setting clear boundaries that work for child and caregiver. Ultimately, it is about clear communication. When we are able to articulate what is acceptable behavior in clear, simple terms, children are better able to function. Children feel secure knowing their boundaries. That doesn’t mean they won’t push those boundaries. It is the educator/caregiver’s task to navigate these boundaries with respect for the individual and to be fair minded in their approach. I strive for a balance between being firm and being a big softy.
What are healthy boundaries?
When we are preparing to go outside during cold, wet weather, it is important that the children are dressed appropriately. It can be the difference between a fun, productive excursion into the woods and a miserable, difficult time. In the past, if we have gone out into the forest with a group of children and realized that one child put on their sneakers instead of their insulated winter boots, it really changes our desired outcomes. Once their feet are damp, they become uncomfortable and unpleasant. While this is a great example of “natural consequences of actions”, it isn’t fair to the other children, not to mention the teachers.
These experiences have led me to be explicit in my instructions. “Everyone put on your snow boots after you have put on your snow pants” lets them know exactly what is expected of them. And I always double check that all children are wearing appropriate footwear before we leave. Healthy early childhood approaches uses clear communication of expectations and boundaries.
What does that look like?
Here is an example of a a healthy “no”: Student “A” is excited about his new licensed character shoes that light up. As we gear up to head outside, “A” decides that their snow boots are too big and clunky. They want to wear their new sneakers. A friendly but firm “no, those shoes won’t keep your feet warm and dry like your snow boots will” may be all that needs to be said. In case this isn’t enough of an explanation, we must remain calm and firmly insist that the sneakers are not an option. It is cold and wet outside and the sneakers will not be appropriate footwear.
We refrain from arguing. An explanation is acceptable, though. If Student “A” decides that this is an issue worth crying about, that is ok. Disappointment is a real emotion and we must allow children to experience the entire range of feelings, including the unpleasant ones.
The educator’s role in establishing boundaries
Our job is to stay grounded and to be the emotional anchor for this child. We may have to assist more than we normally would in this case. I might say, “I can see that it upsets you. You will get to wear your cool new kicks later on, but right now, we are all going outside together. Can you put your boots on yourself, or do you need me to help you?” By offering a choice between independence and and assistance, we recognize the child’s need for ownership over their choices while only offering realistic options.
An unrealistic option may be to offer letting them go outside barefoot or allowing them to wear their new shoes. In the end, giving in to tantrums allows children to become tyrants, but being firm in our resolve while compassionately assist them in make good decisions. This helps them to manage their expectations and to regulate their own emotions.
It is important to only offer what we are willing to follow through on. If the child is unable to calm down and put on their snow boots, we can calmly let them know that we are about to help them put their boots on. We do so in a gentle, calm manner, never angry or irritated. Children can sense impatience and it confuses and frustrates both parties, often exacerbating the situation. We are adults who are calmly and lovingly helping them to get ready to go outside and play. The security of a calm, rational adult caring for them allows children to just be a kid and play.
Interested in learning more? Take our Resilient Kids Course!
Lee and I will be leading a discussion course at UC’s Communiversity about the strategies we use to encourage resilient, healthy, confident kids. “The Power of ‘NO'” is the first of ten strategies we’ll share over the course of a 2 classes, 2 hours each. We’ll share anecdotes, specific words and phrases and real take away strategies for supporting healthy development. Each class will include time to share and discuss. Come join us and learn our healthy early childhood approaches for instilling grit in early childhood at our Resilient Kids Course.
This is what student-led curriculum looks like. While our focus is on social and emotional development during early childhood, educators should be responding to the needs of the children. We don’t decide that it is time to begin teaching letters, the children do.
When we observe the lunch table discussions center around the first letters of their names or the children show pride in spelling their names or we observe other indications through their play that they are ready, we look for a fun game to incorporate into the circle.
This is a simple, fun game to play for letter recognition in early childhood. This homemade box is our ABC Alligator, and we sing a little song:
“Alligator, Alligator, down by the lake,
Let ________ reach in and see letter what you ate!”
Each child gets a turn to reach in and pull out a letter. They either identify it, or ask for help from the group. We then come up with words that begin with that sound. Assessment occurs informally through observation, and there is no pass/fail. We want it to be fun, and there are no wrong answers. This is a low-risk, play-based, student-led game that the kids have a blast playing. It may be the beginning of letter recognition, phonics, and spelling for some, while reinforcing those skills already present in others. The mixed age group pairs well with the scaffolding of the developing reading skills, too. Children who have an answer learn impulse control while their friend figures out if they know the letter or if they want to ask for that help. We all have fun singing the song and coming up with words that begin with the letter.
One final note about academics in early childhood: We believe the focus in early childhood should be social and emotional health, developmentally appropriate circles and a focus on the natural environment. Having said that, we embrace an interdisciplinary approach that uses whichever pedagogy is most effective. This occurs through mindful observations of the children during free play and throughout the rhythms and routines of the day. Our curriculum focus is on meeting both the individual’s and the group’s needs in developmentally appropriate ways.
A soft breeze blows through the leafless trees while a gentle mist softens the air surrounding a small group of children. The sounds of laughter and playful shrieks echoes through the winter hillside as a small creek babbles by. At first, the children toss rocks into the shallow stream. Then, one jumps as high as he can and lands with a splash in the middle of the stream, splashing his mates. Instead of anger or irritation, the other children join in. They are laughing, climbing out onto nearby rocks and doing it again and again. In fact, they continue to splash and play in the creek for nearly an hour, happily enjoying the sensations of sight and sound as the water muddies and flows downstream.
There are two adults supervising all of this with quiet amusement. No, they are not irritated that the children are getting wet and they will have to go back inside soon. They understand that children are participating in important work: playing in nature. They also know that the children are dressed properly for the chilly, damp weather. They have on insulated, water-proof boots and one piece rain suits over warm layers that keep them dry and warm.
There are fewer and fewer opportunities for children to engage in outdoor play, especially in weather deemed “inappropriate”. Most public schools have policies that keep children inside on chilly or damp days. This is often in response to parents’ concerns over health and safety. The truth is, there is no bad weather, just bad clothes.
“No Bad Weather, Just Bad Clothes”
This is a well known phrase amongst outdoor educators. When learning takes place, it is important to remember that basic needs like physical well-being must be met. This is the basic principal behind Maslow’s Hierarchy of Needs. Children learn best when they are safe, clothed and fed. That is why it is so crucial for outdoor educators to ensure their students are dressed appropriately for the outdoors.
In a group of 7 or 8 children, if there is one child who is not outfitted properly, this is the child who is crying and ready to go home while the rest of the children are blissfully playing. Being able to play outdoors is a skill that children learn. Having positive experiences means being dressed appropriately so that they associate outdoor play with fun!
At Little Schoolhouse in the Woods, we have a favorite rain suit: Tuffo. That isn’t to say that there aren’t others, but our experiences with Tuffo have been awesome! And, no, we are not being compensated for this endorsement. For the children, a one piece suit like Tuffo’s Muddy Buddy work best. The children tend to fall and slide and play in ways that adults don’t, and the one piece design keeps them thoroughly dry. Plus, when sized right, there is room for coats and layers underneath. I don’t recommend tucking their pant legs into the boots, but, instead, keeping them on the outside of the boot. A dry kid is a happy kid!
Boundaries help children feel safe. Children often act out when they do not know what is expected of them. They spend a lot of time exploring and pushing the boundaries we set for them. This is a natural and normal process. As adults, it is our job to set limits and to clearly communicate them to our children.
Calm and Clear: The Captains of the Ship
It isn’t necessary to enforce these boundaries with punitive measures or “consequences”, either. Simply stating and consistently standing firm on the principals we set forth is all that is necessary. Repetition and consistency foster the security children crave. As adults, we must exercise patience in this process because it is a process. It isn’t a lesson that happens once, and then it’s over. We must often set limits and communicate them clearly many times before they are understood. Being matter-of-fact about these limits is important, too. This isn’t an emotional struggle, it is a safety issue.
“Hold my hand in the parking lot. I will keep you safe.” We are the adults, it is important to maintain a calm, firm hand on the rudder. We are in charge, and we know what is best. When we lose our cool and shout or punish, it shows a lack of control, and then who is steering the ship? “I see that you don’t want to hold my hand right now, so I will carry you. I love you and will keep you safe.”
The only time I find it necessary to enforce rules with punitive measures isn’t even actually punitive. It is an issue of safety. If a child is physically going to harm themselves or others, I may have to say, “You know you may not throw the blocks. I cannot let you hurt your friends. It is time to play elsewhere,” then I assist that child in finding another place to play. We should avoid getting frustrated or angry. Being firm shows confidence and lets children know that we are serious. We do not need to shout or threaten to be effective leaders. And, yes, it is okay to acknowledge a child is upset or distressed, but we mustn’t allow that to distress and upset us.
The Importance of Free-play
I am a big proponent of free-play. Children benefit from time to do what they want in a safe environment. Free-play does not mean anarchy, it means freedom to play and explore the world around them without interference from adults or danger. Our job is to keep them safe, not tell them how to play. Through this kind of play, children develop a healthy sense of independence.
Start Early. Hug often.
Boundaries are erected when the limits are clearly communicated. You don’t need a fence, you need to be clear and consistent with your expectations. It is harder to place limits once a child knows that there are none. I once heard this analogy: Our children need warm, tight hugs when they are younger, and, as boundaries are expanded, we can loosen those hugs. Once a child has unlimited freedom, it is nearly impossible to reign in those limits. We must have clear, consistent, and reasonable boundaries and expectations from children at an early age. As they mature and show competence in different areas, we may loosen restrictions and broaden boundaries appropriately. If we give children total freedom from the beginning, they don’t feel safe, and will not understand when we place new restrictions on them.
Mr. Jay is an outdoor educator with Little Schoolhouse in the Woods. He learned everything he knows about early childhood education from his wife and co-teacher, Ms. Lee (Follow her at https://www.facebook.com/littleschoolhouseinthewoods/). Mr. Jay holds a Bachelor’s degree in Early Education, PK-3, with a 4th and 5th grade endorsement.
Are we truly helping children when we do things for them? It depends on where the child is developmentally. According to Vygotsky, a prominent education theorist, children learn best when tasks aren’t too hard or too easy, known as the zone of proximal development (1978). The amount of help from adults and peers can be referred to as scaffolding (Wood et al. ,1976). Scaffolding refers to the assistance a person needs when learning a new task or concept. This assistance is removed when no longer needed. Being tuned into an individual child’s developmental readiness is a key factor for knowing when and how much scaffolding is needed for a particular skill.
It is important to focus on building a growth mindset through praise of effort, not achievement. Children will grow up and be able to put their shoes on by themselves. Our goal isn’t only to teach children how to accomplish the task, but to encourage them to develop a sense of perseverance. We do not praise children for putting their shoes on, we praise their effort.
Watching a child struggle to solve a problem on their own without interfering can be difficult to do. If you have been truly present with your child, and connected with them in a real way, it will be easier to let go and allow them to play by themselves or to work at mastering a task such as putting on their own socks. I know that I often want to swoop in and hand them that thing they are struggling to reach, or to switch their shoes so that they are on the right feet, but true learning comes from trial and error. That means they try, and sometimes, they fail. It is through the mistakes that children learn what to adjust for the next time. Because there will be a next time. That is when fostering perseverance pays off.
Here are some tips to promote perseverance:
Allow children the time and opportunity to dress themselves.
Let children solve puzzles and build block towers, train tracks, etc. all by themselves.
Know your child and only assist when necessary and as little as possible. Don’t immediately do something for a child when you see them struggle.
Allow children to drink out of regular cups and glasses. Sippy cups may avoid spills, but that is how children learn.
Stay calm and patient. Not only does this model the behavior we want from our children, but it lets them know we have confidence in their abilities to master tasks.
Building self-confidence and endurance in early childhood is a foundation for happiness later in life. In order to develop these characteristics, children need time and space to practice doing things for themselves. Life is hectic, and we are often on deadlines and schedules, rushing from one place or activity to another. Whenever possible, calmly, patiently allow a child to struggle. Avoid getting emotional or impatient, and just observe. If necessary, offer advice or a hand in moving forward, but as much as possible, allow children the opportunity to accomplish their tasks on their own. In the end, they will be better off by having struggled and built up their perseverance.
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
Wood, D., Bruner, J., & Ross, G. (1976). The role of tutoring in problem solving. Journal of Child Psychology and Child Psychiatry, 17, 89−100.
“When we treat children’s play as seriously as it deserves, we are helping them feel the joy that’s to be found in the creative spirit. It’s the things we play with and the people who help us play that make a great difference in our lives.” ~Fred Rogers.
“Children need the freedom and time to play. Play is not a luxury. Play is a necessity.” Kay Redfield Jamison (professor of psychiatry)
Life is busy. There are endless deadlines, obligations, appointments, work, school, and social commitments. Packing the kids up from one place, rushing to another, finding time to eat, getting the kids ready for bed, waking up in the morning and doing it all over again. This is normal. Especially if you have more than one child. Managing busy schedules can be very challenging for parents, but even more so for children. Children are strong and resilient, but have a need for down time. Not the kind that they get in the car, either. In fact, sometimes you have to just schedule time for doing nothing.
In fact, if you want to see the relationship with your child truly blossom, just sit with them. We don’t have to always read a book, or be doing something. Sometimes, just sitting and staring at the clouds can be a bonding experience. Just sit and let them play in the sandbox. Kick a ball back and forth. It is the simplest act for a child to play, but as an adult, it can be difficult sometimes. The decision to consciously and intentionally NOT impose our own ideas of what we should do, and allowing young children the freedom to decide what to do, can be the most beneficial decision we make as parents, educators, and caregivers . This is the heart of child centered, play-based education. The spontaneous and self-determined play that a young child engages in, is probably the most important time they spend. Free time promotes resilience, creativity, and problem solving skills. So, schedule time for unscheduled play time. Your kids will thrive, and you will see a positive difference in them.
Listen to what Temple University psychologist Kathy Hirsh-Pasek and others have to say here | <urn:uuid:3346f8fa-11cc-492c-85db-2bcf107f19c5> | CC-MAIN-2021-21 | https://littleschoolhouseinthewoods.com/author/admin/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00296.warc.gz | en | 0.962324 | 5,624 | 2.546875 | 3 |
Aligning inclusive, quality education with the Sustainable Development Goals (SDGs) was centre-stage on Friday, as the President of the UN General Assembly held a high-level interactive meeting for the International Day of Education.
“The education sector is wrestling with mammoth challenges worldwide”, said Tijjani Muhammad-Bande, in his message for the day.
Listing them, he said there was a “precipitate decline” in the quality and standards of education; a widening knowledge gap between students in technically advanced societies and those in developing countries; a crisis of learning in conflict zones; growing school bullying; and “the declining esteem of the teaching profession” overall.
Mr. Muhammad-Bande maintained that today’s education must “bridge the yawning gap” between the modern employment needs for specialized skills, and actual learning opportunities.
“School curricula have yet to anticipate and respond to workplace needs for hands-on, vocational, ICT applications, and sundry technical skills, while still advancing the traditional scholastic pursuits”, he stated.
Moreover, he highlighted, “the significance of the deficits in education outcome becomes obvious when viewed alongside the spiralling population crisis”.
Education in a crisis
The fate of school children trapped in conflict zones deserves even more urgent attention.
According to UNICEF, in 2017, 500 attacks were staged on schools in 20 countries worldwide. In 15 of those 20, troops and rebel forces turned classrooms into military posts.
Thousands of children were recruited to fight, sometimes made to serve as suicide bombers, or forced to endure direct attacks.
“The learning environment may also be rendered unsafe by gun-toting, machete-wielding, gangs and unruly youths, and by sexual predators on school premises”, Mr. Muhammad-Bande said.
And natural disasters pose additional threats to the learning environment.
Cyclones, hurricanes and storms are among the climatic conditions that periodically wreak havoc on school buildings and facilities, making learning difficult, if not impossible.
“The choices that education stakeholders make have direct impact on various social groups, particularly, disadvantaged groups like rural communities, the urban poor, persons with disabilities, and women”, said the PGA, noting that nearly two-thirds of the world’s illiterate adults are female, mostly in under-developed countries.
Choice also becomes critical in the struggle to elevate the status of the teaching profession, recruit competent and motivated teachers, and expose teachers to innovative techniques.
But there are bright spots: “Forward-looking education policies have contributed to the attainment of SDG targets in some countries”, asserted Mr. Muhammad-Bande.
And participants at this year’s International Day of Education are given the opportunity “to share international good practices in inclusive quality education”.
Partnerships are key
Education enhances the “analytical, inventive and critical thinking capacities of human beings”, the Assembly President said, adding that in the process, it accelerates each nation’s technological attainments and economic growth.
“When a society remains perpetually under-developed, it must among other things re-evaluate its education system”, said Mr. Muhammad-Bande. “If the system is dysfunctional or does not facilitate the acquisition of pertinent knowledge and skills, the economy will, at best, stagnate, and at worst, collapse”.
Bearing in mind the “tremendous amount of work” that lies ahead, he shared his belief that partnerships can play an important role in implementing and attaining the SDGs, which is why his office “has placed strong emphasis on engendering partnerships across key priority areas”, including education.
In conclusion, Mr. Muhammad-Bande urged Member States and other key partners to examine the feasibility and value-added support in establishing a network of key existing education networks to exchange information and ideas, “including sources of support, relating to all aspects of education”.
Power of education
“Education has the power to shape the world”, Deputy Secretary-General Amina Mohammed spelled out at the podium.
“Education protects men and women from exploitation in the labour market” and “empowers women and gives them opportunities to make choices”, she said.
Moreover, it can help change behaviour and perceptions, thereby fighting climate change and unsustainable practices. A quality experience in the classroom helps promote mutual respect and understanding between people; combat misperceptions, prejudice and hate speech; and prevent violent extremism.
“Without education, we cannot achieve any of the SDGs”, Ms. Mohammed flagged.
And yet, with 2030 looming on the horizon, the world is lagging behind, prompting the Secretary-General to issue a global call for a Decade of Action, to accelerate the implementation of the SDGs.
“The situation in education is alarming… because of the crisis in the number of children, young people and adults who are not in education”, as well as because many who are enrolled are not learning.
And refugees and migrants face additional challenges.
According to the UN Office of the High Commissioner for Refugees (UNHCR), the proportion of refugees enrolled in secondary education is 24 per cent, while only three per cent have access to higher education.
“We have the power to shape education, but only if we work together and really bring the partnerships that are necessary to provide quality education”, she concluded. “We have a duty to step up our efforts, so that quality education for all is no longer a goal for tomorrow, but a reality”.
Invest in education
Action for “the four Ps on which our future depends”, namely people, prosperity, the planet and peace, is imperative, according to the head of the UN Educational, Scientific and Cultural Organization (UNESCO), in her Friday message.
Although education is “a valuable resource for humanity”, Director-General Audrey Azoulay pointed out that it is “all too scarce for millions of people around the world”.
A global learning crisis, confirmed by the UNESCO Institute for Statistics, is “a major cause for concern as it is also a crisis for prosperity, for the planet, for peace and for people”, she said, urging everyone to take action for education “because education is the best investment for the future”.
UNESCO has been charged with coordinating the international community’s efforts to achieve SDG 4, quality education for all.
“First and foremost”, the UNESCO chief said, “our Organization takes action for people, by making education an instrument of inclusion and, therefore, of empowerment”.
Changing lives, transforming communities
“It changes lives, transforms communities and paves the way towards productive, sustainable and resilient societies in which children – girls and boys – can reach their full potential”, she expanded, urging everyone to strengthen their efforts towards a world in which every child receives a quality education that allows them to grow, prosper and be empowered, so that they can “make meaningful contributions to communities big and small, everywhere”.
Women’s Rights in China and Challenges
Women’s rights and gender discrimination have been problems in China for many years. Various restrictions have been imposed on women to suppress them in society. Income discrepancies and traditional gender roles across the country work to keep women in an inferior position compared with their male counterparts.
There are diverse sectors in which women face discrimination. Women of the past and present in China have dealt with unfair employment practices and have had to jump over unnecessary hurdles just to keep up with their male counterparts in society. The Chinese government claims to prioritize the promotion of gender equality, but in reality there is scarcely a sphere of life in which women are not being suppressed. In employment, men are mostly preferred over women for high positions. There are a number of contextual examples which demonstrate this discrepancy in the status of women throughout China, and whilst there has been a great deal of progress made in some elements of the popular sphere, other efforts have been brutally repressed by a government dominated by male influence. For example, women who have children do not always receive their pay while on maternity leave.
Throughout China’s history, men have been treated as the core not only of their families but of the country’s overall growth and development. In the post-Confucian era, society labeled men as the yang and women as the yin: the yang was seen as the active, clever and dominant half, while the yin was soft, passive and submissive. These ideologies are not as prominent today, but they persist enough to remain a problem.
The tradition begins at birth, with boys preferred over girls in China. A common opinion in the country holds that a son will grow into a more successful member of the family than a daughter, because pregnancy is a non-factor for him and he can choose almost any job he desires. This, of course, does nothing to support gender equality or women’s rights in China.
A survey conducted just last year found that 80% of Generation Z mothers did not have jobs outside the home; notably, most of those surveyed were from poorer cities. The same survey found that 45% of these stay-at-home mothers had no intention of going back to work, having simply accepted the role of caring for the house. Still, gender equality and women’s rights movements in China have begun to cut into the country’s history of patriarchal dominance.
Women’s Rights Movement in China
Since the Chinese government is not fully behind gender equality for women, the feminist movement remains active and stronger than ever. In 2015, the day before International Women’s Day, five feminist activists were arrested and jailed for 37 days. They were just five members of a much larger movement of activists fighting the traditional gender-role ideology that places females below males. These movements have begun to make real progress against gender inequality within the country. From 2011 to 2015, the “12th Five-Year Plan” set goals of reducing gender inequality in education and healthcare.
The plan also aimed to make senior and management positions more accessible to women. Xi Jinping, the current President of the People’s Republic of China, has pledged that the country will donate $10 million to the United Nations Entity for Gender Equality and the Empowerment of Women. Over the next five years and beyond, this support will help women in China and other countries build 100 health projects for women and children. On March 1, 2016, the Anti-Domestic Violence Law of the People’s Republic of China took effect, an improvement in China’s legislation on gender equality. In June of that year, ¥279.453 billion was committed to loans to help women overall.
“There are a number of contextual examples which demonstrate this discrepancy in the status of women throughout China, and whilst there has been a great deal of progress made in some elements of the popular sphere, others have been brutally repressed by a government dominated by male influence.
Mao Zedong’s famously published collection of speeches entitled ‘The Little Red Book’ offers a glimpse into the People’s Republic’s public policy in relation to women, as Mao himself is quoted as saying ‘Women hold up half the sky’ and, more overtly:”
In order to build a great socialist society, it is of the utmost importance to arouse the broad masses of women to join in productive activity. Men and women must receive equal pay for equal work in production. Genuine equality between the sexes can only be realized in the process of the socialist transformation of society as a whole.
China has been widening the gender discrimination gap in society through legalized means, and there is a desperate need to raise voices for gender equality.
Gender Pay Gaps during Pandemic: A Reflection on International Workers’ Day 2021
In past downturns, men rather than women were disproportionately affected by job losses. The harsh reality of this pandemic recession, however, is that women are more likely to be unemployed. Women have lost a substantial number of jobs, both because increased childcare needs caused by school and daycare closures prevent many women from working, and because their employment is concentrated in heavily affected sectors such as services (hospitality, restaurants, retail and so on). According to a study by Alon et al., women’s unemployment increased by 12.8 percent during the first period of Covid-19 (from March 2020), while men’s unemployment increased by just 9.9 percent. Changes in employment rates (which include transfers into and out of the labor force) follow the same trend, with women experiencing a much greater drop in employment than men during the recession. Similar trends have been seen in other pandemic-affected countries.
In Southeast Asia, where informal workers account for 78 percent of the workforce, women make up the majority of blue-collar employees. In Indonesia, the Philippines, Cambodia, Laos, and Myanmar, women make up a substantial portion of domestic workers, despite having low contractual status in informal settings. They are underpaid as a result of the pandemic, and the Covid-19 recession has diminished their standing in the workplace. Indonesia, one of the countries hit by the pandemic, has experienced something similar. With two-thirds of its female population in the active age group (between 15 and 64 years old), Indonesia should have tremendous potential for accelerating its economic development, but the reality is the opposite amid the never-ending pandemic. Since the pandemic began, many employees, mostly women, have lost their jobs or had their working hours shortened, which of course affects their daily wages. The wage gap between men and women also widened from March 2020 to March 2021, with women in the informal sector receiving up to 50% less than men, a clearly discriminatory outcome. Despite the fact that Indonesia ratified the International Labour Organization’s (ILO) Convention No. 100 on Equal Remuneration in 1958, fair and equal pay has yet to materialize, and the legislation seems to have been overlooked and inapplicable in a pandemic situation.
Furthermore, the problem does not end there. Apart from the pandemic, formal and informal workers are subject to different work systems and regulations. Women in both environments may experience low wages and unequal pay, but women who work in the formal sector have the capacity, experience, and communication skills to negotiate their salaries with their employers, while women in the informal sector do not. Women in informal work face a number of challenges, including a lack of negotiation skills and of a voice in fighting for their rights, particularly if they lack support structures such as labor unions. Corporate systems are also notoriously secretive about employees’ salaries, and this lack of transparency in wages continues to disadvantage women. Although the national minimum wage policy is set by the government, only a small number of female workers are aware of it.
Overcoming Gender Pay Gaps in Pandemic Conditions
In the spirit of International Workers’ Day 2021, there should be an organized and systematic effort to (at the very least) close the wage gap between men and women in this pandemic situation. International organizations and agencies have tried to convince national governments to abolish gender roles and prejudices; however, this is insufficient. As a decision-maker, the government must ‘knock on the door’ of companies and businesses to support and properly value work done disproportionately by women. Implementing transparent and equitable wage schemes is another important step toward changing this phenomenon. Real action must come not only from the structural level (government and corporations) but also from society, which must acknowledge women workers and not undervalue what they have accomplished, because in this Covid-19 situation women bear a “triple burden”: productive work (as workers), reproductive work (as wives and mothers), and community work (as members of society). Last but not least, women must actively engage in labor unions to press for gender equality in the workplace and have the courage to speak out for their rights, as this is the key to securing fair wages. When women are paid equally, their families’ incomes rise, and they contribute more to their families’ well-being.
Latvian human rights activists condemn homophobia in China, Latvia and the world
The issue of the human rights of LGBT persons is like a hot potato – hard to spit out, but also hard to swallow. Although the majority of the public has nothing against the LGBT community, people are afraid to allow its members the same human rights everyone else enjoys.
Governments and politicians also clash when it comes to fully recognizing the human rights of LGBT persons – and communist China is no exception. Interestingly, the Chinese Communist Party maintains a double standard on this issue. On the one hand, during UN meetings China always reproaches other nations for homophobia and violations of LGBT rights. On the other hand, China has never managed to eradicate homophobia at home, but has instead furthered it, for instance by banning Eurovision broadcasts in China and by trying to ignore the existence of a Chinese LGBT community.
The Chinese Communist Party has become seriously entangled in its own ideology – as I have already written, Chinese representatives have no shame in criticizing other countries’ discrimination against people with a non-traditional sexual orientation, stressing that China does not consider homosexuality a mental illness. Moreover, the Chinese government has publicly stated that China supports the activities of LGBT organizations. But this is simply not true! Although on the international stage Beijing acts as a protector of the human rights of LGBT communities and agitates for the equality of gays and lesbians, in China itself LGBT and women’s rights activists are being repressed, detained and held in labor camps. Thus, Beijing is doing everything in its power to suppress women’s rights and human rights in general.
The most pathetic thing in all this is that Beijing has always voted against all UN initiatives and resolutions that concern the recognition and establishment of human rights for LGBT persons, as this would draw even more attention to the violations of human rights in China itself.
In this regard, in solidarity with Chinese LGBT representatives, the leading protector of LGBT human rights in the Latvian Russian Union (LKS) party, Aleksandrs Kuzmins, and one of the LKS’s leaders, MEP Tatjana Ždanoka, have expressed concern over the recent homophobic attacks in Latvia. They are urging citizens of Latvia and the world to attach a rainbow flag next to the ribbon of St. George during the upcoming 9 May Victory Day celebrations, thus commemorating members of the LGBT community who died during World War II.
Kuzmins stressed that members of the LGBT community also fought against Nazi Germany during WWII, adding that it is no secret that thousands of gays and lesbians in the Soviet army fought shoulder to shoulder for the freedom of their motherland. These people were, however, repressed and exiled to Siberia after the war by the Stalin regime, and most were tortured to death in gulags, as information recently acquired from Moscow’s archives confirms.
Human rights activists from the LKS believe it is time for people to change and to talk openly about the mistakes of the past – we no longer live in the Middle Ages, and we should discard ancient dogmas and stereotypes about the LGBT community, lest more people fall victim to intolerance and hate.
On the eve of the Victory Day, the LKS urges global leaders to admit the severe mistakes that have been made and to end the repressions against their own LGBT communities.
Information Technology (IT), one of the Career and Technical Education (CTE) endorsement areas for grades 6-12 in Illinois classrooms, helps students become college- and career-ready through critical thinking and the real-world application of skills built into IT courses, especially work with computers and telecommunications technology focused on storing, retrieving, and sending information. By taking IT-specific courses, students gain the academic knowledge and technical skills used by industry leaders.
Source: Adapted from https://www.isbe.net/Pages/Info-Tech.aspx
Occupational Outlook Handbook https://www.bls.gov/ooh/
At minimum, Illinois IT focuses on the following areas:
- Information Systems/Technology
- Digital Literacy
- Interpersonal and Leadership Skills
- Career Development
- Computer Concepts/Programming/Networking
- Web Page/Media Development
ILCTE Innovative Lessons
- 5E CTE Classroom Lessons (all CTE endorsement areas)
- For all CTE endorsement area lessons, click here https://www.ilcte.org/index.php/lessons
*Note: Hyperlinks to Microsoft Word or Google Doc versions of these lessons can be found on the page immediately following the title page OR at the conclusion of the "teacher version".
The National Career Clusters® Framework serves as an organizing tool for Career Technical Education (CTE) programs, curriculum design and instruction. There are 16 Career Clusters in the National Career Clusters Framework, representing 79 Career Pathways to help learners navigate their way to greater success in college and career. The framework also functions as a useful guide in developing programs of study bridging secondary and postsecondary systems and for creating individual student plans of study for a complete range of career options. As such, it helps learners discover their interests and their passions, and empowers them to choose the educational pathway that can lead to success in high school, college and career.
There are 16 nationally recognized Career Clusters; Illinois has 17 Career Clusters (Energy is the 17th). Illinois IT teachers may want to consider the examination of these specific career clusters in the development of their program.
Counselors, advisors, teachers and other educators can use the PaCE document to identify what types of experiences and information a student should have in order to make the most informed decisions about college and career planning. Organized from 8th through 12th grade, the Illinois PaCE Framework explains what students should be supported to do and what they should know at the end of each grade relative to three key domains: Career Exploration and Development; Postsecondary Education Exploration, Preparation, and Selection; and, Financial Aid and Literacy.
Diversity, Equity, and Special Populations
Illinois is committed to giving every student the opportunity to be supported by highly effective teachers and school leaders, understanding that these are the two most important components of student learning. Illinois is therefore focusing on developing highly effective teachers and leaders who are prepared to meet the instructional needs of each child, including children with special needs and English Language/Bilingual learners, whether they come from high-poverty or low-poverty schools. To meet the needs of its lowest-performing schools, Illinois is preparing teachers and principals to focus on differentiated instruction, student learning, and school improvement. Ensuring that teachers and principals are well prepared to work in high-need schools (HNS) will help reduce inequity among schools, and providing induction, mentoring, and professional development programs focused on the instructional needs of children will further support student learning in HNS. (Source: https://www.isbe.net/Documents/equity_plan.pdf#search=equity)
Resources for Teachers:
What is a Program of Study?
According to the definition put forward in Perkins V, a program of study must, at minimum, be a coordinated, non-duplicative sequence of academic and technical content at the secondary and postsecondary level that:
- Incorporates challenging, state-identified academic standards
- Addresses academic and technical knowledge, as well as employability skills
- Is aligned to the needs of industries in the state, region, Tribal community or local area
- Progresses in content specificity
- Has multiple entry and exit points that allow for credentialing
- Culminates in the attainment of a recognized postsecondary credential.
IT Plan of Study
The Common Career Technical Core (CCTC) is a state-led initiative to establish a set of rigorous, high-quality standards for Career Technical Education (CTE). The standards have been informed by state and industry standards and developed by a diverse group of teachers, business and industry experts, administrators and researchers. Forty-two states, the District of Columbia and Palau participated in the development stage of the CCTC, which was coordinated by Advance CTE. The development of the CCTC was a multi-step process that incorporated input from approximately 3,500 individuals representing K-12 education, business and industry and higher education from across the nation.
Career and Student Organizations
Professional and National Organizations
State and National Associations
- American Society for Information Science and Technology (ASIS&T)
Interdisciplinary organization of 4,000 members from such fields as computer science, linguistics, management, librarianship, engineering, law, medicine, chemistry, and education. Focuses on techniques and technologies in the fields of library and information science, communications, networking and computer science.
- Association for Computing Machinery (ACM)
ACM is the world's oldest and largest educational and scientific computing society.
- Association for Women in Computing
Nonprofit professional organization dedicated to the advancement of women in the technology fields. Events calendar, articles on a variety of topics.
- Association of Information Technology Professionals
Provides quality IT related education, information on relevant IT issues and forums for networking.
- Information Technology Association of America (ITAA)
Bought out and now operating as TechAmerica. A trade association representing the U.S. IT industry; the site offers information about the industry and its issues, association programs, publications, meetings, seminars, and links to other sites.
Online Learning Resources
Adobe Spark - Adobe Spark empowers students and teachers to easily create and share visual stories.
Banzai Financial Literacy – Aligns to Illinois’s financial literacy curriculum. Three levels: elementary, teens (middle/junior high and high school), and Plus (high school, ages 16+). Includes insurance, retirement, buying a house, taxes, life changes, borrowing and credit, smart living, and starting a business.
BINGO Baker – Just type your words into the grid on the left. You can give your game a title and change the BINGO column headings too. Then click the Generate button to generate bingo cards. Features: drag and drop images from your computer to liven up your card. This is NOT a free resource; however, a lifetime membership is available.
Blogger – It’s easy and FREE! Create a beautiful blog that fits your style. Choose from a selection of easy-to-use templates – all with flexible layouts and hundreds of background images – or design something new!
Discovery Education Experience provides engaging high-quality content, ready-to-use digital lessons, creative collaboration tools, and practical professional learning resources to give educators everything they need to facilitate instruction and create a lasting educational impact in any learning environment.
Epicbooks – Fuel curiosity with over 40,000 books, audiobooks and videos. It is FREE for educators. Parents get a 30-day free trial.
Wix – Create a portfolio website and start displaying your work online, with access to advanced editing tools and expert features that will make your site stand out.
GCF Global - Includes: computers, Office Suite, Google Suite, Career Planning, Job Search, Workplace Skills, Work Life, Money Management, Everyday Life, Reading and Math. Offering online lessons for FREE.
ICEV - Career & Technical Education (CTE) curriculum and instructional materials.
- Agricultural Sciences
- Architecture, Construction, Transportation, and Manufacturing
- Business, Marketing, Finance, IT, and Media
- Career Exploration
- Family and Consumer Sciences
- Health Sciences Technology
- Law, Public Safety, Corrections, and Security
Illinois Digital Educators Alliance, the state affiliate of ISTE, is the largest organization in the state devoted to the use of technology in education. The mission of IDEA is to inspire, connect and provide the educational community with opportunities that transform teaching and learning through technology.
Comprehensive K-12 curriculum. This resource is NOT free.
MyCAERT provides teachers with an integrated online system to Plan, Document, Deliver, and Assess Career & Technical Education instruction. It not only allows access to a complete selection of instructional components, but also serves as a classroom organizational and management tool.
P-Tech - FREE digital learning on the latest technologies, designed for teachers and students! Units that can supplement curriculum.
Read Theory – Improve your students reading comprehension. Personalized reading comprehension exercises for K-12 and ESL students. This is a FREE resource.
Read Works – Readworks is driven by cognitive science research. FREE content, curriculum, and tools to power teaching and learning for K-12 students.
Rocket Math is a basic math curriculum and worksheet-based program that is the best learning tool and math program for kids. This resource is NOT free.
Many free resources to support the Common Core and to meet 21st Century Skills, including assessment and rubrics, virtual field trips, and internet safety.
VocabTest.com offers free vocabulary tests to the eager student ready to learn; they are the best way to boost your verbal skills.
Math: Top-rated math program created for teachers, by teachers. The only curriculum top-rated by EdReports that unites hands-on instruction and immersive digital learning. Aligned to Eureka Math / EngageNY.
Explore My Future (Career Related Resources)
Resources for students to learn more about themselves and to explore future career possibilities
Career exploration program includes a validated aptitude test and interest assessment; results are used to guide career exploration using their career planning tools.
Career Explorer – The world’s best career test. Discover your top career matches! This is NOT a free resource: it costs $48 per year for teachers and $36 per year for students.
Career One Stop – Your source for career exploration, training and jobs. Sponsored by the U.S. Department of Labor. This is a FREE resource.
Career Pathways Virtual Trailheads
To support students’ work-based learning experiences during e-learning and while in school, the P-20 Network has created a series of videos featuring people from a wide range of occupations. The videos allow students to learn about occupations within the various career pathway endorsement areas, gain knowledge about workplace skills, and receive advice from these professionals.
Career Prepped – Get Career Prepped and get noticed! You can build your skills and create career portfolios to prove your soft and hard skills mastery. This is NOT a free website.
Career Safe Online – This website has training for OSHA, Cyber Awareness and Employability Training. This is NOT a free website.
The place to explore careers related to your strengths, skills and talents. There are six career clusters that can be explored: Arts & Humanities, Business & Information Systems, Engineering & Technology Services, Health Services, Nature & Agricultural Sciences, and Human & Public Services.
Degree Game Show – This website offers a slideshow game on career planning. This is a FREE resource.
A free personality test that will show you which of the nine personality types suits you best. It takes only 10 minutes online and is free.
Everfi is a platform for creating community impact that’s customizable, measurable, inspiring, and easy; Everfi offers Workplace Training, Financial Education, Higher Education and Community Engagement modules.
Explore Health Careers – ExploreHealthCareers.org is a collaboration between today’s health professionals and leading health care associations designed to help people like you start down the road toward a career in health. Here you’ll find the latest health career information and tools to guide you as you prepare for a future in healthcare. This is a FREE resource.
G-wlearning - This site provides an abundance of interactive activities to support your learning beyond the classroom. There is something for every CTE class including career exploration. This is a FREE resource.
Resources are available to assist you in offering Claim Your Future® to your students. It has games for students.
My Next Move – What do you want to do for a living? Tell us what you like to do. What do you want to be? This is a FREE resource.
Nepris virtually connects educators and learners with a network of industry professionals, bringing real-world relevance and career exposure to all students. Nepris also provides a skills-based volunteering platform for organizations to extend education outreach and build their brand among the future workforce.
The Question Formulation Technique (QFT), created by the Right Question Institute, helps all people create, work with, and use their own questions — building skills for lifelong learning, self-advocacy, and democratic action.
Read to Lead – For grades 5-9; matches Lexile levels and college/career readiness anchor standards. Learning games and lessons that put middle school students in charge of a workplace, where they learn that being the boss is demanding, aspirational, and involves a lot of reading and conversation. This is a FREE resource.
Star Assessments – They have a CTE section. Career Technical Education (CTE) programs are sequences of courses that integrate core academic knowledge with technical and occupational knowledge and skills, providing students a pathway to postsecondary education and careers. CTE teaches transferable workplace skills in applied learning contexts, offers opportunities to explore high-demand career options, and gives students the technology and skills needed for success in adult life. Whether students take one CTE course or enroll in an entire CTE program, CTE is an important part of every student’s well-rounded education: it builds the transferable skills that lead to success in career and college, shows students how their skills apply to today’s occupations, and gives them a realistic picture of their future in the world of work.
Understand who you really are! Powerful, scientifically validated personality tests to help you grow and find your way in life. Also includes a career profiler.
Boomerang (Chrome Extension) - Millions of Gmail™ and G Suite™ users count on Boomerang for easy, integrated scheduled email sending and reminders. The service allows you to schedule emails to automatically send in the future, so you can write an email now, and the service will send it tomorrow morning at 6 AM, or next week while you’re at the beach, without you needing to be online.
Ditch That Textbook - list of available Google Earth Virtual Field Trips.
Dualless Extension – For those who don’t have a dual monitor. Dualless is a poor man’s dual-monitor solution: it splits your browser window into two with just two clicks, and the ratio can be adjusted to your needs. You might have a browser window on the right occupying 30% of your screen, with the rest of the space left to Google+. The extension simulates a dual-monitor environment. This is a Chrome Extension.
Google Classroom is a FREE web service developed by Google for schools that aim to simplify creating, distributing and grading assignments in a paperless way. The primary purpose of Google Classroom is to streamline the process of sharing files between teachers and students.
Google Classroom Cheat Sheet for Students
If you plan to implement Google Classroom as a distance learning platform, this is a great resource for your students that provides basic information on the various tools Google Classroom provides.
Google Classroom Cheat Sheet for Teachers
This document is a great resource for teachers that would like to learn the basics of Google Classroom for distance learning.
Google Keep (Google App) – Use it a LOT. It is like having Post-it notes all gathered in one place.
Google Meet and Livestream Directions
Real-time meetings by Google. Using your browser (you will need to use Google Chrome), share your video, desktop, and presentations with students. The linked document is a great step-by-step direction sheet for students learning how to do Google Meet Livestream for school.
HyperDocs are Google Docs that are self-contained lessons or units. HyperDocs contain questions with links to videos, infographics, websites, or other resources to help the students discover new information. Other uses for Hyperdocs can be building an extended lesson or unit. HyperDocs can also be used for substitute teacher plans, video or podcast playlists, learning centers or stations.
Hyperdocs Check Lists – a list of checks for checking your hyperdoc. This is a FREE resource.
(Google Suite) Jamboard is G Suite’s digital whiteboard that offers a rich collaborative experience for teams and classrooms. Watch your creativity unfold: you can create a Jam, edit it from your device, and share it with others. Everybody can collaborate on the Jam anytime, anywhere. For businesses and schools that use Jamboard hardware, you can use your phone or tablet to join or open a Jam on a nearby board.
Move It Chrome Ad-on - Busy working on your computer? Spending hours searching the internet. Get active with Move It, the easiest way to be reminded that you should integrate a break into your work online. Simply add the Move It extension to your browser, set the notification interval, and your screen will present you with a random brain break and exercise to complete. Once you complete it, hit done and the next will arrive after the designated interval elapses again.
A sidebar add-on for Google Slides that lets a teacher stay in presentation mode while communicating with students: giving formative assessments, creating custom questions, adding audio to a slide, and watching each student’s reaction to the material on a live dashboard. Pricing: free, $150/year, or custom for districts; the options are listed on the website.
Quizizz - (connects to Google Classroom) Engage everyone, everywhere! FREE tools to teach learning anything, on any device, in-person or remotely.
A Google Chrome extension that allows teachers to record their voice and screen, and even offers drawing tools. Teachers can record lessons, assignment solutions, and verbal student feedback. The free version allows videos up to 5 minutes; the premium version is $29 per person, and schools can request a special quote.
Web Paint (Chrome Extension) – Web Paint provides easy to use drawing tools that let you draw shapes, lines, and add text to live web pages and take screenshot (touch screen supported): Pencil tool - draw a custom line with the selected line width and color.
WhatsApp – Messenger, or simply WhatsApp, is an American freeware, cross-platform messaging and Voice over IP service owned by Facebook, Inc. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other media.
Resources to enhance instruction and increase student engagement/participation
10 Tips for Teaching Online: Practical Advice from Instructors
With the right tools, some creativity, and a healthy dose of patience, you can master the move online.
FREE, Book Creator offers a simple way to infuse creativity throughout the curriculum, motivating students to become published authors and helping them develop future-ready skills.
Design anything in minutes with thousands of beautiful templates and images. The “basic” plan is FREE. It offers 250K+ free templates, 200K+ free photos and elements and 1000+ fonts. There are 100+ design types for social media posts, presentations and more! The “pro” plan costs $9.95 per month and offers more templates, photos & elements than the basic plan.
Doceri Interactive Whiteboard
Doceri does it all! Control your lesson or presentation live with AirPlay or through your Mac, PC or iPad. Annotate a Keynote or PowerPoint or present your original Doceri project. Great for student projects, too. CREATE hand-written or hand-drawn Doceri projects on your iPad, using sophisticated drawing tools and the innovative Doceri Timeline.
We share evidence and practitioner-based learning strategies that empower you to improve K-12 education.
Remote learning seems a lot more doable. Students are engaged and are asking to keep learning. Assignments are taken care of and graded for you. Remote lesson plans with your content can be created in minutes.
In the HyFlex course design, students can choose to attend face-to-face, synchronous class sessions or complete course learning activities online without physically attending class. HyFlex lets students engage with material at the time they see or hear it.
Kahoot! is a game-based learning platform that makes it easy to create, share and play learning games or trivia quizzes in minutes. Unleash the fun in classrooms! Sign up for FREE for the basic plan or $36 per year for the pro plan.
Know it All
Knowitall features a wide assortment of over 8,600 media assets for pre-K-12. The content has been optimized for tablets and mobile devices for one-to-one learning. As of May 15, 2020, you will find approximately 4,821 videos, 1,333 audio files, 286 photo galleries, 1,781 photos, 79 interactives, 62 teacher resources, and much more. It covers career education, English Language Arts, Health Education, Math, Science, Social Studies, Technology, Visual & Performing Arts, and World Languages.
Lalilo is an online tool for K, 1st and 2nd grade teachers and students. The Lalilo website is accessible on tablets, iPads and computers. Lalilo works in school environments or at a distance. This resource is NOT free.
Commit to digital equity with digital literacy for all students. Teach students to create with technology and harness it to power learning.
Mentimeter is a cloud-based solution that allows you to engage and interact with your students in real-time. It is a polling tool wherein you can set the questions and your students can give their input using a mobile phone or any other device connected to the internet.
MobyMax is a wonderful tool for differentiating learning for all students. Because it meets students at their level, it challenges students above and on grade level and remediates students below level. MobyMax also allows teachers to set goals with their students and celebrate their successes with them.
Nearpod is an instructional platform that merges formative assessment and dynamic media for collaborative learning experiences. You can upload tech-enhanced materials or customize over 7,500 pre-made, standards-aligned lessons for all K-12 subjects. Students complete assignments independently while you gain insights into students’ understanding with post-session reports. Easily integrates with your LMS (i.e. Google Classroom, Canvas, Schoology and more).
OnShape is the only product development platform that unites CAD, data management, collaboration tools and real-time analytics. This resource is NOT free.
A tool that sets up private accounts for students. Costs $12/month for a teacher. Teachers may post assignments, and students can create work and submit their assignments back to the teacher. Teachers retain full control of student and parent access.
Free quiz tool that can be used on any device, in class or online. Never grade a quiz again. The website even has pre-built quizzes for different subject areas, including CTE.
Allows teachers to create study material or use existing material, and tracks student progress through it. 30-day free trial; otherwise $35.99/year.
A complete Learning Management System (LMS) that integrates with many other educational tools. A free one-teacher offer is available; district-wide package pricing requires further inquiry.
An instant student-response tool that allows teachers to engage students with simple polls and quizzes. Free for one classroom group. Paid plans available.
Spin the Wheel random picker
Interactive tool that allows the teacher to make a decision at random. It has three modes: random picking, random, accumulation. Free.
Create surveys online for free. Unlimited online surveys. It is an easy & powerful tool.
Wheel of Names
You can make your own wheel of names. It can be a fun way to assign group projects. Spinning the wheel can identify the group’s members!
Amaze your students with smarter worksheets. Wizer.me is a platform where teachers build beautiful, engaging online worksheets. Teachers and students can create free accounts.
Online platform to host meetings, webinars, conference rooms, chat, and phone system. Free version.
Allows users to run operating systems other than the one installed on their computer. Ideal when teaching different operating systems.
If you are using Zoom for classroom meetings, this is a useful resource for everyone to review before partaking in an online meeting.
Resources for creating videos and instructional approaches for the classroom
Adobe Rush App
This tool is available for PC, iOS, and Android; it gives users the ability to create polished, edited videos. Free use of the tool with three exports; otherwise it costs $9.99/month.
Animated Educational Site for Kids - Science, Social Studies, English, Math, Arts & Music, Health, and Technology.
EDpuzzle is an incredibly-easy-to-use video platform that helps teachers save time, boost classroom engagement and improve student learning through video lessons. Use videos from YouTube, Khan Academy, Crash Course and more. If you'd rather record and upload your own video, go for it! EDpuzzle also collects data as students watch and interact with the video. Best of all, it’s completely FREE!
Flipgrid is an easy-to-use social learning, video making website that allows educators and students to make video clips. A teacher can create a grid or a series of grids that asks students to discuss topics or questions. The teacher can set the requirements of the responses to the subject, which could include time limits, attachments, video, pictures, and several other settings. For educators it’s FREE
Say it with video! The expressiveness of video with the convenience of messaging and it’s FREE. Communicate more effectively wherever you work with Loom.
PlayPosit
PlayPosit, Inc. operates an online learning environment for creating and sharing interactive video lessons. Its platform enables teachers to create interactive lessons, and its video solution enables corporations to train employees, partners, and customers. The company’s platform is designed for K-12, corporate, flipped, and blended environments. This resource is NOT free.
Video recording service that allows teachers to record their screen and webcam simultaneously. User-friendly. No download required. Video can be published immediately. Free version. Premium versions for schools available with many price variations.
Watch purified YouTube videos. Just for teachers, professors, social influencers, homeschoolers and parents. The Viewpure Members Area is the place where Viewpure can give you additional tools and resources. The Members Area is FREE. There is a lot of cool stuff in there for clean viewing of videos.
The end of passive viewing! Engage viewers by integrating quizzes, polls, and CTAs into videos. Get Started. Take your videos to the next level. A new type of video content. Ask questions, multi-choice quizzes, and collect feedback through your videos. Increase engagement.
Large library of videos that can be used for supplemental instruction. Users can also upload their own videos. Free.
The following documents have been publicly shared online.
Web Page Design | <urn:uuid:b935d145-5f40-471e-9fe0-8c9ae8f6f48e> | CC-MAIN-2021-21 | https://ilcte.org/index.php/it | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00534.warc.gz | en | 0.911306 | 5,989 | 3.515625 | 4 |
by Geoffrey Bleakley
The Valdez Trail provided the first overland access to much of interior Alaska. This essay focuses on the development of overland transportation along the trail corridor, describing aboriginal use, American exploration, trail construction, and later route improvements and maintenance. This is an IN-DEPTH essay outlining the history of the Valdez Trail.
Indigenous Use of the Valdez Trail Corridor, 1800-1890.
Native groups traditionally controlled all of the land bordering the Valdez Trail. South of the Chugach mountains, the land belonged to the Chugach (or Prince William Sound) Eskimo. Various Athapaskan groups held the more northern territory.
Each created their own transportation networks. In general, local paths were used for subsistence activities, while longer trails were used for trade and occasionally for raiding. These routes usually followed natural corridors such as river valleys and traversed the more obvious mountain passes.
Trade occurred among the different Alaska Native groups long before contact with the Russians. The Ahtna, for example, often served as middlemen, bartering with the Chugach, Tlingit, and Eyak peoples, as well as their Athapaskan relatives, the Dena'ina and the Tanana. Although copper was their most important export, they also exchanged moose, caribou, lynx, and beaver pelts for marine products like seal skin boots. Both oral and documentary evidence suggests that the Ahtna regularly held intertribal trade fairs within the Valdez Trail corridor, including ones near both Thompson and Isabel passes.
In 1783, Russian promyshlenniki led by Leontii Nagaev discovered the mouth of the Copper River. Ten years later, employees of Pavel Sergeevich Lebedev-Lastochkin founded a post at Nuchek, about 30 miles further west.
No one is certain when the first Russian ascended the Copper River, but Dmitri Tarkhanov may have reached the mouth of the Chitina River in 1796. Semyen Potochkin certainly did. In 1798 he conducted a census of local inhabitants before wintering at the Ahtna village of Taral.
Although other explorers followed, attempts to examine Alaska's eastern interior abruptly ceased in 1805 when the Tlingit destroyed the Russian colony at Yakutat. As a result, Russian interest in the area did not return until the mid-1810s.
In 1819 the Russians sent Afanasii Klimovskii to explore the region. Klimovskii progressed farther than any of his predecessors, certainly reaching the Gakona River and perhaps even the mouth of the Chistochina. Of more lasting importance, his party established a trading post called Copper Fort near Taral, which endured, off and on, for the next forty years.
In 1847 the Russian American Company received Native reports of English trading activity on the middle Yukon River and dispatched Ensign Ruf Serebrennikov to reconnoiter the area. Serebrennikov, however, only reached Batzulnetas, a village on the upper Copper River, before being killed.
The Ahtna, Tanana, and Han maintained complete control of their territory throughout the Russian colonial period of 1784 to 1867. The Russians' presence, however, did stimulate trade and consequently, the use of certain trails. The Ahtna, for example, initially delivered copper to the traders at Nuchek via the Keystone Canyon route. Lt. William R. Abercrombie, who visited the area in 1884, reported a "deep and well-worn trail up the canyon and across to the Tiekel River in the Copper River valley."
Similar paths existed elsewhere along the route. In Ahtna country, Lieutenant Walter C. Babcock related finding an "old Indian . . . foot trail" along the Little Tonsina River. "It had evidently been much used at one time, as there were numerous signs of brush cutting done many years ago, and the trail for long distances was worn down a foot or more below the natural surface."
Frank C. Schrader described a faint line following the northwestern side of the Copper River between the villages of Taral and Slana. Lt. Joseph C. Castner found an "old Indian trail" in Tanana country, down the right bank of the Delta River. Further east, Lt. Henry T. Allen discovered both Ahtna and Tanana corridors, including one leading from Suslota Lake to the Tanana River.
E. Hazard Wells located Tanana tracks, as well. Ascending the valley of the Tokio (now Tok) River in 1890, Wells recounted that his route became difficult to follow. "Several times we lost it while descending the mountainside, but at the bottom, in the forest-clad valley, it reappeared deeply printed into the moss." He also noted Han pathways, including "a well-beaten trail" leading to the village of Kechumstuk.
American Exploration of the Valdez Trail Corridor, 1885-1898.
When the United States acquired Alaska from Russia in 1867, neither party knew much about the territory's eastern interior. Russians had focused their attention on coastal areas and had only made a few abbreviated attempts to explore the region. Americans, in contrast, had never visited the area at all.
Neglected for the next fifteen years, the district began attracting interest in the mid-1880s. Gold strikes in northern British Columbia's Cassiar region and near the present site of Juneau lured prospectors to the north. Many eventually entered the interior, most by way of the Yukon River, but some via Cook Inlet and Prince William Sound.
The American government worried about the potential for conflict between the undisciplined miners and Alaska's Native population. Consequently, the U.S. Army soon dispatched several expeditions to reconnoiter the region. One such party, led by Lt. Frederick Schwatka, charted the entire Yukon River in 1883. Another, headed by Lt. William R. Abercrombie, attempted to examine the Copper River basin the following year. Although stopped by rapids on the lower river, Abercrombie later located an alternative overland route to the interior: across the Valdez Glacier heading the Valdez Arm.
In 1885, the army sent Lt. Henry T. Allen to finish Abercrombie's work. More successful than his predecessor, the lieutenant ascended the Copper River and pioneered the route across the Alaska Range and into the Tanana River valley.
Five years later, a private expedition led by E. Hazard Wells added another segment. Journeying down the Yukon from its headwaters, the party traveled up the Fortymile River and traversed the Kechumstuk Hills to Mansfield Lake. From there, they crossed the Tanana River and ascended the Tok River to near Mentasta Pass. Much of this route was later incorporated into the trail's Eagle City Branch.
Northern gold discoveries continued, climaxing with an especially rich find on northwestern Canada's Klondike River in 1896. This precipitated the region's greatest rush. In their haste to reach the gold fields, many stampeders prepared inadequately for the hardships they would have to endure. As a result, the U.S. Army soon received reports of widespread deprivation. Responding to the rumors, the military dispatched Capt. Patrick H. Ray and Lt. Wilds P. Richardson to proceed to Alaska and to provide necessary relief.
Most stampeders reached the Klondike via a largely Canadian path over the Chilkoot Pass, located near the northern end of Alaska's Lynn Canal. Many, however, objected to the foreign control of that transportation corridor and called for an "all-American route." Recognizing the logic of their demands, Ray recommended the immediate construction of a government trail into the Yukon River basin.
Unscrupulous local promoters, circulating stories of an easy passage linking Prince William Sound with the interior, lured thousands of gullible stampeders to Port Valdez. Unfortunately, the arriving prospectors found only one way across the Chugach Range: Abercrombie's exceptionally difficult and dangerous path over the Valdez and Klutina glaciers. Faced with few options, most attempted that route, and many eventually died from disease, accidents, and exposure.
In the spring of 1898, the army sent Capt. William R. Abercrombie back to Port Valdez, hoping to locate a safer way. The captain first inspected the Lowe River valley, where he spotted the remains of a Chugach trail leading to the north toward Keystone Canyon. Proceeding to the interior via the Valdez Glacier, Abercrombie found an Ahtna path leading up the right (or western) bank of the Copper River. Both were eventually utilized by the Valdez Trail.
Contemporaneous with Abercrombie's efforts, another army expedition also sought a practical route to the interior. This group, under the command of Capt. Edwin F. Glenn, blazed a path linking Cook Inlet to the Copper River basin. Not content with merely reaching Lake Louise, an exploratory party led by Lt. Joseph C. Castner continued northward along the Gulkana River, eventually locating a new pass through the Alaska Range.
Although it attracted little immediate attention, Castner's trail soon gained significance. Speaking about his adventure ten years later, Castner noted that a "well-beaten path traveled yearly by hundreds goes up our old Gulkona [sic] and down our Delta to the Tanana, traversing one of the best passes through the Alaska Alps." In the intervening decade, Castner's route had been incorporated into the Fairbanks fork of the Valdez Trail.
Construction of the Valdez Trail, 1898-1906
Abercrombie returned to the region in 1899. Utilizing only hand tools, his soldiers built a 93-mile packhorse trail from the coastal community of Valdez to the Tonsina River. Weary stampeders immediately adopted this shorter path. Addison M. Powell, a civilian employee of Abercrombie's and an early explorer of the Chistochina River, reported that by the end of the summer, the route was already filled with prospectors headed for the Nizina River basin.
Mountaineer Robert Dunn employed the half-built trail the following year on his way to Copper Center. Unlike the stampeders, who were usually too hurried to appreciate its spectacular scenery, Dunn recorded a vivid description of the route.
Encouraged by such traffic, construction continued, and by 1901 the army had completed its trail all the way to Eagle City.
Alaska residents soon demanded additional federal aid. In 1903, visiting members of a Senate Subcommittee on Territories heard testimony on a broad range of subjects, including the need for better transportation. Army Signal Corps Lt. William L. Mitchell, for example, related the current condition of the Valdez-Eagle City Trail. Pioneer Judge James Wickersham went even further. He requested that the government improve the route, calling such action an essential prerequisite to developing the interior's mining potential.
U.S. Geological Survey geologist Alfred H. Brooks agreed. Incidental to a discussion on the future of placer mining, he recommended that a million dollars be spent in building wagon roads to the inland placer camps. Such arguments seem to have convinced the senators. Upon returning to Washington, they recommended that the government construct a system of transportation routes, beginning with a well-built wagon road connecting Valdez and Eagle City.
That winter Congress appropriated $25,000 to conduct the initial survey. The following spring the War Department appointed an army engineer to supervise the work. Completing the job in August 1904, J. M. Clapp estimated that it would cost $3,500 per mile or a total of approximately $1.5 million to build the road.
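As a rough check on Clapp's figures (the mileage itself is derived here, not stated in the source), the per-mile and total estimates together imply a surveyed route of about

\[
\frac{\$1{,}500{,}000}{\$3{,}500 \text{ per mile}} \approx 429 \text{ miles}.
\]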
By then, however, Eagle City had already lost its priority as the trail's terminus. Mineral production on the upper Yukon River had begun to decline, and Felix Pedro had discovered gold in the Tanana River valley. Stampeders heading for this new strike left the Eagle City Trail near the Gakona River and followed that stream to its headwaters. Joining Castner's path near Paxson Lake, they crossed the Alaska Range and proceeded down the Delta River. Upon reaching the Delta's mouth, they followed the Tanana River northwest to Fairbanks. By late 1903, this Fairbanks branch had become the dominant interior route.
The new trail quickly attracted its first common carrier. In December 1904, James Fish announced that his Valdez Transportation Company would soon provide passenger service to Fairbanks. "Over such part of the trail as is practical," he assured travelers, "comfortable bob sleds will be fitted up and drawn by two horses. Over the summit, and wherever it is not practicable [sic] to run two horses abreast, the single double-ended sleds will be used and the horses driven tandem." A month later the first of its tri-weekly stages left Valdez, promising a nine day trip for the exorbitant price of $150.
While not a stage passenger, Wickersham traveled the trail during the same period. Conditions remained somewhat primitive. The judge recorded crashing his dog-sled on the approach to Copper Center, suffering scratches, bruises, and a twisted ankle. Reaching the Chippewa "roadhouse" on a cold, February night, Wickersham found only a canvas lean-to attached to a small, open-fronted cabin. Admittedly austere, even this housing was jammed with "men and dog-teams transporting mining supplies . . . to Fairbanks."
Although too late to be enjoyed by Wickersham, trail improvements were already under discussion. That January, President Theodore Roosevelt had established the Board of Road Commissioners for Alaska (popularly known as the Alaska Road Commission or ARC) and designated Maj. Wilds P. Richardson as its first president. Richardson was particularly concerned about the development of interior Alaska and emphasized the speedy construction of a more permanent Valdez-Fairbanks route.
The ARC's initial construction efforts met only basic demands. The trail's width was determined by its anticipated traffic. Light traffic required a 10-foot roadway, while heavy traffic demanded 16-feet. Over most level, well-drained ground, road crews merely cleared a corridor. Where it was possible to improve drainage, they sometimes removed the moss, "grading up and crowning, with a single ditch on interior slope and frequent cross culverts to carry off seepage and rainfall and prevent cutting." In permafrost areas, where good drainage was impossible, crews utilized corduroy construction. Designed to prevent the frozen ground from melting and creating an impassable quagmire, this technique involved placing a layer of poles parallel to the roadbed and covering them with another layer at right angles to the first.
Culvert construction varied. Where the needed water capacity was small, the Road Commission usually fabricated pipe culverts from four 12-inch planks. For larger applications, the crews built culverts entirely of log, except in treeless sections where they sometimes utilized a dry masonry technique.
Under normal conditions, the Road Commission would probably have limited itself to reconnaissance and survey work that first season and not undertaken any real construction. Receiving urgent appeals from the residents of Fairbanks, however, Richardson moved to provide immediate relief. Road crews rapidly replaced 3,032 feet of worn-out corduroy and bridged about 25 small streams.
The ARC distributed and cached the materials necessary for its next construction season along the entire route during the winter of 1905-06. Besides arranging for the delivery of rations, animal forage, and tools, it also began the job of bridging the Tazlina River. Built by Lars Holland, this $19,000 structure replaced a hazardous ferry on which several passengers had been drowned.
For interior Alaska, the bridge was a technological wonder. Four hundred and fifty feet long, it employed two Howe truss spans of 108 feet, two King post spans of 50 feet, and approaches. The main trusses rested on pile bents, protected by 10 x 30 foot, rock-filled crib piers. The trusses were constructed of hewn lumber, with the lower chords built from four to six pieces, bolted and keyed together. A lack of large timber near the site forced Holland to secure trees from as much as six miles away.
John A. Clark navigated the trail that spring. One of a party of six young men riding bicycles to Fairbanks, he vividly described his journey along the Delta River.
Improving the trail was a difficult and expensive process. Engineers had to overcome many obstacles, including a short construction season, raging glacial rivers, permafrost, and an abundance of mountainous terrain. Crews relocated many of the original segments, including the one linking Gakona with Castner's pass. Nevertheless, by the end of the 1906 season, the Alaska Road Commission had finished the route.
These trail refinements substantially speeded postal service. While previous contractors had required about ten and one-half days to traverse the distance between Valdez and Fairbanks, mail carrier Ed Orr completed the journey during the winter of 1906-07 in a record time of only six days, ten hours, and ten minutes.
The refurbished trail also attracted more common carriers. In 1907, at least two stage lines vied for its passenger and freight business: Orr's company and another operated by Dan T. Kennedy. Orr's enterprise was particularly successful. Equipped with nine-passenger, horse-drawn bobsleds boasting fur robes and carbon-heated foot warmers, it moved travelers from the coast to Fairbanks in just eight days.
Such changes had a dramatic effect on the community. Expanding quickly, the town soon acquired most of the amenities of civilization, including electric lights, running water, and a telephone system. Something of a supply depot for the rest of the interior, Fairbanks possessed hotels, schools, churches, hospitals, and even a daily newspaper. Although the town received substantial river traffic during its short summer, the remainder of the year the Valdez Trail provided its only access.
Maintenance and Use of the Valdez Trail, 1907-1919
As the Road Commission grew more sophisticated, it eventually adopted fixed standards for its roads. A "wagon road," for example, embraced "only that class of road intended to meet the conditions of an all-year-round traffic of considerable tonnage, located with suitable grades, crowned, ditched, and drained, and corduroyed or planked where necessary." A "winter road," like that between Valdez and Fairbanks, was "designed to meet the requirements for winter travel only." While not crowned, ditched, or drained, such a road possessed suitable width for double teams and a proper grade for loads.
In 1907, the Alaska Syndicate began developing its rich copper claims above the Kennicott Glacier. While it soon started work on an affiliated railroad, that project required over two years to complete. In the interim, the corporation moved its requisite personnel, supplies, and equipment via the southernmost section of the Valdez Trail. Acting swiftly, it erected a 400-ton mill and a 16,000 foot tramway before the railway ever reached the site.
Granted substantial annual funding, the Alaska Road Commission gradually upgraded the Valdez Trail. Originally created for pack and saddle horses, it quickly evolved into a winter road and by the end of 1908, about a third was suitable for wagons. Traffic increased as well.
J. H. Ingram, the superintendent of the Valdez District, estimated that contractors had moved over 83,000 pounds of mail, 2,500 tons of freight, and nearly 100 head of cattle over his section during the preceding year.
By now, enterprising citizens had located "roadhouses" along the entire route. Usually owned by homesteaders, these inns provided travelers with a convenient and comfortable place to stop. As most operators cultivated gardens, many supplied fresh vegetables in season. Not surprisingly, these lodges became the local nodes: what Richardson called "small centers of settlement and supply" from which to explore the adjoining country.
The trail itself, however, still needed a little work. In 1908, one pilgrim died when the sled in which he was riding overturned after hitting a chuckhole near the Tiekel River. Travelers experienced other setbacks, as well. Extensive flooding in 1909 severely damaged the Tazlina River Bridge. Repairs cost more than $13,000, about 70 percent of the price of the original structure, and required nearly a year to complete.
During the summer of 1909, maintenance of the Valdez Trail required 19 crews. Each consisted of a foreman, cook, two teamsters, and about 20 laborers, plus a wagon and six to eight horses for moving camp and hauling timbers. "Plows and scrapers were used wherever practicable, although the greater part of the work, being in a broken and rocky country or through brush and timber swamp, had to be done by hand with pick, mattock, and shovel."
In 1910, the trail received its first serious competition. Roughly paralleling its southernmost one-quarter, the Copper River and Northwestern Railroad immediately captured most of the freight traffic headed for the Chitina River valley. Because the train was faster, many Fairbanks-bound travelers also rode it. Disembarking in Chitina, they rejoined the Valdez-Fairbanks route via the newly constructed Chitina-Willow Creek (later Edgerton) Cutoff. Use of the Valdez to Willow Creek section subsequently declined.
Ignoring the competition, the Board maintained its expenditures. In 1910, for example, it spent $248,782 on improving the trail. Eventually, its persistent efforts began to achieve results. That August, for example, Richardson made the first continuous trip over the route in a wagon, covering the entire distance in only 13 days!
The following year the ARC built a new, 420-foot bridge over the Tonsina River, replacing one constructed by the military in 1900 and now considered unsafe. Except for the Gulkana, Delta, Tanana, Salcha, and Piledriver Slough, all important rivers traversed by the trail were now bridged. Of the remaining five, only the Delta lacked a ferry.
Bridge work continued in 1912. The Road Commission placed a 40-foot truss over Ptarmigan Creek, two 60-foot spans across Stewart Creek, and a 270-foot pile trestle over Gunn Creek. Most impressive, however, was its 748-foot bridge, possessing a single center king-post, over an unnamed glacier stream near the Miller Roadhouse.
The new bridges contributed to another innovation. In 1913, the first motorized vehicle traveled the entire length of the trail. The automobile averaged about nine miles per hour, despite having to be "helped through soft spots on rather heavy grades."
Others quickly followed. The Road Commission, however, largely ignored the phenomenon, declaring that it made "no pretense of having built roads adapted for automobile travel." Five years later its basic position remained unchanged. While acknowledging the increasing number of such vehicles, the Board still discouraged their use.
Despite the ARC's objections, mechanization had clearly arrived. In 1918, the Board purchased two tractors, one eight-foot road grader, three six-foot road graders, four three-way road drags, and four heavy trucks. Automobile stage coaches now traveled regular routes between Valdez and Fairbanks, and motorized vehicles carried most of the mail. The route was no longer a trail in any meaningful sense, and in 1919 the Road Commission conceded to the inevitable and redesignated it the Richardson Road in honor of its newly retired first president, Colonel Wilds P. Richardson.
Maintenance and Use of the Richardson Road, 1920-1945.
Journeying to Alaska in 1923 to dedicate the Alaska Railroad, President Warren G. Harding inspected both ends of the Richardson Road. While he only viewed about 50 miles of the corridor, he was apparently impressed. In a speech delivered upon his return to Seattle, he noted that "our long national experience in pushing our highways ahead of the controlling wave of settlement ought to convince us that the broadest liberality towards roads in Alaska will be sure to bring manifold returns." In keeping with that belief, the President pledged "to serve Alaska generously, and more, in this matter of road building." Nothing, however, came of his promise; Harding died only a few days later.
By 1925, tour companies throughout the United States advertised the Richardson Highway as the center portion of the "Golden Belt Line." Appealing to the more adventurous traveler, this circular route stretched from Cordova to Seward and incorporated the Copper River and Northwestern Railroad at one end and the Alaska Railroad at the other. One major automobile carrier, aptly designated the Richardson Highway Transportation Company, carried hundreds of passengers each season, operating what it described as a fleet of passenger vehicles over the road "without delay or inconvenience."
Controversy erupted in 1932 when the Interior Department tried to increase the profitability of the Alaska Railroad by taxing Richardson Highway users. When most motorists ignored its license fee requirements, the Road Commission tried another tack: collecting a toll at the commission-operated ferry across the Tanana River. Commercial carriers quickly objected and, beginning in 1940, staged a general revolt. Rebellious truckers crossed the river on a home-built scow, defiantly flying a skull-and-crossbones flag. When challenged, one group even seized and disarmed the local U.S. Deputy Marshal. Despite such flagrant violations, the government was powerless to enforce its law. A Fairbanks grand jury judged the tax to be discriminatory and refused to return indictments against the accused. In 1942, Interior Secretary Harold L. Ickes finally bowed to the inevitable and repealed the toll.
The threat of war brought many changes to the Richardson Highway. In 1940, Lt. Gen. John L. DeWitt, the commander of the U.S. Fourth Army, recognized that Anchorage was isolated and vulnerable to attack. To alleviate the danger, he proposed connecting the city to the road. Gen. Simon B. Buckner, the head of the Alaska Defense Command, agreed, further suggesting that the highway be widened and straightened, and that its bridges be strengthened sufficiently to withstand the anticipated increase in military traffic.
Both were prudent requests. Following the outbreak of World War II, the men and supplies used to construct the Alaska section of the Alcan (now called the Alaska) Highway were all moved along the Richardson, with materials flowing southeast from Fairbanks as well as north from Valdez. Finished in November 1942, the Alcan joined the Richardson Highway to the remainder of the North American highway system at the interior Alaskan village of Delta Junction.
The Anchorage connection came more slowly. It was late 1943 before the Glenn Highway, named for pioneer Alaskan explorer, Capt. Edwin F. Glenn, linked the city to the Richardson Highway at Glennallen.
The Valdez Trail provided the first overland access to much of interior Alaska. Built by the U.S. Army and the Alaska Road Commission between 1898 and 1907, it followed a series of indigenous paths linked by such prominent explorers as Lt. Henry T. Allen, Capt. William R. Abercrombie, Capt. Edwin F. Glenn, and Lt. Joseph C. Castner. Although originally directed to Eagle City, the trail was diverted to Fairbanks following a nearby gold discovery in 1902. A closing thrust in a period of pioneer American trail building which began with Daniel Boone's blazing of the Wilderness Road through the Cumberland Gap in 1775, the Valdez Trail channeled people, freight, and mail into the district, promoting mining activity, aiding the development of supporting industries, and hastening the settlement of the Copper, Yukon, and Tanana river valleys.
Part II: The Science, Engineering, and Technologies

In Part II we take up the scientific, engineering, and technological thrusts that were identified and summarized in Part I as most likely to influence the evolution of the field in the next decade and beyond. Here we provide more detail about their potential uses as well as some of the associated problems and prospects. Again, we remind the reader to keep in mind two important points: first, there are at least four technologies relevant to the link between computers and productivity (computer technology, communication technology, semiconductor technology, and packaging and manufacturing technology); we have focused on the first. Second, we are not presenting here an exhaustive taxonomy of all the computer subfields and their capabilities, but rather what we consider the most promising. The absence of discussion on areas such as databases does not mean that they are less important, but rather that they are likely to evolve further primarily through exploitation of the principal thrusts that we do discuss, such as those in multiprocessors and intelligent systems.
6 Machines, Systems, and Software

One of the biggest tasks facing computer designers is the development of systems that exploit the capabilities of many computers in the form of what are called multiprocessor and distributed systems. Related to current and future hardware systems is the task of developing the associated software. These three topics are discussed below.

MULTIPROCESSOR SYSTEMS

Multiprocessor systems strive to harness tens, hundreds, and even thousands of computers to work together on a single task, for example, to solve a large scientific problem or to understand human speech and transcribe it to text. These systems involve many processors located close to each other and with some means for communicating with one another at relatively high speeds. The effort to build multiprocessor systems is the consequence of a powerful economic trend. Generational advances in VLSI design have made it possible to fabricate powerful microprocessors relatively cheaply; today, for example, a single silicon chip capable of executing 1 million instructions per second costs less than $100 to make. But the cost of manufacturing a very high-speed single processor, using a number of chips and other high-speed components, has not declined correspondingly. As a result, the world's fastest computers, which
perform only hundreds of times faster than a single microprocessor, cost hundreds or thousands of times as much. Consequently, linking many inexpensive processors makes sound economic sense and raises the possibility of scalable computing, i.e., using only as many processors as are needed to perform a given task.

Another important incentive to build multiprocessor systems is related to physical limits. Advances in the speed of large, primarily single-processor machines have slowed down: after averaging a tenfold increase every 7 years for more than 30 years, progress is now at the rate of a threefold increase every 10 years (Kuck 1986). These superprocessors are approaching limits dictated by conflicts between the speed of light and thermal cooling: they must be small for information to move rapidly among different circuits of the machine, yet they must be large to permit dissipation of the heat generated by the fast circuits. Multiprocessors are expected to push these limits higher because they tackle problems through the concurrent operations of many processors. If the recent advances in superconductivity result in effectively raising these limits for superprocessors, they will also raise them for multiprocessors, and thus the relative advantages of multiprocessors over single processors will remain the same.

The key technological problems related to the creation of useful multiprocessor systems include: (1) the discovery and design of architectures, i.e., ways of interconnecting the processors so that the resulting aggregates compute desirable applications rapidly and efficiently; (2) finding ways to program these large systems to perform their complex tasks; and (3) solving the problem of reliability, i.e., minimizing failure of performance within a system in which the probability of individual component failures may be high. Current experimental work in the multiprocessor area includes exploration of different processor-communication architectures, design of new languages, and extension of popular older languages for multiprocessor programming. Theoretical work includes the exploration of the ultimate limitations of multiprocessors and the design of new algorithms suited to such systems.

The potential uses of multiprocessors are numerous and significant. The massive qualitative increase in computing power expected of multiprocessors promises to make these systems ideally suited to large problems of numerical and scientific computing that are characterized by inherent parallelism, e.g., weather forecasting, hydro- and aerodynamics, weapons research, and high-energy physics. Perhaps more surprisingly, conventional transaction-oriented computing tasks
in banks, insurance companies, airlines, and other large organizations can also be broken down into independent subtasks (organized by account or flight number, for example), indicating that they can be managed by a multiprocessor system as well as, and potentially more cheaply than, by a conventional mainframe computer. In short, the fact that current programs are sequential is not a consequence of their natural structure in all cases, but rather of the fact that they had to be written sequentially to fit the sequential constraint of single-processor machines.

Most promising of all, multiprocessors are viewed by many computer scientists as a prerequisite to the achievement of artificial intelligence applications involving the use of machines for sensory functions, such as vision and speech understanding, and cognitive functions, such as learning, natural language understanding, and reasoning. This view is based on the large computational requirements of these problems and on the recognition that multiprocessor systems may imitate in some primitive way human neurological organization: human vision relies on the coordinated action of millions of retinal neurons, while higher-level human cognition makes use of more than a trillion cells in the cerebrum.

Traditional supercomputers, which rely on one or a handful of processors running at very high speeds, have already demonstrated their utility in several important applications. The proven capabilities of supercomputers will be greatly multiplied if the potential of multiprocessors is realized, leading to the tantalizing possibility of ultracomputers, which will harness together large numbers of superprocessors to yield mind-boggling computational power. Such power, in turn, could be used to expand scientific capabilities, for example, through computational observatories, computational microscopes, computational biochemical reactors, or computational wind tunnels. In these applications, massive-scale simulations would be performed to address previously unsolved scientific problems and to chart unexplored intellectual territory.

DISTRIBUTED SYSTEMS

Distributed systems are networks of geographically separate computers: collections of predominantly autonomous machines controlled by individual users for the performance of individual tasks, but also able to communicate the results of their computations with one another through some common convention. If a multiprocessor
system can be likened to several horses pulling a cart with a single destination as their goal, then a distributed system can be likened to a properly functioning society of individuals and organizations, pursuing their own work under their own planning and decision schemes, yet also engaging in intercommunication toward achieving common organizational or individual goals.

Multiprocessor systems link many computers primarily for reasons of performance, whereas distributed systems are a consequence of the fact that computers and the people who use them are scattered geographically. Networking, making it possible for these scattered machines to communicate with one another, opens up the possibility of using resources more efficiently; more important, it connects the users into a community, making it possible to share knowledge, improve current business, and transact it in new ways, for example, by purchase and sale of information and informational labor. Networking also raises new concerns about job displacement; the potential for the invasion of privacy; the dissemination and uncritical acceptance of unreliable, undesired, and damaging information; and the prospect of theft on a truly grand scale. These are reminders that technology, like any innovation, carries with it risks as well as benefits and that safeguards must be provided to protect against such incursions. Devising appropriate safeguards is itself an urgent topic of theoretical systems research.

Distributed systems have emerged naturally in our decentralized industrial society. Their emergence reflects the proliferation of computers, especially the spread of personal desktop computers, and the appetite of users for more and more information. It also reflects the demands of the marketplace, in which users operate sometimes as individuals, at other times as members of an organization, and at still other times for interorganizational purposes. Distributed systems rely on a range of communication technologies and approaches, including telephone and local area networks, long-haul networks, satellite networks, and cellular, packet radio, and optical fiber networks, to connect computers and move information as necessary.

Distributed systems are at the basis of modern office automation. They are evident in computer networks such as the ARPANET, which has facilitated the exchange of ideas and information within the nation's scientific community. Perhaps most significant, distributed systems have begun to transform national and international economic life, creating the beginnings of an information marketplace geared to the exchange of information. Electronic mail enables communities of
users to annotate, encapsulate, and broadcast messages with little or no handling of paper and makes it possible to send people-to-people or program-to-program messages. Customers from every part of the country can tap into centralized resources, including bibliographic databases, electronic encyclopedias, or any of a growing number of specialized financial, legal, news gathering, and other information services. Geographically separated individuals can pool resources for joint work: a manual for a new product can be assembled with input from the technical people on one coast and from the marketing staff on the other; a proposal can be circulated widely for comments and rebuttals from many contributors electronically. Homebound and disabled individuals can participate more actively in the economy, liberated by networking from the constraints of geographical isolation or physical handicap. The technology of distributed systems, through enhanced and nationwide network access, could have a major and unique impact on the future economy.

The limitations of today's distributed systems, whether they form a small local system in a building or a much larger corporate communication system, inhibit the growth of an information marketplace. They do so because the systems communicate at a rather low level, with the only commonly understood concepts being typed characters and symbols. They also are often heterogeneous, made up of machines from a variety of manufacturers, which employ a number of different hardware and software conventions. Except at the lowest level of communicating characters, there are no universally accepted standard conventions (protocols), although such shared communication regimes represent the first level of software needed for providing a higher level of commonality of concepts among such separate machines. As a result, if two computer programs are currently to understand each other in order, for example, to process an invoice, they must be specifically programmed to do so. Such understanding is not shared by any number of other machines unless they are similarly programmed. The greater the number of machines participating in a distributed system, the greater the agreement needed on common programming. Such agreement is not easy to arrive at, in part because system heterogeneity makes it difficult to implement. Consequently, one of the major problems ahead is the development of common and effective distributed system semantics, that is, languages and intelligent software systems that will help machines communicate with and understand one another at levels higher than
the communication of characters, and whose use is simple and fast enough to win acceptance among a large number of users.

Despite these obstacles, the number of interconnected systems is increasing because of their great utility. The ongoing proliferation of these computer networks provides a test bed for research and a powerful incentive for developing an understanding of their underlying principles. Beyond the need for distributed system semantics, other important aspects include the development of innovative and robust system architectures that can survive computer and communication failures without unacceptable losses of information, the development of network management systems to support reliable network service on an ever growing scale, and the creation and evaluation of algorithms specifically tailored to distributed systems. Finally, on the software side of these systems, the problems associated with programming large and complex computer systems must be better understood.

SOFTWARE AND PROGRAMMING

Computers are general-purpose tools that can be specialized to many different tasks. The collections of instructions that achieve this specialization are called programs or, collectively, software. It is software that allows a single computer to be used at various times (or even simultaneously by several users) for such diverse activities as inventory and payroll computations, word processing, solving differential equations, and computer-assisted instruction. Programs are in many ways similar to recipes, game rules, or mechanical assembly instructions. They must express rules of procedure that are sufficiently precise and unambiguous to be carried out exactly by the machine that is executing them, and they must allow for error conditions resulting from bad or unusual combinations of data.

Unlike other products, the essence of software is in its design, which is inherently an intellectual activity. This is so because producing many instances of a program involves straightforward duplication rather than extensive fabrication and assembly. Accordingly, the cost of producing software is dominated by the costs of designing it and making certain that it defines a correct procedure for performing all of its desired tasks, costs that are high because of the difficulties inherent in software design.

Creating software involves devising representations for information called data structures and procedures called algorithms to carry out the desired information processing. One of the major difficulties
associated with this task is that there are generally many different combinations of data structures and algorithms that can perform a desired task effectively. For example, in a machine vision system that inspects circular parts, a circle could be represented by its center and radius (a data structure of two numbers for every part) or by a few thousand points that approximate the circumference; the first data structure is more economical but does not by itself permit deviations from circularity to be represented, as does the second (a sketch of these two representations appears at the end of this chapter). In carrying out the artful process of data structure and procedure selection, the programmer must often pay equal attention to both large and small software parts, like an architect who must design a house, down to all its windows, doors, doorknobs, and even bricks.

Furthermore, it is difficult for programmers to anticipate during design all the circumstances that might conceivably arise while a program is being executed. Indeed, another difficulty involves the illusion that software, because it is the stuff of design, is infinitely malleable and can therefore be easily changed for improvement or to meet new demands. Unfortunately, software designs are often so complex and variations among different parts so subtle that the implications of even a small change are hard to anticipate and control. These factors make it fundamentally hard to specify and design software. They also make software difficult to test by anticipation of all the failure circumstances that may accidentally arise during actual operation. For these reasons, software development is often quite costly and time consuming.

Beyond design, and after a program is put to use, modifications are often required to repair errors, add new capabilities, or adapt it to changes in other programs with which it interacts. This activity is called software maintenance, a misleading term since it involves continued system design and development rather than the traditional notion of fending off the ravages of wear and age. Such maintenance can amount to as much as 75 percent of life-cycle cost (Boehm 1981).

In the 1960s the cost of computing was dominated by the cost of hardware. As the use of computers became more sophisticated, the cost of hardware dropped, and the salaries of programmers increased, software costs came to dominate the cost of computing (OECD 1985). The problem has been aggravated by an apparent shortage of good software professionals and limited productivity growth. The overall annual growth of programming productivity is at best 5 percent (OECD 1985). The increase in software costs is taking place throughout the field, from small programs on personal
computers to life-critical applications and supercomputing. The cost increase is fueling efforts to improve software engineering through development of tools to partially automate software development and techniques for reusing software modules as parts of larger pieces of software. A lot of work is needed to increase productivity in software development by a factor of ten, considered a critical milestone in industry.

Another serious software problem has to do with people's perceptions and expectations. Hardware and software sound similar, and people are frequently appalled that as the former gets cheaper by some 30 percent per year, the latter stubbornly resists productivity improvements. Such a comparison, however, reflects a misunderstanding of the nature of the software development process; a brief recounting of the development of MACSYMA, one of the earliest knowledge-based programs, suggests why. To develop a research prototype of that program (a mathematical assistant capable of symbolic integration, differentiation, and solution of equations) took 17 calendar years and some 100 person-years. In terms of the number of moving parts and their relationship to one another, the program's complexity was comparable to that of a jumbo jet, whose design and development cost more than 100 times as much. Most people can intuitively grasp the difficulties of constructing complex physical systems such as jumbo jets. But the complexities and design difficulties in the more abstract world of software are less obvious and less appreciated.

Despite all these difficulties, software development has seen significant progress. In the early days of programming, it was often a triumph to write a program that successfully computed the desired result. There was little widespread systematic understanding of program organization or of ways to reason about programs. Algorithms and data structures were originally created in an ad hoc fashion, but regular use and research led to an increased fundamental understanding of these entities for certain problem domains: we can now analyze and compare the performance of several proposed algorithms and data structures and, for several kinds of problems, we often know in advance theoretical limits on performance (see Chapter 8). Sound theories have also contributed to the construction of certain classes of software systems: for example, in the early 1960s the construction of a compiler (a program that translates programs written in a higher-level language to machine-level programs) was a significant achievement for a team of programmers; such systems are
now constructed routinely (with largely automated means by a much smaller group) for traditional single-processor machines.

In today's computing environment, escalating demands for overall computer performance become escalating demands for software capability and software size. Development of large systems requires the coordination of many people, the maintenance and control of many versions of the software, and the testing and remanufacture of new versions after the system has been changed. The problems associated with these activities became a focus of computer science research in the mid-1970s, through techniques of modular decomposition (Parnas 1972) and organization of large teams of programmers (Baker 1972). At that time, a distinction was made between programming-in-the-small and programming-in-the-large to call attention to the difference between the problems encountered by a few people writing simple programs and the problems encountered by large groups of people constructing and managing sizable assemblies of modules.

Beyond these more or less pure software systems that deal only with information, there are other, even more complex, highly distributed systems that often interact with physical processes, such as the U.S. telecommunications, air traffic control, transportation, process control, energy, air defense, strategic offense, and command-control-communication and intelligence systems. These supersystems, as they have been called (Zraket 1981), grow over a period of decades from initially limited objectives to evolutionarily mature end states that are generally unpredictable at the start. The need to create such supersystems and other software with sufficient reliability for effective use presents a set of software design and development problems that are not addressed by the techniques of either programming-in-the-small or programming-in-the-large.

Accordingly, two new tasks for software research are: (1) to develop better techniques for designing software, especially software to be embedded in very complex, real-time application systems, and (2) to use emerging artificial intelligence techniques for the development of tools that will help software developers manage the complexity of such software. Other directions and opportunities for future software progress include: (3) effective ways of improving the productivity of the software development process, e.g., through automation of software design, reuse of existing software components, or new types of software architectures; (4) ways to reason about the correctness of software, including the task specification process; (5) infrastructural
tools and resources, such as electronic software distribution systems; and (6) addressing the new multiprocessor software problems that will inevitably arise from the new multiprocessor architectures discussed above.
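Below, as promised in the discussion of data structures, is a minimal sketch of the two circle representations from the machine-vision example. The class names, the 2-D point type, and the tolerance check are illustrative assumptions, not from the original text.

```python
import math

# Representation 1: center and radius. Two numbers per part, very economical,
# but by itself it cannot express deviations from circularity.
class CircleCR:
    def __init__(self, cx, cy, r):
        self.cx, self.cy, self.r = cx, cy, r

# Representation 2: a few thousand sampled boundary points. Bulkier, but
# deviations from the ideal circle are directly representable.
class CircleBoundary:
    def __init__(self, points):
        self.points = points  # list of (x, y) samples around the circumference

    def max_radial_deviation(self, ideal):
        """Largest distance between the sampled boundary and the ideal radius."""
        return max(abs(math.hypot(x - ideal.cx, y - ideal.cy) - ideal.r)
                   for x, y in self.points)

# An inspection system might accept a part only if the measured boundary
# stays within tolerance of the nominal circle:
nominal = CircleCR(0.0, 0.0, 10.0)
measured = CircleBoundary([(10.01 * math.cos(a / 500 * math.pi),
                            10.01 * math.sin(a / 500 * math.pi))
                           for a in range(1000)])
print(measured.max_radial_deviation(nominal) < 0.05)  # True: within tolerance
```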
This wasn't ever a secret, was it? The day you posted the original article, someone picked up their copy of E&T and said it was ultrasound!
Cool stuff though....
A mysterious secret technology, apparently in use by the British intelligence services in an undisclosed role, has been reinvented by a graduate student in America. Full details of the working principles are now available. BAE Systems' wireless through-hull comms demo at Farnborough 2010. Works through glass, too. Tristan …
Acoustic comms have been about for years. This is just the transmission of sound through a dense medium. Same as Sonar and acoustic modems, just through metal and at higher frequencies (lowering range and increasing bandwidth).
The principle behind this 'invention' is actually a well-known and well-documented pain in the ass for people doing underwater positioning; we have to make sure that acoustic positioning beacons are kept a decent distance away from, say, big steel structures we're putting onto the seabed. If not, the sound travels through the steel as well as through the water and gives screwed-up results (the speed of sound through steel is much faster than its speed in water, so you get two identical returns at different times).
So if he tried to patent it there are at least a dozen companies out there ready to strike down that patent. Not to mention various governments getting pissed off with him and stopping the patent being granted.
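For illustration, a minimal sketch of the dual-return effect described above, assuming a single straight path and nominal textbook sound speeds (the exact values vary with alloy, temperature, and salinity):

```python
# Two arrival times for the same ping: one through steel, one through water.
C_WATER = 1500.0   # m/s, typical seawater
C_STEEL = 5900.0   # m/s, typical structural steel (longitudinal waves)

def arrival_times(path_m):
    """Return (steel, water) arrival times in seconds for a straight path."""
    return path_m / C_STEEL, path_m / C_WATER

steel_t, water_t = arrival_times(100.0)  # beacon 100 m from the receiver
print(f"steel: {steel_t*1000:.1f} ms, water: {water_t*1000:.1f} ms, "
      f"gap: {(water_t - steel_t)*1000:.1f} ms")
# The steel-borne return lands ~50 ms early here, which a naive positioning
# system interprets as a second, much closer target.
```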
@"but what is the detection range of sonar"
Ask a whale. They can hear sounds from over 1,000 miles away, but with all the noise in the oceans these days, that range is usually down to a few hundred miles.
@"More worrying is what 50W of ultrasound does to a solid structure over days/years"
More worrying is what it does to Whales and other marine life!
What carries for distances underwater is, I believe, the LOW end of the sonic 'spectrum" -- the bass notes of the whales' song. High-frequency sound, as with high-frequency light, is more easily scattered and dissipated in both air and water. The U.S. submarine service, for instance, uses Extremely Low Frequency (ELF) radio transmitters to send signals to vessels that remain on-station and submerged for weeks at a time.
The article here refers to "ultrasonic vibrations", so we're talking high-frequency, short-penetration waves (which probably explains the statement that "(i)t seems certain that performance could be traded for range"), that is, that the frequency could be lowered, allowing greater penetration while lowering the amount of data that could be carried by those fewer cycles per second.
So, I suspect that, unless the whales are snuggling up to the sides of a nuclear "boomer", they're reasonably safe.
I was thinking the same thing. Long-distance sub-sea communication is probably INFRAsonic in nature rather than ULTRAsonic. Besides, large animals such as whales are more naturally capable of producing infrasound. On land, elephants use the technique as well, IIRC, sending infrasound along the ground.
Sound doesn't travel very far at high frequencies; too much energy is lost. Underwater, at the upper range of human hearing, nothing over a couple of kilometers would be detectable no matter how "loud" it was. Half the energy would be lost in the first km at 20,000 Hz, and then half again in the next km.
At the lower end of human hearing (20 Hz), it would travel hundreds of km at a whale-song level of volume.
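A toy model of what's being described, assuming simple exponential absorption; the coefficients below are rough illustrations fitted to the figures in the comment (about 3 dB/km at 20 kHz), not measured data:

```python
def received_fraction(range_km, alpha_db_per_km):
    """Fraction of transmitted acoustic energy surviving after range_km."""
    return 10 ** (-alpha_db_per_km * range_km / 10)

# Illustrative seawater absorption coefficients; absorption rises steeply
# with frequency, which is the whole point of the comment above.
for freq, alpha in [("20 Hz", 0.003), ("1 kHz", 0.06), ("20 kHz", 3.0)]:
    print(f"{freq:>6}: {received_fraction(2, alpha):6.1%} of energy left "
          f"after 2 km; 90% gone by ~{10 / alpha:,.0f} km")
```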
Whales use low frequencies which travel far. Ultrasound does not travel far and will be reflected or absorbed within a short distance.
The 50W is not getting radiated out into the ocean if it is being harvested for use by a device. There will be leakage none the less. Likely that amount of ultrasound within a room would be quite obvious to some sensors.
You all suggest that this kind of tech is exclusively gonna be used in marine vessels.
You still need that outside power source to drive those 50 W. If it's going to be used, it'll be on the inside of submarines etc., to avoid cutting holes in the various compartments for data and power cables.
In space however... there are no whales :-)
You could interlink various spacecrafts' electrical and data signals just by physical contact with these transducers. It would make coupling the various space modules on, e.g., the ISS much easier (and quicker).
Anyway, all these types of "inventions" still need some electrical source. The main global problem today is how we're gonna make that source (without dependency on fossil resources). People should concentrate more on searching for a viable long-term energy solution instead of these electric or electronic gimmicks.
And then we should prioritize space travel and off-world colonization as this planet is becoming filled up with human trash.
Sonar is a strange thing.
If you're in a submarine and you send out a sonar pulse everything else in the water knows where you are.
A large proportion of sonar arrays are passive (no sound produced) rather than active; they listen for other noises. While they have their limitations (you get echoes, left and right get confused, and distance estimation is not as good), they would probably pick up pretty much anything that makes _any_ noise.
Active sonar is no good for war/stealth situations, and it is not the cause of whales going ditzy; that is down to the trials of infrasound for communication (travels faster in water than radio in air, much faster messaging), and I believe that the trials by both the US and UK have had to change to compensate for the known whale-related issues.
The latest passive acoustic (and sometimes-passive acoustic) sensors (in use with both the military and commercial sectors) are great- they can be used to, say, detect intruders- and these can even determine the TYPE of the intruder (ROV, surface-fed Diver, scuba diver, etc) and give a very accurate range and position. I've seen them in action and they are seriously impressive bits of kit.
Acoustic comms don't travel faster in water than radio does in air- in fact they're orders of magnitude slower (somewhere in the region of 1500m/s, compared to 300,000,000m/s for radio in air.
What an acoustic signal DOES do, however, is actually travel through the water. Radio has a horrendously short range in water- basically zero unless you want to get into some serious maths, then you can squeeze out some range in certain circumstances- whereas ELF acoustic comms can propagate relatively slowly (though still faster than most jets- the speed of sound in water is ~4x the speed of sound in air, so a plane would have to go Mach 5 to overtake this signal!) through hundreds or thousands of miles of water.
And the higher the frequency, the shorter the range, but the more information you can encode onto the wave in a given time (for example, at ~30 kHz using RPSK you can encode >1 kbit/s and transfer it 3 km through a water column). ELF would be useful for sending Morse code at a slow manual speed, but over hundreds or thousands of miles. It's also harder for equipment to get a good-quality lock on the source of an ELF signal unless you've got very complicated, specialised equipment or multiple widely spaced listening points. Whereas a 35 kHz signal can be pinpointed to a few mm from kilometers away (my personal record with this kit is a 4 mm window for error at 3000 m, using just sound!) using just the one, man-portable transceiver.
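To put that 4 mm figure in perspective, a hedged sketch of the timing resolution it implies, again using a nominal 1,500 m/s sound speed (real systems correct for the actual sound velocity profile):

```python
C_WATER = 1500.0  # m/s, nominal; real systems measure the local sound speed

def timing_needed(range_window_m):
    """Timing resolution required to resolve a given range window (d = c * t)."""
    return range_window_m / C_WATER

# A 4 mm error window implies roughly microsecond-class timing:
print(f"{timing_needed(0.004) * 1e6:.1f} microseconds")  # ~2.7 us
# Correlating the received ping against a known waveform is one standard way
# to achieve that resolution with a single transceiver.
```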
What happens if you put in a non-ferrous material that interrupts the vibrations of the 'metal conductor'?
I mean starting with wrapping the metal at a point somewhere in the middle with a soft rubber wrap, or
using a rubber gasket along with either a non-metallic connector or a metal with different harmonics.
I would also suspect that if you used plumber's tape on the threading, that too would cause interference.
I'm sure there are other ideas on how you could also defeat this...
Transformer coupling
Developed as a joint project by NASA and IIRC the University of North Wales.
The target was a way to handle problems like the rotating joint on the solar arrays of the ISS, which needs both high-power and telemetry channels.
Offhand, they were talking of power transfer in the 100 kW range and data rates in the Mb/s range (both with significant capacity for improvement, given the high efficiency of transformer coupling).
Transformer coupling only works for a distance of a few wavelengths. Once you are in the far field, you get a wave which needs an electrical component which cannot form in a conductive material.
Furthermore, eddy currents will greatly attenuate the magnetic fields at high frequencies long before that.
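To put numbers on the eddy-current point, a skin-depth sketch; the material constants are rough textbook values for mild steel (relative permeability in particular varies enormously), so treat the output as order-of-magnitude only:

```python
import math

def skin_depth_m(freq_hz, mu_r, sigma_s_per_m):
    """Depth at which an AC field decays to 1/e inside a conductor."""
    mu = mu_r * 4e-7 * math.pi  # absolute permeability, H/m
    return math.sqrt(2 / (2 * math.pi * freq_hz * mu * sigma_s_per_m))

# Mild steel, roughly: mu_r ~ 100, sigma ~ 6.2e6 S/m.
for f in (50, 1e3, 1e6):
    print(f"{f:>9.0f} Hz: skin depth ~ {skin_depth_m(f, 100, 6.2e6)*1000:.2f} mm")
# ~3 mm at 50 Hz, shrinking fast with frequency: a steel hull is effectively
# opaque to high-frequency magnetic coupling.
```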
If you think about the problem, the answer is obvious:
unable to use electrical connection
unable to use magnetic connection
unable to use radio waves
Just about all that leaves is audio and mechanical movement. Ramp up the data rate and reduce detectability/annoyance by using ultrasonics, with the structure as the mechanical transfer medium.
Just wait for the foam-rubber-based (vibration-damping) Faraday cages to appear.
That'll have the people behind this (http://www.sciencedaily.com/videos/2007/0409-metal_rubber.htm) rubbing their hands with glee: it's conductive and it's rubber, so it'd work as (or at least augment) a Faraday cage AND damp out high-frequency noise!
With the right business guys, this could make them very wealthy.
Using a combination of a ruby laser rod and probably a piezoelectric crystal*, it created a wave they were calling hypersonic sound (because it traveled faster** than sound in any medium that transmitted it). They wanted to use the effect for inside-the-egg egg scramblers and death rays and so forth.
Last I heard, it was being used as the plot device for some spy-kills-spy TV episodes (Man from UNCLE used it once).
As far as transmitting power goes, the little goody that powers the fluorescent light in the back of your laptop screen works much the same way.
*firing into a Sapphire? I think I remember that.
**Obviously not a common compression wave, possibly the same sort of tech they worked up down in Sarasota Florida in the 70s.
I don't have the slightest idea where I found out about all this. I remember there used to be something called a library, and I vaguely remember reading something there.
If I wanted to stop anything being transmitted via power cables, I would make an isolated supply.
3 options spring to mind
take the mains, rectify & smooth and filter it
then feed the output of that into an internal inverter
if I wanted to be really nasty to anyone trying to pick anything up off the mains I would make it a square-wave-driven inverter (which should throw out lots of hash)
M-G Set (http://en.wikipedia.org/wiki/Motor-generator)
Basically an electric motor coupled to a generator.
These were used heavily in military aviation for creating the higher voltages used in the radio systems, but they are more than capable of being scaled up to run larger loads.
Diesel generator in a secure compound
Ferrite will only filter out high-frequency stuff (otherwise you couldn't use it on 50 Hz mains). The hardest thing to filter would be a low-speed data transfer done by increasing and decreasing current draw on the mains (maximum transfer speed would be 100 bps, and it would be very susceptible to interference), but it would get through ferrite untouched and would likely get through an inverter too, since that would pass its increased load on to the outside world.
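For what it's worth, a sketch of that covert channel as on-off keying of a dummy load; `set_load` is a hypothetical stand-in for whatever actually switches the load, and 100 bps is just the ceiling the comment suggests:

```python
import time

SYMBOL_RATE = 100  # bits/s, the ceiling suggested above

def set_load(on: bool):
    """Hypothetical hook: switch a dummy load on or off. Stubbed out here."""
    pass

def transmit(payload: bytes):
    """Send each bit as one symbol period of load-on (1) or load-off (0)."""
    for byte in payload:
        for bit in range(7, -1, -1):
            set_load(bool((byte >> bit) & 1))
            time.sleep(1 / SYMBOL_RATE)  # hold the state for one symbol
    set_load(False)  # leave the line quiet afterwards

transmit(b"hi")  # 16 bits, so about 0.16 s on the wire
```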
The mo-gens in aircraft are used to produce three-phase 400 Hz power for avionics. That's the relentless whine one heard in Cold War surveillance aircraft. We also used them in the ground support facilities for the same purpose -- 60 Hz single-phase in, 400 Hz three-phase out. Big, heavy, and not particularly efficient.
Mo-gens are also used as a type of UPS -- a big heavy flywheel keeps the generator part spinning while a diesel generator starts up.
Way, way back, the spooks discovered how to read decrypted telex traffic in foreign embassies by monitoring electrical noise on the power lines going into the code room. This varied minutely depending on what character was being set up to print next. I imagine that power lines have been carefully shielded and decoupled ever since this came to light, more than 50 years ago now.
> Get power in and out?
High-quality filters. Here's the first relevant hit from googling for screened room mains filter.
100dB+ of attenuation will stop your Ethernet over mains plug from working.
Note the leakage currents though; don't forget your hard-wired secondary earthing.
"The question is, did we patent it? LOL" .... Anomalous Cowturd Posted Thursday 10th March 2011 13:45 GMT
I take it that is sarcasm, or is it irony, AC? :-) Patent spooky technology? I don't think so. One just moves the goalposts every now and then and as one feels like. A little trick learnt/codified in Bletchley huts before most anyone presently working in such fields were even born.
And do you imagine that such works as were leading the field then stopped after the major hostilities, or went deep underground with a cover story that they had disbanded and work had been discontinued?
I was hoping someone had finally managed to prove the existence of axions and at the same time put them to good engineering use [which would be: pass light through a magnetic field on side A, some photons turn into axions and pass through the wall, where another magnetic field turns some back into photons for detection]. That would have been something, even at 330 bps.
Still pretty good though.
Nah! They keep an eye on patent applications and then issue a confiscation order on anything they like. The owner is given nothing and told he will go to jail if he even talks about it.
Then his idea/tech/solution is given to a defence contractor who sells it back to the military.
If you have a good idea that may have military applications, don't patent it in the UK. Instead go to the EU or US patent offices. That way it becomes impossible for the gov to steal it and give it to their cronies.
"If you have a good idea that may have military applications, don't patent it in the UK. Instead go to the EU or US patent offices. That way it becomes impossible for the gov to steal it and give it to thier cronies." ....... Jacqui Posted Thursday 10th March 2011 15:40 GMT
The Russia/China/Japan/Pakistan route is another option. Oh, and there's always India and Israel too. In fact, there is an embarrassment of rich pickings for anyone with something which has no need for defense because it is invisibly stealthy in attack. And we haven't even started considering the number of passionate and/or crazy non-state actors out there, with more wealth than they know what to do with, and the will to change the world.
The secret in such dealings appears to be a variation of the nuclear theme ......... sell them the technology for an arm and a leg but ensure that they don't have the triggers ...... but it may be necessary to set off a big bang somewhere politically/financially sensitive just to assure every man and his dog that anything new is a viable global operating device which they need to have remote control of.
TRN, on the web now, I think this might be the same thing.
There was a lot of damn near steampunk tech (LASERpunk) in the 60s, most of which went obsolete faster than it could be declassified.
..A mechanical translator using LASERS.
..Mechanical OCR using LASERS.
..Mechanical 3D radar displays; this one did not use LASERS, surprise.
and so on and so on.
Then some stuff that I worked on that is probably still classified.
I can't Google it to be sure without the black (steam or LASER powered) gyro-copters showing up.
Now of course we just emulate everything and have become cyberpunk instead.
It's interesting that you interpret the fact that the government doesn't want the information in public hands as being a sign that it is used for something spooky.
Could it not just be that they don't want something with defence implications (as you say, it might be useful for subs) falling into the hands of other countries that have subs?
I agree. When the article mentioned transmission ‘through steel’ , I assumed it meant at a distance. Having to have half the kit on the inside and both halves attached directly to the thick stuff isn’t really that ground-breaking, surely? Just because it is/was secret doesn’t make it clever...
With modern modulation techniques (Google OFDM), which use many discrete states (not just on/off keying), you can convey data at high rates even with only modest-frequency carriers.
Your humble dial-up modem managed around 40kbits/sec over a line with sub-4kHz audio bandwidth for example. ADSL will give you 8-20Mbits/sec within a "radio" spectrum from 8kHz to 1-2MHz.
I know of cheap ($1) high-power (a few tens of watts) piezo transducers which operate at around 1.7 MHz... no doubt many others exist.
12 Mbit/s does not mandate a channel bandwidth of 12 MHz. I would guess that the signalling rate is much lower but uses complex modulation formats to achieve many bits/Hz, much like the ADSL (broadband) comms most of us use every day. These can achieve 10-20 Mbit/s in a few hundred kHz of channel bandwidth. Note I said can ;)
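A quick sanity check of that claim against Shannon's capacity formula, C = B log2(1 + SNR); the bandwidth and SNR figures below are illustrative guesses chosen to match the numbers in the comment, not known parameters of the BAE system:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Maximum error-free bit rate for a channel of given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Dial-up: ~40 kbit/s squeezed into a <4 kHz audio channel (~10+ bits/Hz).
print(f"{shannon_capacity(3500, 35)/1e3:.0f} kbit/s in a 3.5 kHz line at 35 dB SNR")

# The through-steel link: 12 Mbit/s does NOT require 12 MHz of channel.
print(f"{shannon_capacity(1.5e6, 25)/1e6:.1f} Mbit/s in 1.5 MHz at 25 dB SNR")
```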
The Brit: Oh, that's a heinously difficult problem. How can we possibly transmit data through steel?? Maybe we should ask the BOFFINs in Cambridge. X-rays? Hidden dimensions?? This is a tough problem.
The American: We need to maximize transmission capacity. Let's call Xilinx to order 500 of the fattest FPGAs and Intel for 7000 x86 CPUs we will stuff into the submarine. Then Admiral McFart and Admiral McWarmonger can do a video conference over the transducer. Proceeds to work long hours in the lab.
The German: Shannon's channel capacity determines the maximum data rate, if you wanna know, Captain. How much bandwidth do we need if officers can use a keyboard ? Ah, we need 2500 baud AND we can use some nifty crypto which <CLASSIFIED WORD A> <CLASSIFIED WORD B>.
A single 68000 and a run-of-the-mill transducer from the grab bag will do. Let's wire it up in the next two days and then go for beer and Schweinshaxen. Proesterchen!
In a British patent you have to declare if you think it has military implications - in which case you get the patent but it gets classified rather than published. This was part of the row about the privatisation of RSRE/QinetiQ - the buyers got 50 years of every secret patent for the $40M.
It gets tricky for an international team: the UK govt say it has to be revealed to them if there was a Brit on the team, the US say it goes to them if it was invented on US soil (or the other way around).
Back when I used to do three-letter agency stuff I asked what happened if we invented it on a conference call? Withering look from the legal people - just don't, right!
Transatlantic fibre optic cables - but having to have a transmitter on one end and a receiver on the other doesn't make them clever?
The signal processing to put that much bandwidth into and out of the steel - with all the thermal noise, reflections, surface scatter, and multipath effects from discontinuities / grain boundaries in the metal - is clever.
So let me see if I get this right.
All I have to do to access all this super secret information on the inside of a Tempest screened room is get inside the super secret room, plant an ultrasonic transducer, and then plant another ultrasonic transducer on the outside of the room and hey presto.
Well, apart from the 'getting into the secret room' bit. Also the usual construction of Tempest rooms is an inner and an outer metal skin with a non-metal structure sandwiched between them. Ultrasonics will not go through that too easily.
Apart from those problems........
All you need is an air gap between the inside and outside surfaces and this will never work. Ultrasonic waves cannot travel in a gas but only in a liquid or a solid. Also some solids have such a high attenuation that it is impossible to get ultrasound through them; some stainless steels and nickel-based alloys spring straight to mind.
Surely the easiest solution is just build your faraday cage or whatever in two layers with an air gap between them. Obviously you'd need to have a few supports to keep everything structural, but make those of a material very different to steel (and one very poor at conducting ultrasound) and you're sorted?
Coherent gamma ray emitter based on a solid state X-ray generator and a nuclear isomer.
Intermediate-energy gammas go through steel quite effectively and the whole device can be built in less than a cubic centimetre, powered by thermoelectric generators harnessing ambient heat differences.
The lack of macroscopic quantities of the isomer is a non-issue, as for the application in mind you only need milligram quantities in the form of a thin layer over the emitting window of a souped-up Cool-X, using two opposed crystals to achieve the required high keV, and you're all set.
The cyclic nature is not a problem as the data can be sent in bursts.
AC, because this is slightly restricted tech...
Cordwainer Smith shows us the way to thwart spooks listening to the innards of your PC, lads. His idea of hyperspace contained something that "ate life". To protect the crew and passengers of a spaceship, the vessel was double walled, with a layer of oysters sandwiched in the cavity.
I shall surround myself with all the mouldering obsolete computers that are no longer fit for purpose here and power them up. Since most of them are jammed in endless boot-crash-boot cycles, all the nosy spooks will get for their clever hi-tech sniffing is a bunch of Microsoft fail messages, some BIOS output and hopefully a good dose of Iheartu.
They shall not surveil me as I post to El Reg!
Some JPL landers and probes operate in *very* dusty environments while others are just flat-out hostile (Venus having roughly the conditions inside a sulphuric acid reactor), so they are quite keen on putting the tricky stuff inside a pressure hull with a bunch of "robust" but perhaps not too sensitive sensor heads outside.
There are various PDFs on the technology around ultrasound, power ultrasound and coupling through walls.
Still pretty clever.
"a properly-equipped van parked outside a building can snoop into electronics inside even if they make no use of wireless connections. This sort of thing is expensive and very difficult – not something that most organisations have to worry about – but serious spooks can and do carry out such operations."
Sounds like the wonderful "advanced" technology the BBC claims to use to detect licence fee evaders, yet according to this, such technology is difficult to use and very expensive. So either the BBC is talking shite about its TV detector vans, or it's paying a disproportionate amount of money for them and getting very little in return. Given that they aren't paying very much money at all on "detection", I suspect they're talking shite. Again.
They don't contain any equipment, at least not any more. They are just FUD.
CRT-type displays do give off a lot of electromagnetic emissions. It was proved that it was possible to read text on a computer monitor by picking up said emissions and decoding them. That was the basis for TEMPEST being developed, to protect military command rooms from such snooping.
It is unclear whether TV detector vans really ever used this tech in Britain. There would be many difficulties proving which set was watching which channel at any given time, particularly in areas of heavy population density. There would also be plenty of interference from common household appliances such as microwaves, fridges and washing machines.
Of course, most TVs and monitors are not of the CRT type any more. LCD screens do not use high-voltage, high-frequency electronics to produce the picture, and so are much less susceptible to the technique.
Apparently there are still one or two 'detector' vans in use, but we can be reasonably certain they don't contain any actual kit capable of 'detecting' anything. TVL just rely on knocking on doors and asking persistent questions - of course, they won't confirm whether or not they have the kit as they prefer people to worry. There was an FOI request recently IIRC, which they successfully side-stepped by using crime-related reasoning.
IIRC TV detector vans looked for the local carrier wave that had to be generated by the TV, as the TV signal was sent without a carrier (single-sideband suppressed carrier, if my ailing memory is correct). That was how they could tell what channel you were watching - each channel had its own carrier frequency. They also did handheld detectors, which was how they homed in on individual flats in a block.
However, as people have commented, it was all hideously expensive, so in practice rarely deployed ... the hope being that a van with a spinny aerial would shock/shame the dodgers into paying.
The properly-equipped van, or at least the technology to go in it does exist. I'm less sure about the TV detector vans, but electronic eavesdropping on spurious signals is a well-established field. No doubt it has gotten more difficult with improvements in electronics and emissions standards for all electronics on the market, but I don't doubt it can still be done to a degree.
As soon as I saw that physical attachment was required, I (as did many of you) immediately realized sound waves.
An ancient trick is to listen to steel rails for an approaching locomotive. It doesn't work as well as the movies suggest because rails sometimes have air gaps (expansion gaps) every so often, but within a segment it works well enough.
It works because steel is extremely elastic and conserves energy; also, the velocity of sound in steel is very high -- about 6,000 meters per second, versus about 340 meters/sec in air.
Strangely, a Faraday cage, designed to block transverse electromagnetic waves, doesn't stop longitudinal compression waves of ultrasound. Who'd've thunk?
Ultrasound does travel in gas, just not very well.
It travels better in liquids and better still in solids (issues around elasticity notwithstanding).
It is the interface between materials that is the biggest issue, where most losses occur. A coupling agent is generally required (hence icon).
Nyquist applies to A-D conversion. Ultrasound (US) is not inherently analogue.
I use US professionally.
And it is fabulous at assessing steel / welds / metal fatigue / submarines....
In music, the term interval has its own special meaning: an interval is the distance in pitch between any two notes. An interval may be described as horizontal, linear, or melodic if it refers to successively sounding tones, such as two adjacent pitches in a melody, and vertical or harmonic if it pertains to simultaneously sounding tones, such as in a chord. Melodies, scales, and chords are all patterns of melodic or harmonic intervals.

The size of an interval (also known as its width or height) can be represented using two alternative and equally valid methods, each appropriate to a different context: frequency ratios or cents. If frequency is expressed on a logarithmic scale, and along that scale the distance between a given frequency and its double (the octave) is divided into 1200 equal parts, each of these parts is one cent. In twelve-tone equal temperament (12-TET), a tuning system in which all semitones have the same size, one semitone is exactly 100 cents. Just intonation instead tunes intervals as whole-number ratios of frequencies, such as 3:2 or 4:3, and any interval tuned in this way is called a just interval. In other historic meantone temperaments, the pitches of pairs of notes such as F♯ and G♭ may not necessarily coincide; quarter-comma meantone, for example, was designed to produce just major thirds, but only 8 of them are just (5:4, about 386 cents).

Intervals are named by number and quality (perfect, major, minor, augmented, diminished). There is a one-to-one correspondence between staff positions and diatonic-scale degrees, so C–G is a fifth because in any diatonic scale that contains C and G, the sequence from C to G includes five notes. Because this count is inclusive, a stack of three thirds, such as C–E, E–G, and G–B, is a seventh (C–B), not a ninth. The interval number and the number of its inversion always add up to nine (4 + 5 = 9); in ratio terms, the inversion of a 5:4 ratio is an 8:5 ratio. Intervals smaller than a semitone are called microtones, and the 7:4 interval (about 969 cents), also known as the harmonic seventh, has been a contentious issue throughout the history of music theory: it is 31 cents flatter than an equal-tempered minor seventh.

Dissonant intervals are those that cause tension and a desire to be resolved to consonant intervals; the seventh in particular has been described as the ultimate dissonance before resolution into the stillness of the octave. On the symbolic side, Athanasius Kircher proposed a system of correspondences between musical intervals and colors: octave, green; seventh, blue-violet; major sixth, fire red; minor sixth, red-violet; augmented fifth, dark brown; fifth, gold; diminished fifth, blue; fourth, brown-yellow; major third, bright red; minor third, gold; major whole tone, black; minor second, …

Some teaching materials add mnemonics: an "Interval Size Symbol" drawn like a crescendo sign, whose two lines open out to the right to remind students that intervals become bigger as they move in that direction, much as the open end of the > and < symbols in math faces the larger number. And the best way to learn your intervals is to think of them in the context of songs that you already know.
One of the darker Sherlock Holmes stories written by Arthur Conan Doyle, "The Adventure of the Speckled Band," published in 1892, is a story about Sherlock Holmes investigating the case of a young bride-to-be whose sister was murdered. One of the quotes from the story relates to one of the pictures that Peyton and I took for our project: "The family was at one time among the richest in England, and the estates extended over the borders into Berkshire in the north, and Hampshire in the west" (page 2). This relates to the picture representing the rich family in the story being paid.
For our first picture we decided on a background setting outside in some trees, with one of the characters holding a snake and a stack of cash. The snake represents the actual snake in the story that ends up getting sent to kill Helen. This snake is stated to be extremely poisonous, killing soon after the bite. The money represents the part of the story where Holmes reveals to Watson that Roylott plotted to take out both daughters before they got married, because he knew he would have lost a lot of money. Since a lot of money was left by their mother, each daughter would have received a big portion of that cash if she had married.
The second picture represents Holmes listening to Helen tell her story before he agrees to take the case. In the story, Helen visits Holmes and Dr. Watson unexpectedly and tells them the whole story about her sister being killed almost two years earlier, shortly before she was to be married. Helen had heard her sister's dying words, "The speckled band!", but at the time had been unable to understand the meaning of the phrase. Now, however, Helen is engaged herself and has started to experience strange noises and activities around the estate where she and her stepfather live. Repairs and modifications to the house, initiated by her stepfather, have forced her to move into the very room where her sister died. Although Holmes listens to her story and agrees to take the case, he is later confronted by Roylott, who threatens him if he interferes. Holmes proceeds unfazed. After reading Helen's mother's will, Holmes reveals to Watson Roylott's entire plan to remove both of his stepdaughters, essentially to save money.
Our third picture represents Helen's sister with a snake on top of her. After spending a night in Helen's room, Holmes discovers the "speckled band" on the bell cord: the poisonous snake. Holmes strikes the snake and it turns back on Roylott, which is karma, because he was just waiting for it to come back after it had killed Helen's sister; the weapon he used ends up killing him.
In conclusion, this was quite a dark story, and taking the pictures for it while trying to connect them to the text as well as possible was a journey. It is a very interesting story of karma and retaliation.
Doyle, Sir Arthur Conan. "The Adventure of the Speckled Band." 1892. Project Gutenberg.
Victor Frankenstein and the Ancient Mariner have quite a bit in common, but they also have their differences. Victor Frankenstein created a monster when he had nobody else to turn to and was in a bad state of mind, realizing soon after he created the monster that it was a horrendous idea. The Ancient Mariner was on a voyage with his crew when an albatross appeared, a bird known to bring good fortune and good omens to a ship. The Ancient Mariner decided to shoot the albatross dead with his crossbow. This is just one instance of how these two are similar in their own ways.
The events leading up to making the monster were carefully thought out, with many preparations made, even though Frankenstein planned it all in a very poor state of well-being. By contrast, not much led up to the killing of the albatross: "From the fiends, that plague thee thus!– Why look'st thou so? — With my cross-bow I shot the albatross" (450). It seemed rather sporadic and not very methodical, indicating that the Mariner immediately realized it was a mistake. Another significant difference between the two is that Victor Frankenstein created a monster, essentially creating life in the process, while the Ancient Mariner took life in the act of killing the albatross.
They both made huge mistakes but didn't realize it until afterward. Victor Frankenstein didn't realize his mistake until he saw what a terrible thing the monster was. The Ancient Mariner didn't realize that shooting the albatross was a mistake until the whole crew died and the rest of the voyage turned horrible. The repercussions of their actions differ, however. While there are many differences and similarities between these two, they look very different from an outside perspective until you actually dissect the details of both.
The art I chose is The Nightmare, by Henry Fuseli, ca. 1783-91. This artwork drew great attention when it was shown at London's Royal Academy. The center of the artwork revolves around a nightmare, blending "violence, eroticism, and the irrational" in a way many found "excessively disturbing." In the painting a girl lies on a bed, seemingly having the nightmare; a horse pokes its head through a curtain, apparently surprised at what is going on; and a demon sits on the bed beside her, looking down at her with a mischievous grin on its face.
This relates to Frankenstein in an interesting way. In the novel, Victor Frankenstein creates a monster and only later realizes what he has done and what a huge mistake it was. Relating this to the painting, the monster resembles the nightmare and the doctor is the victim of it. Nightmares happen when you are incredibly vulnerable, while you are sleeping. The monster attacks the doctor where he is most vulnerable, through his family; the doctor isolates himself and is clearly not in a state of well-being. The commentary describes the painting as clearly "mad," which parallels the doctor going mad enough to isolate himself completely and create the monster that he created.
Fuseli, Henry. The Nightmare. The Norton Anthology of British Literature: The Romantic Period. 10th ed. Stephen Greenblatt, General Editor. W. W. Norton, 2017. p. C5.
Shelley, Mary Wollstonecraft. Frankenstein; or, The Modern Prometheus: The 1818 Text. Oxford; New York: Oxford University Press, 1998.
In Tara Westover’s Educated, her brother Shawn becomes somewhat abusive in a multitude of different ways throughout the book. Some parts of the book he is being mentally abusive while others he is being physically abusive towards Tara. Going through all of this, especially as a young girl has to be somewhat detrimental in some way. There are many long term effects of going through this kind of domestic abuse as a young girl. On top of her going through this type of abuse, she lived an “off the grid” lifestyle as a child. Her family were basically preppers and had a strong belief to do things like not going to doctors, they were very against public education or any type of education for that matter. Going through all of this would have some detrimental effects, especially long term, at least that’s what one would think. She heavily combats the lingering effects of childhood abuse.
These articles present research on how prevalent these effects are among victims of childhood abuse. Tara Westover describes the abuse she went through throughout the book and how her brother physically and mentally abused her. Her parents didn't help the situation, which only made it worse. Some of the articles focus on how the effects of abuse change with circumstances: if someone is older, in a better financial state, and so on, the effects can differ.
These articles relate to Educated because victims of childhood abuse, for the most part, go through the rest of their lives with lingering effects such as depression, anxiety, aggressiveness, oppositional behavior, and a multitude of social issues. While Tara doesn't seem to have extreme issues carrying herself every day, she still heavily combats the effects of the mental and physical abuse her brother put her through. It is striking that many victims of abuse as intense as Tara's show noticeable detrimental effects for a large part of their lifetime, while she has done well combating the effects of both the lifestyle she lived and the abuse she suffered as a child.
Emma Crawford's article about women's understanding of the effects of domestic abuse goes in depth on the long-term effects that domestic abuse has on women. This article focuses on the unique perspectives of women who have been domestically abused in their lives.
Eight women were interviewed during this study, and it showed that the domestic abuse these women went through drastically changed their lives forever. No matter who in the family is the abuser, the outcome is almost always the same set of detrimental long-term effects on the victim. This relates to Educated and the effects of past abuse that Tara carries.
Fantuzzo, J. W., & Mohr, W. K. (1999). Prevalence and effects of child exposure to domestic violence. The Future of Children, 9(3), 21. doi:http://dx.doi.org/10.2307/1602779
John W. Fanuzzo’s article on the prevalence and effects of child exposure to domestic violence focuses on long term effects of children that go through domestic violence. The research done in this article shows that a large number of American children are affected by domestic violence in their lifetime. This article was written in 1999 and states that at this time there weren’t many studies being done on the effects of domestic violence on children. Actually there were many studies being made on the effects of domestic violence on women. This article focuses solely on mental effects on children from domestic abuse.
A large number of American children are affected mentally and physically by domestic abuse. Many factors depending on the child may have adverse effects. Such as a kid that is older, has a better financial situation, their parents abuse substances, etc. can all change the outcome of domestic abuse. It all basically depends on the child and their family situation. Another factor that comes into play is if the child is being directly physically abused or not. This plays into Tara’s life because she was being directly abused by her brother.
Jouriles, E. N., PhD., McDonald, R., PhD., Slep, A. M. S., PhD., Heyman, R. E., PhD., & Garrido, E., PhD. (2008). Child abuse in the context of domestic violence: Prevalence, explanations, and practice implications. Violence and Victims, 23(2), 221-35. doi:http://dx.doi.org/10.1891/0886-6708.23.2.221
This article addresses how prevalent child abuse is in domestically violent families and whether there are patterns of child abuse among these families. It also focuses on how some children have trouble adjusting, since they are at risk for symptoms like oppositional behavior, depression, anxiety, a multitude of social issues, and aggressive behavior.
This relates to Educated in a different way, because Tara doesn't seem to show the worst of these effects, although she went through a lot of this abuse with her brother. The fact that she doesn't have extreme lingering effects is quite interesting. It seems as if she was a special case and didn't suffer many of the long-term effects most children in her situation would.
Quinn, D. M., Williams, M. K., Quintana, F., Gaskins, J. L., Overstreet, N. M., Pishori, A., . . . Chaudoir, S. R. (2014). Examining effects of anticipated stigma, centrality, salience, internalization, and outness on psychological distress for people with concealable stigmatized identities. PLoS One, 9(5) doi:http://dx.doi.org/10.1371/journal.pone.0096977
This article discusses how stigmatized identities raise rates of anxiety and depression, and the critical role this plays in mental health treatment. Experiences of childhood abuse have been shown to put individuals at greater risk of psychological distress than others. Levels of psychological distress such as depression and anxiety were similar across domestic violence and childhood abuse.
This relates to Educated because there are plenty of examples of other victims of childhood abuse who suffer these long-term lingering effects, having to deal with depression and anxiety for their entire lives, while Tara, who in my opinion went through something much worse, is still able to carry herself and have a normal social life. Living off the grid and being heavily abused by your brother is a whole separate thing to have to deal with.
In chapter 12 of "Educated" by Tara Westover, the author talks about her experience with her brother Shawn and a girl he liked. Towards the beginning of the chapter, Shawn starts calling her "sittle lister." This is just another instance of Shawn's personality changes and of him being violent and strange in general. Sadie sat next to Shawn at a rehearsal, and later she asked him if he liked her, to which he responded no, describing her as having "fish eyes": "Yup, fish eyes. They're dead stupid, fish. They're beautiful, but their heads're as empty as a tire" (Westover 107). After he says this, when she comes over to his house after school, he is incredibly rude to her. This portion of the text shows that Shawn isn't only mean to his family but is also mean and violent to the people around him.
For this analysis, I am focusing on the character Shawn throughout chapter 12. In this portion of the text, Shawn becomes rather rude, though less violent than in other portions of the book. Sadie is a girl Shawn would look for as she moved between different buildings at school. One day he stopped on the side of the highway to watch for her; he saw her, but she was walking down the steps with Charles, and she didn't even notice Shawn's truck. He was visibly upset by seeing this, but he proposed that if he completely stopped acknowledging her, she would eventually get desperate and would suffer. This side of Shawn isn't so much rude or violent as jealous and overprotective. After Sadie sits next to him at a rehearsal, she comes to his house after school. Shawn asks her for water, but when she brings it back he just asks for something else, for no apparent reason: "When she brought that he'd ask for milk, then water again, ice, no ice, then juice. This could go on for thirty minutes before…" (Westover 109). Every time she came back, he was just demanding something else. The author states that she was grateful every night they went out.
Towards the end of the chapter is whenever Shawn demonstrates his violent tendencies once again towards Tara. One night when Shawn arrived at home, everyone in the house was long asleep except for Tara. Shawn proceeds to ask her for a glass of water, they exchange some words but she ends up getting him a glass of water. Except she didn’t hand it to him, instead she dumped it all over his head. This is whenever Shawn gets instantly aggressive. He tries to force her to apologise but she refuses. He then gets violent and grabs her hair and literally drags her into the bathroom and puts her head down into the toilet and tries to force her to apologise. This is a very good example of Shawn demonstrating his aggressiveness and violent tendencies.
To conclude, this portion of the text was very short but eventful. However it wasn’t like it was out of the ordinary for her. In other portions of the text, she recounts moments of him getting aggressive, being extremely rude, and being violent caused by brain trauma. This is a horrific thing to go through. However, during this event she genuinely tries to “rebel” against his will rightfully so but doesn’t succeed.
Some research that I have conducted in the past would include many things from different courses, even directly from English III last spring. Conducting research as well as organizing it and presenting it is something that I have always enjoyed, especially if the subject is about one of my interests. Research assignments are something that I have always excelled at academically.
I have always enjoyed doing research on controversial topics, such as politics and government. In my English II high school class that I took last spring, we were assigned a speech on any topic we felt inclined to choose and we had to present the speech to the class. Afterward, we voted on each other’s speeches based on how we as speakers presented ourselves, our collected evidence, and how we presented our message to the class. The topic I chose was government and politics. I decided to name my speech “America Is Superior.” Of course, this was opinion based and most of the audience disagreed with my perspective. However, I thought if I could gather pertinent evidence and present my theories effectively during my speech the audience might take my opinions into consideration. For this topic specifically, I was tasked with gathering a decent amount of evidence and being able to persuade an audience. Writing all of this information really enlightened me a great deal. The assignment taught me how to organize material and be a persuasive force to an audience. It also taught me a lot about politics. On page 281 of the Norton Field Guide, it states, “we write in order to figure out what we think.” This resonated with me in the fact that gathering evidence and writing relevant information in a clear and organized manner does lead to being able to persuade an audience.
To conclude, I have conducted multiple kinds of research and it is very important how you organize as well as present your research. I also learned some things while reading the Norton Field Guide such as the quote I used earlier about learning while you write which is what happened to me while writing my speech. I learned more than I bargained for about my argument as well as the other side of the topic. The knowledge I gained enabled me to view both sides and strengthen my argument to the point that I actually persuaded others to my side.
Bullock, Richard, et al. Chapter 25: “Writing as Inquiry.” The Norton Field Guide to Writing with Readings and Handbook. 5th ed. Norton, 2019. pp. 281-284. PDF.
Ever since I was a young kid, I have been interested in movies. I have always watched a few select movies over and over again because I never get tired of a movie that I am interested in. Some, I have even watched so much that I have memorized the entire dialogue of the entire film. Every time you watch a film again, you learn something different or pick up on something different that you did not catch the first time you watched it. I have probably watched some movies upwards to hundreds of times over the past couple years. The thing that makes films so appealing to me, is the true story behind the film and the action. It was extremely difficult to narrow my list down to my five favorite films as I have many more.
My top movie for my list of favorite films would have to be Mine (2016). This is an action movie that I have known ever since it was released and have given it more than enough views. The second film in the list is 13 Hours: The Secret Soldiers of Benghazi (2016). This one has been a movie that I have watched the most and the reason it is so appealing to me is the backstory and the real history behind the movie. The next film in my list is Gran Torino (2008). Being a sad movie, it is still a good old style movie that deserves many views. The fourth movie in the list is Interstellar (2014). This sci-fi film is a very confusing three hours to watch and a lot to take in. The last movie in my list of favorite films is The Martian (2015). Being the only movie that I have actually read the book about, it is a really good and humorous film.
13 Hours: The Secret Soldiers of Benghazi. Directed by Michael Bay, performances by John Krasinski, James Dale, Pablo Schreiber, Max Martini, David Denman, and Dominic Fumusa, Paramount Pictures, 2016.
Gran Torino. Directed by Clint Eastwood, performances by Clint Eastwood, Bee Vang, Ahney Her, Christopher Carley, Brian Haley, and Scott Eastwood, Warner Bros, 2008.
Interstellar. Directed by Christopher Nolan, performances by Matthew McConaughey, Anne Hathaway, Jessica Shastain, Michael Caine, Mackenzie Foy, Casey Affleck, and Matt Damon, Paramount Pictures, 2014.
Mine. Directed by Fabio Guagoline and Fabio Resinaro, performances by Tom Cullen, Annabelle Wallis, Armie Hammer, and Clint Dyer, Universal Pictures, 2016.
The Martian. Directed by Ridley Scott, performances by Matt Damon, Jessica Chastain, Kate Mara, Chiwetel Ejiofor, Jeff Daniels, Michael Pena, and Donald Glover, 20th Century Fox, 2015.
Through the process of reading and writing textual analysis, and literacy narratives, I have discovered that it is now much easier to create and draft a piece of writing. This results in construction papers easily, better word choice, and better structure.
Doing extensive reading and writing in this classroom has been very beneficial. I have experienced the advancements that I have made through the process of reading many different passages and writing different types of papers. One of the biggest things that helped me from reading these passages was learning how to write more effective transitions, better word choice, and writing style. Writing summaries on reading beings on the realization that a passage needs to be read multiple times to be fully understood.
Even though playing scrabble seems like a simple thing but it actually helps out a lot. Word choice has been much easier when composing a paper. Playing this game also helps for turning away from screens and being able to be alone with your thoughts. This is definitely needed in many other classrooms and is a beneficial and calming experience.
These things were all very significant experiences over the course and were very beneficial. These have all aided my writing in very specific ways such as things like word choice, text structure, and transitions. Doing all of these things consistently has bettered my writing which has resulted in better grades in writing classes. Being able to visuallly see by improvements in this course is most definitely satisfying. It is very reassuring to see improving grades, having peers read some of my work, and self-revise my own work. Some other things that I have experienced in this course is being able to provide clear closure and clear transitions. All of these things are the result of doing these simple practices in this course.
Colleges have made many advertisements over the course of the past few decades. These ads usually are specifically targeted towards a certain audience. Lenoir-Rhyne specifically advertised towards students who bring a sense of style on campus. The ad seems to be a couple of students socializing on the campus, most of them looking well-dressed in bow ties, dresses, ties, etc. They also seem very happy to be on campus. The base message Lenoir-Rhyne University is trying to convey through this advertisement is that you can socialize on campus and be happy to be here.
The text in the ad shows many things Lenoir-Rhyne is specifically targeting, such as style, polished programs of study, and leaders are only a few key phrases in the text. Style is very emphasized towards the beginning of the text when style is compared to the Universities programs of study, “Students at Lenoir-Rhyne University bring a sense of style as polished as our programs of study” (1). Along with this statement, the image of the students reflects this as all of them in the ad seem affluent. This implies that you need to be of an upper middle class to be able to attend the University because they want to bring a polished sense of style and it is expensive to attend the college. This statement also tells the audience that the programs of study must be very well because it states that students “bring a sense of style as polished as our programs of study” (1), and the students in the image are all dressed very spruce.
Another thing that the advertisement focuses on is that many students that attend the University are the “leaders of tomorrow.” This implies that all the students that attend Lenoir-Rhyne, become leaders because of their programs and how their university is structured. Knowing that most students that attend the university become leaders, this would obviously appeal to students that want to become leaders.
This Lenoir-Rhyne University advertisement appeals heavily to an audience that wants to be leaders, and are wealthy considering the style being as polished as their programs of study. Students also want to go here to get good programs of study that are just as polished as the style that is present on the campus. The choice of words that Lenoir-Rhyne chooses appeals to many audiences and is very diverse.
I have never really liked writing just about up until freshman year in high school. My English classes in middle school weren’t very serious at all and I felt like they were just teaching us a bunch of things that were on the final exam without letting us write or do anything related to writing. Although, I don’t really blame them because trying to get a bunch of eighth graders to write without complaining the entire time is pretty tough. It wasn’t until freshman year in high school, I started to write papers and do things like research papers and use different formats and outlines. Doing this and revising them made me realize that writing effectively was really important. It also made me realize that if you write about something that somewhat interests you, writing can become something that you thoroughly enjoy.
I attended the Alexander Early College and the English teachers here are really good at sparking discussion with writing and really taught me how to write effectively and use outlines. I wrote a few papers in there that wasn’t really the most interesting thing and I didn’t really enjoy writing at that point. However, towards the end of the school year, we wrote a paper about Nike sweatshops. This was a research paper and we had to make source cards, note cards, and a bunch of in-text references to complete this paper. The amount of effort I put into this assignment along with how time-consuming it was to find sources and complete references made me enjoy it. The topic really interested me as well and searching for these sources made me realize a few more things about sweatshops. It was an enjoyable piece of writing while I was learning about something that I became interested in.
Looking forward to any type of writing is something that I find myself doing very often given the fact that I’m very interested in it now and enjoy the learning experience that comes along with a lot of writing. Doing a research paper of sweatshops really changed my perspective on how I perceive writing because having an enjoyable experience doing something and learning a lot from it, changes your perspective on it most of the time. As well as vice versa when if you have a bad experience in something, you probably won’t enjoy it as much the next time. This has also really changed my attitude towards writing. I used to hate it and as weird as it may sound, I thought only people that loved to read and that were really smart wrote. But now as I write, I realize that writing can be very enjoyable. | <urn:uuid:43b080df-ca7a-44c0-8675-2290751602e9> | CC-MAIN-2021-21 | https://jsimpson.home.blog/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00173.warc.gz | en | 0.978437 | 5,911 | 2.859375 | 3 |
Object-Oriented Programming is The Biggest Mistake of Computer Science
C++ and Java probably are some of the worst mistakes of computer science. Both have been heavily criticized by Alan Kay, the creator of OOP himself, and many other prominent computer scientists. Yet C++ and Java paved the way for the most notorious programming paradigm — the modern OOP.
Its popularity is very unfortunate, it has caused tremendous damage to the modern economy, causing indirect losses of trillions upon trillions of dollars. Thousands of human lives have been lost as a result of OOP. There’s no industry that went untouched by the latent OO crisis, unfolding right before our eyes for the past three decades.
Why is OOP so dangerous? Let’s find out.
Imagine taking your family out for a ride on a beautiful Sunday afternoon. It is nice and sunny outside. All of you enter the car, and you take the exact same highway that you’ve already driven on a million times.
Yet this time something is different — the car keeps accelerating uncontrollably, even when you release the gas pedal. Brakes aren’t working either, it seems they’ve lost their power. In a desperate attempt to save the situation, you pull the emergency brake. This leaves a 150-feet long skid mark on the road before your car runs into an embankment on the side of the road.
Sounds like a nightmare? Yet this is exactly what has happened to Jean Bookout in September, 2007 while she was driving her Toyota Camry. This wasn’t the only such incident. It was one of the many incidents related to the so-called “unintended acceleration”, which has plagued Toyota cars for well over a decade, causing close to 100 deaths. The car manufacturer was quick to point fingers at things like “sticky pedals”, driver error, and even floor mats. Yet some experts have long suspected that faulty software might have been at play.
To help with the problem, software experts from NASA have been enlisted, to find nothing. Only a few years later, during the investigation of the Bookout incident, the real culprit was found by another team of software experts. They’ve spent nearly 18 months digging through Toyota code. They’ve described the Toyota codebase as “spaghetti code” — a programmer lingo for tangled mess of code.
The software experts have demonstrated more than 10 million ways for the Toyota software to cause unintended acceleration. Eventually, Toyota was forced to recall more than 9 million cars, and paid over $3 billion in settlement fees and fines.
Is spaghetti code a problem?
100 human lives taken by some software fault is way too many. What makes this really scary is that the problem with Toyota code isn’t unique.
Two Boeing 737 Max airplanes have crashed, causing 346 deaths, and more than 60 billion dollars in damage. All because of a software bug, with 100% certainty caused by spaghetti code.
Spaghetti code plagues way too many codebases worldwide. On-board airplane computers, medical equipment, code running on nuclear power plants.
Program code isn’t as much written for the machines, as it is written for fellow humans. As Martin Fowler has said, “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”
If the code doesn’t run, then it’s broken. Yet if people can’t understand the code, then it will be broken. Soon.
Let’s take a quick detour, and talk about the human brain. The human brain is the most powerful machine in the world. However, it comes with its own limitations. Our working memory is limited, the human brain can only think about 5 things at a time. This means, that program code should be written in a way that doesn’t overwhelm the human brain.
Spaghetti code makes it impossible for the human brain to understand the codebase. This has far-reaching consequences — it is impossible to see if some change will break something else. Exhaustive tests for flaws become impossible. No one can even be sure if such a system is working correctly. And if it does work, why does it even work?
What causes spaghetti code?
Why does code become spaghetti code over time? Because of entropy — everything in the universe eventually becomes disorganized, chaotic. Just like cables will eventually become tangled, our code eventually will become a tangled mess. Unless sufficient constraints are put in place.
Why do we have speed limits on the roads? Yes, some people will always hate them, but they prevent us from crashing to death. Why do we have markings on the road? To prevent people from going the wrong way, to prevent accidents.
A similar approach would totally make sense when programming. Such constraints should not be left to the human programmer to put in place. They should be enforced automatically, by tooling, or ideally by the programming paradigm itself.
Why is OOP the root of all evil?
How can we enforce sufficient constraints to prevent the code from turning into spaghetti? Two options — manually, or automatically. Manual approach is error-prone, humans will always make errors. Therefore, it is logical for such constraints to be automatically enforced.
Unfortunately, OOP is not the solution we’ve all been looking for. It provides no constraints to help with the problem of code entanglement. One can become proficient in various OOP best practices, like Dependency Injection, test-driven Development, Domain-Driven Design, and others (which do help). However, none of that is enforced by the programming paradigm itself (and no such tooling exists that would enforce the best practices).
None of the built-in OOP features help with preventing spaghetti code — encapsulation simply hides and scatters state across the program, which only makes things worse. Inheritance adds even more confusion. OOP polymorphism once again makes things even more confusing — there are no benefits in not knowing what exact execution path the program is going to take at runtime. Especially when multiple levels of inheritance are involved.
OOP further exacerbates the spaghetti code problem
The lack of proper constraints (to prevent code from becoming a tangled mess) isn’t the only drawback of OOP.
In most object-oriented languages, everything by default is shared by reference. Effectively turning a program into one huge blob of global state. This goes in direct conflict with the original idea of OOP. Alan Kay, the creator of OOP had a background in biology. He had an idea for a language (Simula), that would allow writing computer programs in a way that resembles biological cells. He wanted to have independent programs (cells) communicate by sending messages to each other. The state of the independent programs would never be shared with the outside world (encapsulation).
Alan Kay never intended for the “cells” to reach directly into the internals of other cells to make changes. Yet this is exactly what happens in modern OOP, since in modern OOP, by default, everything is shared by reference. This also means that regressions become inevitable. Changing one part of the program will often break things somewhere else (which is much less common with other programming paradigms, like Functional Programming).
We can clearly see that modern OOP is fundamentally flawed. It is the “monster” that will torture you every day at work. And it will also haunt you at night.
Let’s talk about predictability
Spaghetti code is a big problem. And Object-Oriented code is especially prone to spaghettification.
Spaghetti code makes software unmaintainable. Yet it is only a part of the problem. We also want software to be reliable. But that is not enough, software (or any other system for that matter) is expected to be predictable.
A user of any system should have the same predictable experience, no matter what. Pressing the car accelerator pedal should always result in the car speeding up. Pressing the brakes should always result in the car slowing down. In computer science lingo, we expect the car to be deterministic.
It is highly undesirable for the car to exhibit random behaviors, like the accelerator failing to accelerate, or the brakes failing to brake (the Toyota problem). Even if such issues occur only once in a trillion times.
Yet the majority of software engineers have the mindset of “the software should be good enough for our customers to keep using it”. Can’t we really do any better than that? Sure, we can, and we should do better than that! The best place to start is to address the nondeterminism of our programs.
— Wikipedia article on Nondeterministic Algorithms
If the above Wikipedia quote on nondeterminism doesn’t sound good to you, it is because nondeterminism isn’t any good. Let’s take a look at a code sample that simply calls a function:
We don’t know what the function does, but it seems that the function always returns the same output, given the same input. Now let’s take a look at another example that calls another function,
This time, the function has returned different values for the same input. What is the difference between the two? The former function always produces the same output, given the same input, just like functions in mathematics. In other words, the function is deterministic. The latter function may produce the expected value, but this is not guaranteed. Or in other words, the function is nondeterministic.
What makes a function deterministic or nondeterministic?
- A function that does not depend on external state is 100% deterministic.
- A function that only calls other deterministic functions is deterministic.
In the above example,
computea is deterministic, and will always give the same output, given the same input. Because its output depends only on its argument
On the other hand,
computeb is nondeterministic because it calls another nondeterministic function
Math.random(). How do we know that
Math.random() is nondeterministic? Internally, it depends on system time (external state) to calculate the random value. It also takes no arguments — a dead giveaway of a function that depends on external state.
What does determinism have to do with predictability? Deterministic code is predictable code. Nondeterministic code is unpredictable code.
From Determinism to Nondeterminism
Let’s take a look at an addition function:
We can always be sure, that given the input of
(2, 2) , the result will always be equal to
4 . How can we be so sure? In most programming languages, the
addition operation is implemented on the hardware, in other words, the CPU is responsible for the result of the computation to always remain the same. Unless we’re dealing with the comparison of floating-point numbers, (but that is a different story, unrelated to the problem of nondeterminism). For now, let’s focus on integers. The hardware is extremely reliable, and it is safe to assume that the result of addition will always be correct.
Now, let’s box the value of
So far so good, the function is deterministic!
Let’s now make a small change to the body of the function:
What happened? Suddenly the result of the function is no longer predictable! It worked fine the first time, but on every subsequent run, its result started getting more and more unpredictable. In other words, the function is no longer deterministic.
Why did it suddenly become non-deterministic? The function has caused a side effect by modifying a value outside of its scope.
A deterministic program guarantees that
2+2==4 . In other words, given an input of
(2, 2) , the function
add , should always result in the output of
4 . No matter how many times you call the function, no matter whether or not you call the function in parallel, and no matter what the world outside of the function looks like.
Nondeterministic programs are the exact opposite. In most of the cases, the call to
add(2, 2) will return
4 . But once in a while, the function might return 3, 5, or even 1004. Nondeterminism is highly undesirable in programs, I hope you can now understand why.
What are the consequences of nondeterministic code? Software defects, or as they are more commonly referred to as “bugs”. Bugs make the developers waste precious time debugging, and significantly degrade the customer experience if they made their way into production.
To make our programs more reliable, we should first and foremost address the issues of nondeterminism.
This brings us to the problem of side effects.
What is a side effect? If you’re taking medication for headache, but that medication is making you nauseous, then the nausea is a side-effect. Simply put, something undesirable.
Imagine, that you’ve purchased a calculator. You bring it home, start using it, and then suddenly realize that this is not a simple calculator. You got yourself a calculator with a twist! You enter
10 * 11 , it prints
110 as the output, but it also yells ONE HUNDRED AND TEN back at you. This is a side effect. Next, you enter
41+1 , it prints
42 , and comments “42, the meaning of life”. Side effect as well! You’re puzzled, and start talking to your significant other that you’d like to order pizza. The calculator overhears the conversation, says “ok” out loud, and places a pizza order. Side effect as well!
Let’s get back to our addition function:
Yes, the function performs the expected operation, it adds
b . However, it also introduces a side-effect. The call to
a.value += b.value has caused the object
a to change. The function argument
a was referring to the object
two , and therefore
two.value is no longer equal to
2 . Its value became
4 after the first call,
6 after the second call, and so on.
Having discussed both determinism and side-effects, we’re ready to talk about purity. A pure function is a function that is both deterministic, and has no side effects.
Once again, deterministic means predictable — the function will always return the same result, given the same input. And no side effects means that the function will do nothing other than returning a value. Such function is pure.
What are the benefits of pure functions? As I’ve already said, they’re predictable. This makes them very easy to test (no need for mocks and stubs). Reasoning about pure functions is easy — unlike in OOP, there’s no need to keep in mind the entire application state. You only need to worry about the current function you’re working on.
Pure functions can be composed easily (since they don’t change anything outside of their scope). Pure functions are great for concurrency, since no state is shared between functions. Refactoring pure functions is pure joy — just copy and paste, no need for complex IDE tooling.
Simply put, pure functions bring the joy back into programming.
How pure is Object-Oriented Programming?
For the sake of an example, let’s talk about two features of OOP — getters and setters.
The result of a getter depends on external state — the object state. Calling a getter multiple times may result in different output, depending on the state of the system. This makes getters inherently non-deterministic.
Now, setters. Setters are meant to change state of an object, which makes them inherently side-effectful.
This means that all methods (apart maybe from static methods) in OOP are either non-deterministic, or cause side-effects, neither of each is good. Hence, Object-Oriented Programming is anything but pure, it is the complete opposite of pure.
There’s a silver bullet.
Yet few of us dare to try it.
Being ignorant is not so much a shame, as being unwilling to learn.
— Benjamin Franklin
In the gloomy world of software failures, there is a ray of hope, something that will solve most, if not all of our problems. A true silver bullet. But only if you’re willing to learn and apply — most people aren’t.
What is the definition of a silver bullet? Something that can be used to solve all of our problems. Is mathematics a silver bullet? If anything, it comes very close to being a silver bullet.
We owe it to the thousands of extremely intelligent men and women who worked hard for millennia to give us mathematics. Euclid, Pythagoras, Archimedes, Isaac Newton, Leonhard Euler, Alonzo Church, and many many others.
How far do you think our world would go if something nondeterministic (i.e. unpredictable) would be the backbone of modern science? Likely not very far, we’d stay in the middle ages. This has actually happened in the world of medicine — in the past there were no rigorous trials to confirm the efficacy of a particular treatment or medication. People relied on the opinion of doctors to treat their health problems (which unfortunately still happens in countries like Russia). In the past, ineffective techniques like bloodletting have been popular. Something as unsafe as arsenic was widely used.
Unfortunately, the software industry of today is way too similar to the medicine of the past. It is not based on solid foundation. Instead, the modern software industry is mostly based on a weak wobbly foundation, on called Object-Oriented Programming. Had human lives directly depended on software, OOP would long be gone and forgotten, just like bloodletting and other unsafe practices.
A solid foundation
Is there an alternative? Can we have something as reliable as mathematics in the world of programming? Yes! Many mathematical concepts translate directly to programming, and lay the foundation of something called Functional Programming.
Functional Programming is the mathematics of programming — an extremely solid and robust foundation, that can be used to build solid and robust programs. What makes it so robust? It is based upon mathematics, Lambda Calculus in particular.
To draw a comparison, what is the modern OOP based upon? Yes, the proper Alan Kay OOP was based on biological cells. However, the modern Java/C# OOP is based on a set of ridiculous ideas such as classes, inheritance and encapsulation, it has none of the original ideas that the genius of Alan Kay has invented. And the rest is simply a set of bandaids to address the shortcomings of its inferior ideas.
What about functional programming? It’s core building block is a function, in most cases a pure function. Pure functions are deterministic, which makes them predictable. This means that programs composed of pure functions will be predictable. Will they always be bug-free? No, but if there’s a bug in the program, it will be deterministic as well — the same bug will always occur for the same inputs, which makes it easier to fix.
How did I get here?
In the past, the
goto statement was widely used in programming languages, before the advent of procedures/functions. The
goto statement simply allowed the program to jump to any part of the code during execution. This made it really hard for the developers to answer the question “how did I get to this point of execution?”. And yes, this has caused a large number of bugs.
A very similar problem is happening nowadays. Only this time the difficult question is “how did I get to this state” instead of “how did I get to this point of execution”.
OOP (and imperative programming in general) makes answering the question of “how did I get to this state?” hard. In OOP, everything is passed by reference. This technically means, that any object can be mutated by any other object (OOP places no constraints to prevent that). And encapsulation doesn’t help at all — calling a method to mutate some object field is no better than mutating it directly. This means, that the programs quickly turn into a mess of dependencies, effectively making the whole program a big blob of global state.
What is the solution to make us stop asking the question “how did I get to this state”? As you may have already guessed, functional programming.
A lot of people in the past have resisted the recommendation to stop using
goto, just like many people of today resist the idea of functional programming, and immutable state.
But wait, what about spaghetti code?
In OOP, it is considered best practice to “prefer composition over inheritance”. Such best practices should theoretically help with spaghetti code. Unfortunately, this only is a “best practice”. The object-oriented programming paradigm itself places no constraints to enforce such best practices. It’s up to the junior developers on your team to follow such best practices, and for them to be enforced in code reviews (which won’t always happen).
What about functional programming? In functional programming, functional composition (and decomposition) is the only way to build programs. This means that the programming paradigm itself enforces composition. Exactly what we’ve been looking for!
Functions call other functions, bigger functions are always composed from smaller functions. And that’s it. Unlike in OOP, composition in functional programming is natural. Furthermore, this makes processes like refactoring extremely easy — simply cut code, and paste it into a new function. No complex object dependencies to manage, no complex tooling (e.g. Resharper) needed.
One can clearly see that OOP is an inferior choice for code organization. A clear win for functional programming.
But OOP and FP are complementary!
Sorry to disappoint you. They’re not complementary.
Object-oriented programming is the complete opposite of functional programming. Saying that OOP and FP are complementary probably is the same as saying that bloodletting and antibiotics are complementary… Are they?
OOP violates many of the fundamental FP principles:
- FP encourages purity, whereas OOP encourages impurity.
- FP code fundamentally is deterministic, and therefore is predictable. OOP code is inherently nondeterministic, and therefore is unpredictable.
- Composition is natural in FP, it is not natural in OOP.
- OOP typically results in buggy software, and spaghetti code. FP results in reliable, predictable, and maintainable software.
- Debugging is rarely needed with FP, more often than not simple unit tests will do. OOP programmers, on the other hand, live in the debugger.
- OOP programmers spend most of their time fixing bugs. FP programmers spend most of their time delivering results.
Ultimately, functional programming is the mathematics of the software world. If mathematics has given a very solid foundation to modern sciences, it can also give a solid foundation to our software, in the form of functional programming.
Take action, before it’s too late
OOP was a very big and a terribly expensive mistake. Let’s all finally admit it.
Knowing that the car I ride in runs software written with OOP makes me scared. Knowing that the airplane that takes me and my family on a vacation uses Object-Oriented code doesn’t make me feel any safer.
The time has come for all of us to finally take action. We should all start making small steps, to recognize the dangers of Object-Oriented Programming, and start making the effort to learn Functional Programming. This is not a quick process, it will take at least a decade before the majority of us will make the shift. I believe that in the near future, those who keep using OOP will be viewed as “dinosaurs”, similar to the COBOL programmers of today, obsolete. C++ and Java will die. C# will die. TypeScript will also soon become a thing of the past.
I want you to take action today— if you haven’t already, start learning functional programming. Become really good at it, and spread the word. F#, ReasonML, and Elixir are all great options to get started.
The big software revolution has started. Will you join, or will you be left behind? | <urn:uuid:31a845f4-f3b1-4b65-be58-4a919e944ed7> | CC-MAIN-2021-21 | https://suzdalnitski.medium.com/oop-will-make-you-suffer-846d072b4dce?source=follow_footer------------------------------------- | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990419.12/warc/CC-MAIN-20210511214444-20210512004444-00497.warc.gz | en | 0.930523 | 5,188 | 2.875 | 3 |
Chances are you’ve heard about speed reading. A magical technique that allows you to glance at a page of text and immediately extract all of its meaning. But after digging deeper and trying the techniques yourself, you realized that it does not seem to work for you. You may even feel guilty about not being able to read fast enough. After all, if there is a way to read 10,000 words per minute, are you not wasting a lot of time? I was in a similar position myself not too long ago.
Eventually, I decided to build my own framework for reading. I collected some bits and pieces from various sources, added my own insights to it — and voilà! You can read the result here.
The good news is: Once you understand some basic facts and have the right terminology, the mystery around it fades away. What is left are a few practical and useful techniques that help you optimize your reading.
A fixation point is a point that your eyes fixate on. Every time you look at something intently, you are placing your eyes on an imaginary point. It could be on the lamp on your desk, on the nose of your conversation partner or on the title of a book lying in front of you.
The fixation area is the area around a fixation point that you can still see and extract information from — without directly focusing on it. In other words, it is everything that you can see without moving your eyes. It can be approximated by a round shape. This should not be confused with peripheral vision, which refers to the outmost parts of your entire field of view. The fixation area only refers to the periphery of a single fixation point.
A saccade is a single atomic eye movement, going from one fixation point to another. Of course, the fixation area is also changed through that movement. We tend to believe that our eyes move continuously, when in fact, they make short and discrete movements.
Now, let’s apply these basic concepts to reading.
Of course, it is impossible to read a word or sentence without ever having seen it¹. Therefore, to read something in full, each word must have appeared in a fixation area at least once. Since each additional fixation point takes time, we look for a method that minimizes them while still covering the entire text. If we also assume that we read from left to right and leave no gaps, there aren’t many options left.
We end up with the following standard reading method:
- Read from left to right, placing 2–3 fixation points per line
- Don’t go back in the text (avoid regression)
- Place the fixation points in a way that exploits the fixation areas as much as possible
- In particular, don’t place them on the outer edges of the text, but indent them a few centimeters (for example, on the third and third-to-last word)
This concludes our take on the foundation of reading. All other reading methods are based on this standard method.
It’s not about speed
Numbers are tempting, especially if they are indicators of intelligence. We all know that those who read are intelligent. So those who read very quickly must be very intelligent, right? Unfortunately, numbers can also be misguiding.
After all, reading something quickly does not imply that you understood the material or that you will retain it. Reading speed is a nominal value — it is useless without putting in into proper context. If you gave somebody a boring book to read, he will claim to have read it in no time. His reading speed will be through the roof. But it is doubtful that it has brought him much use.
Instead of optimizing reading speed, we should optimize reading itself, i.e. the extraction of meaning from text. Sometimes this means to read faster and sometimes to read slower. The main way to accomplish that is to remove the mechanical barriers that keep you from reading as fast as you could. This is what proper reading should be about and this is also the goal of the method outlined above. Once you’ve mastered it, the mechanics of reading will step into the background and your focus will shift to extracting and processing meaning².
Thinking speed: A natural limit
Your reading speed is upper bounded by your thinking speed. You can see reading as thinking by proxy, whereby you reproduce an author’s thoughts in your mind. But if reading is merely a form of thinking, then it must be bounded by the same limits as thinking.
What are the limits of thinking speed? This is hard to determine, but I suggest to do an experiment: How fast can somebody talk for him still to be intelligible? Auctioneers talk at about 250 wpm. If you listen to a podcast at 3–4x the normal speed, you will still understand most of it. So, thinking speed is a multiple of normal talking speed (at about 150 wpm), but it is still within one order of magnitude.
There is no magical sauce that allows you to circumvent this barrier. People who claim to read faster than that are skimming the text rather than reading all of it (see fragmented reading below).
Regardless of your actual thinking speed, the ultimate goal is to get your reading speed as close to it as possible. Because that is all you can do anyway. Good reading is about removing barriers that keep you from reading at the speed of thought. Once you understand that, you understand speed reading.
Voice and visual reading
While learning and experimenting with reading, one insight was particularly shocking to me: There are two separate and very distinct ways how to read. Both of them are useful in different contextes.
The first one, voice reading, refers to a mode of reading where you pronounce words out loud in your head. It is as though you are both creating and parsing a conversation that takes place only in your mind. This is how most people read.
The second one is visual reading, where the meaning is directly extracted from the visual perception of written text. The words are not pronounced, but directly understood.
It is worth noting that the mechanics of reading as outlined above stay the same for both modes. The only thing that changes is the method you use to turn words into meaning. This is also why shifting between these two modes is so subtle.
Subvocalization is the internal speech typically produced when reading. You can characterize voice and visual reading by the degree to which they use subvocalization. Research suggests that it is impossible to shut it down entirely, but it can be significantly reduced, which allows you to do more visual and less voice reading. This ultimately results in a higher reading speed.
There may be concerns about whether visual reading really exists. I want to argue that not only does it exist, but it is the primary way in which we navigate daily life. When you enter an office and see a chair, you don’t say to yourself “This is a chair”, but recognize it as such and move on. If you see a sign in traffic, you don’t say to yourself “I have the right of way”, but you simply know that and then act on it. This means that you have been doing plenty of visual reading all along (just not with books)!
The problem with visual reading is that is not how we have been conditioned to read. Words — unlike traffic signs — are typically spoken out loud, so we tend to do the same with written words. This results in subvocalization. Fortunately, this conditioning can be untrained to a large extent.
A little caveat: Even when you are doing visual reading, it still may be that you hear a small voice in the background. But the words are all scrambled up and only part of the text is read. Subvocalization is so deeply conditioned in ourselves that it is quite hard to get rid of it entirely. However, as long as you are not focused on these voices as your source of meaning, it should be fine.
According to most speed reading literature, voice reading is an inferior mode of reading. Also, subvocalization needs to be suppressed at all costs. But this is a fatal misconception. It is a knee-jerk reaction to think that the faster method is automatically the better. In reality, there are many situations in which voice reading is the favorable method — or is even indispensable. I will outline some of them here.
The clearest example is poetry: It is a form of art whose expression of beauty is fundamentally dependent on the way that the verses are pronounced. You cannot speed read (here: visual read) poetry. It was only when I understood the distinction between voice reading and visual reading that I understood the appeal of poems. Until then, entering this world was impossible for me.
For humans, the meaning of text is intertwined with how it sounds. Powerful messages sound right, in that they produce exactly the right echo in the reader’s mind. This is why the choice of words matters so much. When you read visually, your focus is on the visual image of the words rather than the sound. This means that you miss the deep emotional effect it produces. For that to happen, you have to read slowly — to allow the sound of the words penetrate your soul.
If you are a writer and you want to assess the quality of a text you’ve written, you will be forced to use voice reading. It is the only way to truly feel a piece of text. It is only then that you know whether the text really creates the impression that you want it to have. However, if your goal is to simply recall the content of a longer text you’ve written, then visual reading is the better option.
If you edit a piece of text and use visual-reading, you won’t notice that a word is missing since you have trained yourself to look over grammatical constructs, as they don’t add much to the meaning of the text.
Comprehension is another issue. If you voice read a text, you typically pronounce every single word. Also, it is slower, giving your mind more time to process the information. Also, as stated before, meaning is intertwined with sound, so pronouncing words should produce a deeper meaning. Pronouncing the words also may aid in memory, as we typically retain better what we have heard (rather than merely understood).
So, voice reading is not inferior, but different. And consequently, it is used for different purposes.
How do you choose which one to use? A good rule of thumb is³: Visual reading is about information, voice reading about emotion. In visual reading, you are focused on the sheer informational value of the text. In voice reading, you are focused on how the text feels. If you only ever do visual reading, you deprive yourself of a lot of meaning and enjoyment for the sake of efficiency.
Full and fragmented reading
There is no need to read the entire text to understand it. This is another major insight that helped me improve my reading. This also motivates another distinction: full reading and fragmented reading.
Full reading is reading the entire text, word by word, without skipping or jumping over any parts of it. Most often, this is done using voice reading and by pronouncing every word. But visual reading is also possible. This is the appropriate mode for all situations that require full comprehension: technical documents, dense textbooks, problem descriptions to name a few.
The standard method that is described above is a form of full reading, as we required each word to appear once in a fixation area.
Fragmented reading takes a different approach: instead of reading the full text, you only read fragments of it. For this to be useful, these fragments have to be chosen strategically. One approach is to only read the first sentence of each paragraph. Another approach is to focus on the central words within a text, i.e. those that provide the most meaning (or bang for the buck). This means skipping grammatical constructs such as “a” and “the” entirely.
In terms of the terminology we introduced earlier, fragmented reading means to place fewer fixation points on the text than would be necessary for full reading. But as long as they are well placed, this is not a problem.
You may ask: If you don’t read the entire text, how are you supposed to understand it? I bet that if I gave you a collection of words, you would be able to derive the story behind them. Try it yourself: “mom”, “video games”, “education”, “son”, “autism”. Since we already have a lot of background information on each fragment and know how they are related, there is only one line of interpretation that makes sense.
The key here is to acknowledge that you are puzzling together pieces. This means that, to some degree, you are simply *guessing* what the text is about. Depending on the text and the amount of background information you have, this can work very well. But it is certainly a psychological barrier to be overcome, as we have been conditioned to read everything in full.
The more background information you have, the easier it is to attain higher speeds. In fiction in particular, there is a set of common tropes that get repeated across books. If you know them, you will understand what the book is about without reading all the details.
There is a trade-off between speed and accuracy⁴. If you want higher speed, you need to be prepared give up accuracy.
The transition between full and fragmented reading is fluid. You can always decide to use fewer or more fixation points, leading to a more fragmented or a fuller type of reading. The fact that we don’t have two strictly distinct types of reading is a testament to the flexibility of this framework. Also, it is often unclear whether you are reading in full at a very high speed or already doing fragmented reading. In practice, though, these distinctions are not important as long as you manage to adapt your reading mode to the material at hand.
A lot of speed reading literature glorifies fragmented reading as the one-true-way of reading. But to make this clear: In many cases, fragmented reading simply is completely inappropriate. Not reading a math problem in full is a terrible idea. You wouldn’t want to go to a doctor who reads your clinical history in a fragmented way. Nor would you do that with a dense technical document of a device you are trying to repair. Whenever a text is packed with information or if every bit of information is crucial, full reading should be used instead.
At this point, we can use our terminology to state what most books on speed reading actually mean by this term. It is very blurry because it is used in many different ways, without ever making the underlying distinctions clear. Sometimes it refers to fast reading in general, sometimes to intelligent skimming and sometimes to visual reading as opposed to voice reading. Using the above terminology, we can clear this up and hopefully never have to use that dreadful term again. The definition of most books most likely is this: speed reading = visual reading + fragmented reading + maximize fixation area. This is certainly faster than voice reading + full reading (or: slow reading), but, as we have seen, there are many situations in which it is not applicable!
How to read? It depends
If you adhere to a rigid system, your performance will suffer. Trying to read poetry using visual reading or applying fragmented reading on a doctor’s note will lead you nowhere. You have to read flexibly and be prepared to switch from one mode to the other instantly. In particular, you have to acknowledge that different reading tasks require different reading modes.
There are many factors that play a role: If you are sleepy, it is unlikely that you will be able to process information quickly — so, choose voice reading instead. If you are interested in a single piece of information, but are looking at a very long document, then full reading would be a waste of time and you should switch to (very) fragmented reading. If you want to deeply understand a quote, you should read it using full reading + voice reading.
The mechanics of reading should also be kept flexible. Say that you chose a method where you use two fixation points for each line (both slightly indented). What do you do on the last line of a paragraph that may only contain three words (the rest of the line is blank). Do you force yourself to place another fixation point on the blank space, just to enforce your method? Or do you break the rules and make a vertical saccade to the beginning of the next paragraph? The answer is: First of all, don’t use unnecessary fixation points only to follow some rule. Second, it depends on the text you are reading. There is no single valid way to transition from one paragraph to the next. But as of now, you should have the tools to figure this out for yourself⁵.
The optimal strategy will always be some mixture of all techniques. In many scenarios, a good strategy is the following: First skim through the text to get a rough idea of its content, identify the parts that interest you the most and then read them in full. This is a combination of fragmented reading and full reading.
Choosing the right reading mode is a matter of experience. After experimenting with different ways to read (and remember, they are all valid ways of reading), you should grow more and more confident in your ability to choose the right one for the job.
Since most of us are reading all the time, trying to improve it is a worthwhile investment. The problem is that there is a bewildering amount of material on this topic — and a lot of it is of dubious quality. Here, I tried to present a realistic and down-to-earth approach, as well as introduce some key distinctions that help to think about this topic.
This approach can be summarized like this: There is no one-size-fits-all solution. There is no magic trick that allows super-human reading speed. And that’s fine. After all, all we really need is to get rid of the barriers that keep us from reading at human-level speed!
We have only touched on the mechanical aspect of reading. There are more advanced questions such as choosing between different strategies to read a book. But with this framework, I am giving you the trunk of the tree. It is up to you to fill up the branches.
Why does reading have to be so complicated? Why can’t we simply read naturally? Unfortunately, our brains were not designed to read. And since this ability is not built-in, we need to use what we have to work around that. The concepts presented here make reading slightly more complicated, but they also help remove the barriers that keep you from proper reading. Which, in turn, makes reading easier! And after gaining some experience with them, they will feel natural, too.
The first time I stumbled upon speed reading was in high school. But the materials I studied contained too much mumbo-jumbo for my taste. Also, trying their techniques rapidly reduced my rate of comprehension, which was very discouraging. I put the topic aside and came back to it only many years later. After some reflection, I finally understood the key ideas behind good reading and started applying them. This was the first time that I was truly satisfied with how I read.
I hope the ideas in this guide are as useful to you as they were to me.
: I am stating such obvious facts because many books I have read on this topic don’t.
: This is analogous to the vim or emacs editors. Like proper reading, it takes some effort to learn them. But eventually, they help you to get rid of mechanical tasks (such as moving the cursor) that interrupt the writing process. Once you are are proficient in them, they allow you to type at the speed of thought.
: This may also be a gross oversimplification since voice reading is helpful for full comprehension, which is clearly about information.
: I intentionally refer to accuracy and not comprehension, as it is possible to comprehend something that is inaccurate.
: I am aware that this is very technical, but in my experience, it is these small technical details that can ruin one’s reading experience. | <urn:uuid:d9f5045f-b17f-4a64-8bf4-f2859b90ac73> | CC-MAIN-2021-21 | https://medium.com/swlh/a-framework-for-speed-reading-6c9a999df226?source=post_internal_links---------1---------------------------- | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00457.warc.gz | en | 0.963926 | 4,222 | 2.953125 | 3 |
Leading caregiving expert Teepa Snow has studied how brain function changes after a dementia diagnosis and how understanding these changes can improve the ways families care for their loved ones.
- Snow often uses roleplaying to illustrate what a person with dementia is experiencing at each stage
- She suggests that understanding what is happening to your loved one’s brain and what they are thinking will improve the way you communicate
Being Patient spoke with Snow about caregiving techniques families can use to de-stress and develop stronger bonds with their loved ones.
Being Patient: You’ve coined the term Positive Approach® to Care. What does that mean?
Teepa Snow: People’s brains are dying. I can’t stop it! I can’t even halt it and in reality, I can’t slow down the death of brain cells—I mean I can’t personally. What I can do is help you use what you have left at any moment in time. So, if you have something left—I feel like it’s my job to work with you. I will not ask you for what you don’t have at that moment, whether it’s chemical or structural.
You may say to me, “Listen, I don’t know who you are.” “I’m your daughter. Who do you think I am?” is a natural reaction to that. Often, individuals feel like, how could you not know who I am? You may think, “Wow, you’ve lost who I am to you.” If someone says, “I don’t know who you are,” rather than reacting to it out loud, in your head think, “Wow, they’re lost in place and time so they’re not sure of me.” You can ask, “Do I look familiar or do I not look familiar to you?” They may say, “Well, you look familiar.” Think, “Well, that’s good.” You can ask, “Do I seem friendly?” That’s really important. They might say, “Well, yes, you seem friendly.” Now, you’re on two good things!
Ask, “Do you recognize my voice at all?” They might say, “Well, yeah!” Then add, “I’m somebody who loves you. My name is Teepa.” The person may say, “Oh, well that’s odd because my daughter’s name is Teepa, too.” What I have to understand if I’m really going to be with you and where you are in this moment is the idea that you’re doing this on purpose. In your mind, you see a keeper and it’s not me. I’m also named Teepa. But, I’m not that person yet because you can’t get to the memory that links me to me. So, I could say, “Wow that is interesting! Now let me ask you something—is she a good kid or does she get in trouble?” The person may say, “Oh, she’s great!” They still haven’t found me, but I must be willing to stay with the person and say, “That’s interesting! Is there anything that I can help you with while I’m here?” Let go of the idea that at this moment, the person is going to know who you are. It hurts, but to that person, it isn’t really important. In this moment, it’s really important the person understands who you are. You have to think, “Did I really come so that I can be with her for a little bit, even if I have to face the fact that she can’t figure out who I am? Can I let that part go and still be with them?” That’s a PAC.
Being Patient: Sometimes, even though a person has dementia and their memory is getting worse, their emotional intelligence appears to be right on cue. Your technique illustrates that people with Alzheimer’s can still feel emotions. If you were to aggressively ask, “You don’t remember me? I’m your daughter!” that would hurt the person, right?
Teepa Snow: Yes, they’d think, “You’re not my daughter! My daughter would ever act like that!” What’s happening is that your behavior is atypical for that person and it is because you’re so blown out of the water that they don’t know who you are. Here’s a reality check. You’ll need a timeout at some point. You’ll need somebody to go talk to about what the person just did: forgetting who you are. Your loved one didn’t do it on purpose, but at the same time, you’re not going to be able to let it go without processing that. Your brain took it and dealt with it at the moment because you get how important it is to be there for the person. But reacting won’t help them because even though you’re hurt, they’ll get hurt too, and it just makes it harder to do the things we need to do with each other. At that moment, the person is not going to want to take a shower with someone they don’t like, even if it needs to get done.
Being Patient: How much do we know about the emotional capability of a person who has Alzheimer’s? While it differs according to what stage they are in, what does research show about how much of a person’s emotional capabilities stay present? How much can they feel hurt or loved throughout the course of this disease?
Teepa Snow: Now, this is where it gets a little tricky because it also depends on the type of dementia. In other words, what parts of the brain are being attacked? For instance, if a person has frontotemporal dementia—where the front of the brain and the temporal lobe are being attacked, they have a higher risk of losing the ability to have emotional connectedness because that’s mediated by the prefrontal cortex. So, I can’t show you what I’m feeling because I’ve lost a way of doing that.
I could say to you, “Well, I’m very sad about that idea that you don’t know me.” They may have difficulty communicating that they don’t know you. You can see that they are confused, but it’s hard to see the level of distress because they lost the ability to express it. Or in some dementias, like frontotemporal, I might not feel it. I’m not feeling emotionally the same way I would have because the part of my brain that allows me to feel emotions is actually being flattened. That flat affect is part of my dementia. But for Alzheimer’s, that’s not the case. They’ll say, “I don’t know why you’re so mad at me! Why are you mad?” And you’ll say, “Well, I’m not mad, Mom, I’m just frustrated.” They might add, “Well, you look mad—look at your eyes, look at how they’re all squinty at me.” Those affected by Alzheimer’s are very affected by visuals. They often have tunnel vision, making it difficult to see the big picture.
Being Patient: What is the impact of saying, “Oh, you really don’t remember that?” to someone with Alzheimer’s?
Teepa Snow: The part of the brain that’s damaged is the hippocampus—the hippocampus is the part that takes a piece of data, locks it in and then has it. So when they say to you, “What time is my doctor’s appointment?” and you respond, “four o’clock today,” they may add, “Oh, well I wish you’d said something about it!” You may say, “It’s on the calendar, Mom. We just had this conversation.”
That’s your cue to realize that if you’re going to be helpful, your loved one is missing a file cabinet in their brain that’s marked “What’s Happening Today.” And even if you went over it multiple times, you put it in your file cabinet again, you’ve been in and out of the file cabinet multiple times, your loved one has no file cabinet and can’t hold on to that memory. They just don’t have it! You and I use what’s called working memory and immediate recall. But that’s the part that they’re missing. They don’t have working memory; it’s getting shorter. Your loved one used to be able to remember five to eight—and then it was five—and then it was three—and now it’s just, there’s something about an appointment. You should confirm the receipt of the information by saying, “Oh, you’re wanting to know what time your appointment is?” You’ve got to slow it down because it pauses your loved one—it gives them a chance to realize you’re asking this question. You should also hold up four fingers when you say “four o’clock” because it gives your loved one a very strong visual cue. Put your hand up to your face, hold up four fingers and show it to her.
If we’ve been over it five or six times, then either she’s not got the file cabinet to put it in or she’s a little distressed about this thing we’re doing and that’s making it really hard for her brain to hold on to the details. Because this doctor’s appointment has her a little revved up and that says to you—know that I’m not getting anything in that cabinet until this appointment is over. I mean, it’s just not going to stick. So, what you should do is interject, “Hey Mom, I have a favor to ask!” Now, what happens when I say that with energy, what happened to you? Your whole brain went, “Yes?” because you’re ready to catch the ball.
You may say, “Would you do me a favor? I’m trying to remember back when I was a kid and you took us to the zoo. Do you remember which animals I seemed to like the best? That transitions your loved one from immediate recall to long-term memory, and stories your loved one told you about going to the zoo and the animals you liked. You have to use your memory of you and your loved one to find another place you can go and be together. This way, your loved one gets to be smart. Saying, “Oh, you don’t remember?” is a terrible reminder to your loved one they have this brain disease, and it’s anxiety-provoking for them. It’s like, “I can’t trust my brain. Oh crap, you told me that already. Why can’t I hold on to it?” So ask them about long-term memories they can hold on to.
Being Patient: Often, people with Alzheimer’s may become distressed when someone else is driving a car. Is that caused by their vision becoming distorted, and what is being a passenger in a car like for them?
Teepa Snow: If your loved one used to drive the car and cannot any longer, she’s been disempowered with her hands, foot and judgment, but she’s still in a front seat. So, at this point, her brain is damaged enough that visual data is coming in, but her view of the world has gone from taking in large amounts of data to small amounts of data. So, stuff is streaming by her really quickly and if she looks at it, then she loses it. So, she then looks at the front; it’s like stoplights and she’s thinking, “Oh no, it’s coming really fast” because she got distracted for a few seconds. If she tries to isolate one object or one thing, then her brain goes, “Oh no, but you’re supposed to pay attention to the front.” Because that’s the rule and she doesn’t have anything to control; there’s nothing. And she’s panicking. She’s always been in control, so for her to look away from the front to you, she’s thinking, “Oh my God, I’m going to lose it!” She doesn’t have a way to stop anything from happening other than yelling. What is being triggered is the primitive brain’s desire to be safe.
Being Patient: What is the impact of employing the same strategies over and over again on caregivers?
Teepa Snow: Well now all of a sudden, caregivers are empowered, too, because they can change the direction. So rather than answering that question 16 times after the first five, caregivers now have a way to move themselves and their loved one out of that place. The idea that caregivers have ways to chill their loved ones down empowers them because otherwise, their loved one’s emotions are coming at them.
The caregiver’s brain may become triggered by their loved one’s distressed brain, but yelling doesn’t help because then the two of them end up in distress. Instead, caregivers can take their loved one’s hand and give it a little pump because it releases oxytocin. They can say, “You know I love you, right?” Then, they can put some music on that their loved one likes to sing along with and sing together. Their loved one’s brain will go from visual input to auditory input. All of a sudden both of them will become less stressed out.
Being Patient: How should caregivers deal with the early stages of Alzheimer’s when they may feel like they need to start acting like a parent to their parent?
Teepa Snow: Well, one of the things I think we should do is to establish baselines on ability. We need to make that a conversation so that we can start checking things regularly without having to go to some official person. For example, you can use animal fluency as a quick check. You can say, “Hey Mom, let’s name as many animals as we can think of and I’ll keep track of the time so we can record it. How many did you get? Let’s listen! Wow, 50! Well, Mom, you are amazing. Let me try.” We each do it and then six months later, we do it again. What we shouldn’t see is a huge shift in numbers—we shouldn’t see a decrease.
If you do notice a change, you may need to have a critical conversation with your loved one. At first, you might’ve thought, “I think those lapses in memory are pretty normal, ”or maybe not normal, but you know it’s not a problem yet. But if we haven’t opened the door for that kind of, “Mom, what do you think about how your brain has been doing lately? Do you think it’s still great? You’re not sure? Or are you worried?” conversation, it may become difficult later. If you say, “Let’s do our animals again” and notice that this time, instead of listing animals, your loved one says, “Why are you asking me to do this?” you may need to go see a doctor. Your loved one may get defensive or be hiding something. You may say, “So, Mom, you think you should go see a doctor? We shouldn’t do this anymore? Wow, this is really different for me. You know to tell me about what’s going on for you because I’m concerned that our relationship has shifted a little bit because you’re seeing me as getting bossy. I didn’t mean to be bossy. I was trying to do the things we’ve done before.” Then your loved one may add, “I know, but I don’t want to do it anymore.” You can say, “OK, you don’t want to do it. Talk to me about that.” So, in other words, I think we skip the step of talking about the first shifts when your loved one either does or doesn’t have awareness of the changes in their brain, and that’s in the prefrontal cortex. Either, your loved one does have awareness of their shift and is anxious because they notice it and want to talk about it, or they don’t.
Being Patient: What is the best way to handle a loved one with dementia becoming paranoid that someone is in their home while they are sleeping or stealing items?
Teepa Snow: Say, “They stole your clothes? Well, that’s not good!” Notice, the first thing I did was validating your concern. The first thing I need to do is hear the words and the concern you’re expressing. Say, “Wow! Somebody came in the house and stole things? Well, that’s not okay. Tell me a little more about that.” So, the next phrase is “tell me more about it” because I actually want them to let out fear or the anger or whatever it is. I need to hear a little more of what their brain thinks happened. I want to know how far away I am from what my loved one is thinking in that moment.
Say, “So, you think somebody came in and took all these clothes out of the closet and put them in suitcases? Wow, that is really weird that somebody would do that. I wonder what they were thinking about taking everything and putting it in the suitcases? Well, Mom, I am so sorry that happened. That should have not happened. I’m going to hang these clothes back up.” You might add, “Let me see what I can do about it. Could you give me a hanger? Great! Here, let’s get these hung back up now.” What I didn’t do is say, “Mom, let’s be real, you’re the one who took them. I don’t know why you’re taking everything out of the closet. I want you to explain that to me. Why are you taking things out and putting them in suitcases? Don’t do that!” I’m letting go of that idea because her brain doesn’t remember packing those clothes, hiding those clothes, taking those clothes and putting them somewhere. Her brain thinks clothes have been stolen because she thinks she’s in another place and time. She’s in a different part of her life, where I had different clothes in the closet, and somebody came in here and replaced all my good clothes with these clothes. Her brain actually thinks somebody’s trying to trick her.
Being Patient: If you had a camera installed and you could prove that nobody entered the house, is that good visual reinforcement or would that be upsetting?
Teepa Snow: It’s history and history you can try. But then what will tend to happen is, “So, nobody came in here? Why did I think someone came in here? What’s wrong with me? Why am I thinking this stuff? Am I losing it? Are you going to put me somewhere because there were not people in my house?” Because some people with Alzheimer’s do have awareness. When I realize I cannot actually remember something, then I see it on camera it’s like, “How could I have done that and not remembered it? I can’t trust myself.” Now, if the person can’t trust their own brain, they think, “Whose brain can I trust?”
Being Patient: How can you help a loved one who is nonverbal and seems sad and distressed find some peace?
Teepa Snow: Let’s go over to the side of the brain that still has auditory input, but it won’t be language. So, I want to start where she is. If she’s making sounds like, “Oh, oh,” then the first thing I want to do is take her hand in a hold that’s called hand-under-hand. It’s really powerful because it takes it and puts pressure on it. Our hands are both free, but we’ve clasped our hands and it causes stress reduction. You might mimic the sounds, positively singing, “Oh, oh, oh Mom.” Now it really helps to know some of the songs that gave her comfort and rhythms that were comfortable and comforting to her. So, for my mom, it would have been, “Oh, oh, oh, dear what could the matter be?” And I pick up the rhythm. My mom literally would open her eyes, smile and start humming along. Even though she was “nonverbal,” I had to find the right way to open the window because the door didn’t work anymore. Finding my way in requires both patience and assurance. I can come along, pick up a rhythm and then transform it into something that’s upbeat up energy, rather than downbeat down energy. But I have to start with whatever she gives me, and I’ve got to be willing to be where she is to help her come out of that place where she’s stuck.
Being Patient: What do you think about the permanent treatment of dementia patients with neuroleptics?
Teepa Snow: I think what we have done is create prisons for people and we can either chemically restrain them or physically restrain them. Sometimes we do both, because quite honestly there are a lot of locations that claim to be dementia care facilities or organizations, but they actually don’t do much training. They don’t have much skill and they don’t employ much skill. I think we as a society or a nation need to say, “You know what? People living with dementia deserve better. They actually deserve to be cared for by people who have been trained to do this thing called dementia care.” It’s not rocket science, but there is a science and an art to it. People deserve better than what they’re getting.
We often use medications to dull people down so that they feel less and respond less, so that they react less. So, we can get away with stuff that we shouldn’t be doing. Because you or I wouldn’t tolerate it. But, because someone has dementia we indicate, “It’s because they have dementia, they act like that.” If I just walked up to you and started to pull your dress up saying, “I’m going to help you get changed,” you’d feel embarrassed and try to push the person away or your dress back down. And then I’d tug your dress up and you’d pull your dress down. And then I’d call a second person to hold you so that I can get this done. In any other situation, that’s an unacceptable way to treat a human being. Except when people have dementia, we allow these things because, well, “I can’t get her to do it.” I get that, but there is a way to get her to do this that does not require her to feel like we’re abusing her.
If I have my hands with your hands and we’re both pulling it up, your brain goes, “Oh, pull my dress up. Well sure, I know how to do that.” Because I’m not right in front of you blocking you, it doesn’t feel like an attack. Instead, it feels like you’re doing it and I’m here to support you with additional help. Because they have ocular vision, they can’t take in this much data. Then, I’ve eliminated the threat and the person can do it. So, there’s a lot of potentials; we just aren’t realizing potential.
Being Patient: What should caregivers do to maintain their own health?
Teepa Snow: I try to do things in sets of five because people can hold on to five things. The first is, you’ll need enough sleep. If you’re not getting decent sleep, not getting enough sleep or you’re finding your sleep is being interrupted, that’s a very high risk for emotional distress. Getting sleep really matters. The second is don’t be a lone ranger. We have to get out of this lone ranger business, and we’ve got to form connections. We’ve got to reach out and find people who will support us. I’m going to tell you that out of every five families, four of them fall apart before this disease is over. Very frequently, the people you think are going to be your supports are not your supports. So, you’re going to need to reach out for somebody who does support you. You need support, you need somebody to go to. When I said not to react to me when I say, “I don’t know who you are? What are you doing here?” you can respond to me, but then you need a place to go and grieve the loss of your mother. You deserve that support. You’ve got to get it so when you come back to me, you’re not carrying the baggage of that loss of me because I’m right here.
The third is that you’ve got to look at what’s stressing you out. Either learn some new things that are helpful or let go of some things that you’re not able to do and ship those off to somebody else. Let go of things you’re not feeling particularly skilled at.
Then focus on what you’re eating and drinking. Because invariably, we start to shortchange ourselves and we get empty and we eat for comfort. Or we don’t eat at all and we drink because, I’ve got to stay awake, so I’m drinking more caffeine. Or, I’m drinking something with sugar in it. I’m really getting into bad habits of eating stuff that’s not good for me.
The last is I need to get out and exercise. I don’t mean a hardcore wear myself out, but I need to get out and see things that give me pleasure. Hear things that give me joy. I need to give myself permission to get recharged. And it needs to be in an active way. Not in a passive way. Yes, that’s fine, rest a little bit, but we need to get out and get moving. We need to engage in a way that gives us that boost that brings us back energized for the work you do. | <urn:uuid:828c9232-6a4a-44ef-bb5f-28c0a1c4f898> | CC-MAIN-2021-21 | https://www.beingpatient.com/teepa-snow-how-to-communicate/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00414.warc.gz | en | 0.968157 | 5,933 | 2.65625 | 3 |
Sclerotic jaw lesions are not rare and are frequently encountered on radiographs and computed tomography (CT). However, these lesions are often underreported, mainly because the subject is not well known to general radiologists who struggle with the imaging approach and disease entities.
In this paper, an overview is given of the most important types of lesions. Emphasis is on the distinction between odontogenic and non-odontogenic lesions. When a sclerotic lesion is observed in the jaw, attention should first be focused on whether and how the lesion is related to a tooth [1, 2, 3, 4, 5]. If a lesion is intimately associated with a tooth, the lesion is most likely odontogenic, and the number of possible diagnoses is limited. Moreover, the specific location of the lesion (e.g., periapical or pericoronal) may suggest a specific diagnosis. Lesions that are clearly not tooth related, usually indicate a lesion of osseous origin.
The following conditions are discussed in this review: (1) odontogenic sclerotic lesions (cemento-osseous dysplasia, condensing osteitis, odontoma, cementoblastoma, cemento-ossifying fibroma), (2) non-odontogenic sclerotic lesions (idiopathic osteosclerosis, torus mandibularis, fibrous dysplasia, osteoma/osteoid osteoma, metastases), and (3) mixed lytic-sclerotic lesions (osteoradionecrosis, biphosphonate-related osteonecrosis, osteomyelitis).
Cemento-osseous dysplasia (COD) is the most common fibro-osseous lesion found in the jaw. It represents a group of benign lesions of unknown etiology characterized by the substitution of normal bone by fibrous tissue with newly formed mineralized structures [6, 7]. Three subtypes have been described: (1) focal, (2) periapical, and (3) florid. Except for the focal variant, which is more common in white women, most lesions occur in black women and women of Asian descent, usually in the fourth or fifth decade of life.
The lesions typically do not cause any symptoms. In the early stages of the disease, a vascular fibrous stroma with osteoid and some basophilic cementoid structures can be observed. In the later stages, the stroma becomes more fibrotic. A more distinct osteoid trabeculae formation is observed with the presence of thicker curvilinear bony trabeculae, and a possible occurrence of prominent cementoid masses.
Radiographically, all three subtypes appear initially lytic and show progression to a heterogenous sclerotic lesion, often with a radiolucent rim. The radiolucent rim may in turn be surrounded by a thin rim of sclerosis, giving the lesion a quite specific appearance (Figure 1). Two additional characteristics that should be noted are: (1) the lesions are typically associated with a vital-nonrestored tooth, and (2) unlike cementoblastomas, cemento-osseous dysplasia lesions do not fuse directly to the tooth root (Figure 1). Focal COD is typically located in the posterior quadrant of the jaw. Periapical COD is seen adjacent to a tooth-bearing area associated with one or more vital mandibular anterior teeth (Figure 1a, b). In florid COD, multiple lesions are observed that involve the mandible and/or maxilla bilaterally (Figure 1c, d). The differential diagnosis includes fibrous dysplasia, Gardner’s syndrome, and chronic osteomyelitis (see below). Asymptomatic COD needs no surgical treatment.
Condensing osteitis (also known as focal sclerosing osteomyelitis) represents an asymptomatic change in osseous structure presumed to be the response to a long-standing and low-grade inflammatory stimulus from an inflamed or necrotic pulp. Histologically, dense layers of compact bone are replacing fibrofatty bone marrow and cancellous bone. On imaging, condensing osteitis is seen as a periapical, poorly marginated, nonexpansile, sclerotic lesion in the posterior mandible at the apices of the premolar or molar teeth (Figure 2), often associated with a carious tooth or with antecedents of root canal therapy, periodontal disease, or tooth extraction .
Odontomas are the most common odontogenic “tumors” and actually represent a developmental-anomaly (hamartoma) [5, 8]. They are most commonly seen in children and adolescents and may obstruct tooth eruption. Approximately one half of all odontomas are associated with an impacted tooth. According to the WHO 2017 Classification of Odontogenic Tumors, odontomas belong to the category of benign odontogenetic tumors, mixed epithelial and mesenchymal subtype [8, 9]. They are composed of the tissues native to teeth: enamel, dentin, cementum, and pulp tissue. Odontomas are characterized by slow growth and non-aggressive behavior. Usually they are seen as an incidental radiographic finding. Subtypes of odontomas are compound odontomas and complex odontomas. Compound odontomas consist of multiple small toothlike structures called denticles and most commonly arise in the anterior maxilla. Complex odontomas appear as an amorphous hyperattenuating conglomerate mass of enamel and dentin most commonly in the molar regions the jaws. They are typically pericoronal, sharply marginated, and sclerotic, with a low-attenuation halo (Figure 3). The pericoronal rather than periapical position allows differentiation of odontomas from cementoblastomas. Treatment of complex odontomas consists of surgical excision.
Cementoblastoma is a rare benign odontogenic tumor of mesenchymal-ectomesenchymal origin with a typical periapical location . This tumor typically occurs in patients under 25 years of age and may cause pain and or swelling of the alveolar ridges. According to the WHO 2017 Classification of Odontogenic Tumors, they belong to the category of benign odontogenetic tumors, mesenchymal subtype [8, 9]. Histologically, cementoblastoma is characterized by masses of hypocellular cementum embedded in a fibrovascular stroma. At the periphery of the lesion, there is a rim of connective tissue and commonly radiating columns of cellular unmineralized tissue that accounts for the radiographic-radiolucent zone. 90% of cementoblastomas develop in the molar or premolar mandibular region. On imaging, cementoblastomas appear as a periapical, sclerotic, sharply marginated lesion with a low-attenuation halo. They directly fuse to the root of the tooth (Figure 4), which is an important differential diagnostic feature not seen in other periapical lesions such as cemento-osseous dysplasia and condensing osteitis. Additional radiographic features include root resorption, loss of the root outline, invasion of the root canal, bony expansion, displacement and involvement of adjacent teeth, cortical erosion, and obliteration of the periodontal-ligament space. Surgical excision and related tooth extraction are warranted.
Cemento-ossifying fibroma is a benign slow-growing fibro-osseous tumor. These tumors are most often seen in women between the third and fourth decades of life and are thought to arise from the periodontal ligament. The growth of the tumor over time may lead to facial asymmetry and the mass causing discomfort or mandibular expansion [11, 12]. According to the WHO 2017 Classification of Odontogenic Tumors, they belong to the category of benign odontogenetic tumors, mesenchymal subtype [8, 9]. Cemento-ossifying fibromas are composed of varying amounts of cementum, bone, and fibrous tissue. Early stages are radiolucent while lesions containing large amounts of cementum or bone exhibit markedly increased density, typically with the presence of sclerotic components within an expansile lytic lesion (Figure 5). The mandibular canal may be displaced inferiorly, whereas in fibrous dysplasia the canal may be displaced in any direction [13, 14]. Therapy consists of curettage and enucleation.
Idiopathic osteosclerosis is an asymptomatic, nonexpansile, localized radiopaque lesion observed in the alveolar process in posterior regions without any obvious etiological agent . Being asymptomatic, it is not associated with inflammation and may remain static or demonstrate slow growth that usually stops when the patient reaches skeletal maturity. Histologically, the lesions consist of dense lamellar bone. Imaging features consist of a well-defined radiopaque, round, elliptical or spiculated mass, usually located in the alveolar process of posterior regions (Figure 6). A low-attenuation rim is not seen. No treatment is required.
Dense bone islands, or enostomas, are common incidental findings, consisting of failure of resorption of secondary spongiosa within the trabecular bone [14, 16]. Unlike idiopathic osteosclerosis, a bone island has no specific relationship with the dentition. A typical dense bone island has spiculated margins. It is interesting to note that CT attenuation measurements can be useful to distinguish untreated osteoblastic metastases from enostomas: A mean attenuation of 885 HU and a maximum attenuation of 1,060 HU provide reliable thresholds below which a metastatic lesion is the favored diagnosis .
Tori are protuberances of dense cortical bone that most commonly arise in adults. Occasionally, they contain a small amount of marrow. They are characterized by slow growth that usually arrests spontaneously.
A torus palatinus is a similar nodular bony overgrowth at the midline of the hard palate .
A torus maxillaris arises at the lingual surface of the posterior maxilla .
Fibrous dysplasia is a benign developmental disorder characterized by a dysplastic process of altered osteogenesis with subsequent substitution of normal bone by fibrous tissue that undergoes abnormal mineralization. It typically occurs in the first to third decades of life and is clinically characterized as an asymptomatic bone swelling. The maxilla is involved more frequently than the mandible. Fibrous dysplasia is typically unilateral and may occur within a single bone (monostotic form) or, more often in multiple bones (polyostotic form). Typically, the lesion is radiopaque with ground glass appearance (Figure 8) [17, 18]. Its cortex remains intact and is often thickened and sclerotic.
Osteoma is a benign neoplastic lesion, characterized by persistent slow growth. It is composed of mature bone structures with characteristics of cancellous or compact bone.
Osteomas most commonly arise in the craniofacial bones. The most common location in the jaw is the posterior mandibular body or condyle. Multiple osteomas may be associated with Gardner syndrome. Radiologically, osteomas present as a well-defined round radiopaque mass with no radiolucent halo . Osteomas can be either solitary or multiple as found in the Gardner Syndrome, where they are associated with soft tissue tumors, gastro-intestinal polyps and multiple surnumerary impacted teeth.
Osteoid osteoma is very rare in the jaw. It is often painful, most pronounced at night, and mitigated by treatment with salicylates. It presents as a lytic lesion with a radiopaque central nidus surrounded by a sclerotic bony margin .
Osteochondromas are rarely seen at the craniofacial bones. These tumors have a predilection for the coronoid and condylar processes of the mandible that may exhibit a focal mushroom-shaped enlargement. Larger lesions may cause a decreased mouth opening. Continuity of the cortex and the medullary bone of the lesion and the underlying host bone is a characteristic imaging finding .
The jaws and mouth are uncommon sites for metastatic dissemination. Nevertheless, the incidence of metastatic tumors to the jaws is probably higher than suggested; micrometastatic foci in the jaws were found in 16% of autopsied carcinoma cases despite the absence of radiologic findings . All types of tumors may metastasize to the mandible. Tumors that prefer the jawbone as their metastatic target include prostatic cancer, breast cancer, adrenal cancer, and thyroid cancer. Sclerotic mandibular metastases may be either ill- or well-defined as is the case elsewhere in the body (Figure 9).
Osteosarcoma of the jawbones account for up to 10% of all osteosarcomas. They tend to occur in an older age group usually between 30 and 39 years and have an overall better prognosis than conventional osteosarcoma. Early-stage osteosarcoma may be osteolytic. More advanced lesions are typically osteoblastic or more commonly mixed on radiographs and CBCT due to the presence of mineralized osteoid matrix. Typical features are poorly defined margins, cortex destruction, and aggressive periosteal reaction .
Mandibular osteoradionecrosis (ORN) is a serious complication of radiation therapy for neoplasms of the head and neck area. It has a widely varying reported incidence of 5% to 22%. Bone necrosis may occur secondary to hypoxia, hypovascularity, hypocellularity, and fibrosis.
The diagnosis of mandibular ORN is primarily based on clinical symptoms in the appropriate therapeutic setting. The mandible is involved more frequently than the maxilla, probably because of its less robust blood supply. The buccal cortex is more vulnerable than the lingual cortex, although the mandibular body is most commonly affected. At imaging, osteoradionecrosis appears as an area of marked sclerosis with a loss of trabeculation in spongiosa, and cortical interruptions (Figure 10a, b) . Other optional imaging features include bone fragmentation or sequestration and areas of gas attenuation in bone with poorly marginated adjacent fluid collections or areas of soft-tissue attenuation. The osseous changes may be associated with significant soft-tissue thickening.
Bisphosphonate-related osteonecrosis of the jaw (BRONJ) is associated with the use of oral or intravenous bisphosphonates to treat various bone conditions such as osteoporosis, multiple myeloma, metastasis, and Paget disease. Bisphosphonates inhibit endothelial proliferation and interrupt intraosseous circulation. Osteonecrosis may be spontaneous or may be precipitated by a traumatic procedure such as tooth extraction or dental surgery. BRONJ should be considered in patients undergoing bisphosphonate therapy with findings of bone necrosis and no history of radiation therapy. BRONJ is typically painful, but some patients may be asymptomatic. Imaging features are comparable to those seen in mandibular osteoradionecrosis (Figure 1c, d) . According to Obinata et al. , BRONJ typically presents with more pronounced osteosclerosis. On the other hand, osteolysis and spreading of soft tissue inflammation around the jaws may be more pronounced in osteoradionecrosis.
Osteomyelitis is much more common in the mandible than the maxilla, which is involved in only 1%–6% of cases because of its rich blood supply. Most patients with mandibular osteomyelitis have a history of antecedent dental caries or dental extractions. Other causes of osteomyelitis include dental or mandibular fractures, osteoradionecrosis and rarely, hematogenous spread of infection. Chronic osteomyelitis, which is characterized by a duration longer than one month, may be complicated by sinuses, fistulae, osseous sequestra, or pathologic fractures. Imaging findings of mandibular osteomyelitis include cortical interruption, sclerotic sequestra in low-attenuation zones, periosteal new bone formation, and areas of gas attenuation (Figure 10e, f) . In chronic sclerosing osteomyelitis, periosteal new bone formation may be striking.
Imaging has an important role in detection of lesions of the jaws. For characterization purposes, a systematic analysis is mandatory.
When a sclerotic lesion is observed in the jaw, attention should first be focused on whether and how the lesion is related to a tooth. If a lesion is intimately associated with a tooth, the lesion is most likely odontogenic and the number of possible diagnoses is limited. Moreover, the specific location of the lesion (e.g., periapical or pericoronal) may suggest a specific diagnosis. Cementoblastoma for instance, invariably has a periapical location, while complex odontoma has similar morphological features, and it is typically seen in a pericoronal position.
Besides density and relationship with dentition, other patient and imaging related features may assist in formulating a differential diagnosis or specific diagnosis. These features include the patient’s age, sex, and race, location of the lesion in the jaw (mandible versus maxilla, anterior versus posterior quadrant), and imaging features such as demarcation, cortical involvement, periosteal reaction, and soft tissue changes.
In some cases, lesion characterization may be obtained by imaging alone, particularly in benign lesions.
Even in cases where such characterization is not possible based on imaging features, the radiologist plays an important role and has to determine whether the lesion can be regarded as a “do not touch lesion” or be referred for biopsy.
The authors have no competing interests to declare.
Curé J, Vattoth S, Shah R. Radiopaque jaw lesions: An approach to the differential diagnosis. RadioGraphics. 2012; 32: 1909–1925. DOI: https://doi.org/10.1148/rg.327125003
Silva B, Bueno M, Yamamoto-Silva F, Gomez R, Peters O, Estrala C. Differential diagnosis and clinical management of periapical radiopaque-hyperdense jaw lesions. Braz. Oral Res. 2017; 31: 1–21. DOI: https://doi.org/10.1590/1807-3107bor-2017.vol31.0052
Dunfee B, Sakai O, Pistey R, Gohel A. Radiologic and pathologic characteristics of benign and malignant lesions of the mandible. RadioGraphics. 2006; 26: 1751–1768. DOI: https://doi.org/10.1148/rg.266055189
Siozopoulou V, Vanhoenacker FM. World Health Organization classification of odontogenic tumors and imaging approach of jaw lesions. Semin Musculoskelet Radiol. 2020; 24(5): 535–548. DOI: https://doi.org/10.1055/s-0040-1710357
Wright JM, Vered M. Update from the 4th edition of the World Health Organization classification of head and neck tumours: Odontogenic and maxillofacial bone tumors. Head Neck Pathol. 2017; 11(1): 68–77. DOI: https://doi.org/10.1007/s12105-017-0794-1
Huber A, Folk G. Cementoblastoma. Head and Neck Pathol. 2009; 3: 133–135. DOI: https://doi.org/10.1007/s12105-008-0099-5
Ram R, Singhal A, Singhal P. Cemento-ossifying fibroma. Contemp Clin Dent. 2012; 3(1): 83–85. DOI: https://doi.org/10.4103/0976-237X.94553
MacDonald-Jankowski D. Ossifying fibroma: A systematic review. Dentomaxillofacial Radiology. 2009; 38: 495–513. DOI: https://doi.org/10.1259/dmfr/70933621
Sontakke SA, Karjodkar FR, Umarji HR. Computed tomographic features of fibrous dysplasia of maxillofacial region. Imaging Sci Dent. 2011; 41: 23–28. DOI: https://doi.org/10.5624/isd.2011.41.1.23
Vanhoenacker FM, Bosmans F, Vanhoenacker C, Bernaerts A. Tumor and tumor-like conditions of the jaws: Imaging of mixed and radiopaque jaw lesions. Semin Musculoskelet Radiol. 2020; 24(5): 558–569. DOI: https://doi.org/10.1055/s-0039-3402766
Sismana Y, Ertasb E, Ertasc H, Sekerci A. The frequency and distribution of idiopathic osteosclerosis of the jaw. European Journal of Dentistry. 2011; 5: 409–414. DOI: https://doi.org/10.1055/s-0039-1698913
Eversole R, Su L, El Mofty S. Benign fibro-osseous lesions of the craniofacial complex: A review. Head and Neck Pathol. 2008; 2: 177–202. DOI: https://doi.org/10.1007/s12105-008-0057-2
MacDonald-Jankowski D. Fibrous dysplasia: A systematic review. Dentomaxillofacial Radiology. 2009; 38: 196–215. DOI: https://doi.org/10.1259/dmfr/16645318
Kumar S, Chandran S, Menon V, Mony V. Osteoma of mandible: A rare existence. J Dent Oro Surg. 2015; 1: 103. DOI: https://doi.org/10.19104/jdos.2015.103
Singh A, Solomon MC. Osteoid osteoma of the mandible: A case report with review of the literature. J Dent Sci. 2017; 12: 185–189. DOI: https://doi.org/10.1016/j.jds.2012.10.002
Hirshberg A, Berger R, Allon I, Kaplan I. Metastatic tumors to the jaws, mouth, head and neck. Pathol. 2014; 8: 463–474. DOI: https://doi.org/10.1007/s12105-014-0591-z
Phal P, Myall R, Assael L, Weissman J. Imaging findings of bisphosphonate-associated osteonecrosis of the jaws. Am J Neuroradiol. 2007; 28: 1139–45. DOI: https://doi.org/10.3174/ajnr.A0518
Obinata K, Shirai S, Ito H, et al. Image findings of bisphosphonate related osteonecrosis of jaws comparing with osteoradionecrosis. Dentomaxillofac Radiol. 2017; 46: 20160281. DOI: https://doi.org/10.1259/dmfr.20160281 | <urn:uuid:5468c6d6-f948-4947-a1d9-7f86cff07cdb> | CC-MAIN-2021-21 | https://www.jbsr.be/articles/10.5334/jbsr.2208/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00615.warc.gz | en | 0.872695 | 5,036 | 3.015625 | 3 |
Original Research Article
Impact of COVID-19 Lockdown on Physical Activity Among the Chinese Youths: The COVID-19 Impact on Lifestyle Change Survey (COINLICS)
- 1West China School of Public Health and West China Fourth Hospital, Sichuan University, Chengdu, China
- 2School of Health Caring Industry, Sichuan University of Arts and Science, Dazhou, China
- 3International Institute of Spatial Lifecourse Epidemiology (ISLE), Hong Kong, China
- 4Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong, China
Background: The study sought to assess the changes in physical activity (PA) and sedentary time among Chinese youths at different stages after the COVID-19 outbreak.
Methods: The study was based on a retrospective online survey conducted in May 2020. More than 10,000 youths voluntarily recalled their PA-related information at three stages: before COVID-19 (January), during lockdown (February), and after lockdown (May). χ2 tests were conducted to evaluate the significance of differences in participants' characteristics between sexes, and Wilcoxon Rank Sum tests were performed to examine the significance of differences in changes in PA and sedentary behavior levels between sexes.
Results: A total of 8,115 participants were included, with a mean age of 20. The percentage reporting no PA per week increased significantly and then fell slightly, while the percentage reporting ≥150 min/week decreased substantially and then rebounded partially (all p < 0.001); for instance, the percentage of males reporting ≥150 min/week of total PA fell from 38.6 to 19.4%, then rebounded to 25.3%. Mean hours per day spent in sedentary behaviors increased significantly during lockdown compared with pre-COVID-19 (all p < 0.001). More participants reported a decreased PA level than an increased one, and more participating youths reported an increased sedentary behavior level than a decreased one.
Conclusions: The study found COVID-19 had both immediate and longer-term impacts on self-reported physical activities and sedentary behaviors among Chinese youths. Relevant efforts should be strengthened to get youths physically moving again.
Modern post-industrial life has been characterized by physical inactivity and sedentary behavior, a global pandemic that imposes enormous costs on human beings (1). Physical inactivity and sedentary behavior have been associated with a large number of mental and physical chronic diseases, including but not limited to depression, heart disease, stroke, diabetes, and cancers (2–4). Many efforts have been made over the years to address physical inactivity and sedentary behavior, yet the problem persists (5, 6): PA patterns have not improved, and time spent in sedentary behavior has increased significantly over time (6). At the current pace, the 2025 global PA target set by the World Health Organization member states (a 10% relative reduction in insufficient PA) will not be met (5).
However, the current situation may have been worsened by the stay-at-home orders issued by many governments around the world to contain the spread of the coronavirus disease 2019 (COVID-19) pandemic, which have significantly affected the lives of millions of people globally. Although some comments and reviews have indicated that PA is essential for mental and physical health during lockdown (7–10), the few studies examining the impacts of the COVID-19 pandemic on PA and sedentary behavior have revealed that PA substantially decreased while sedentary time significantly increased during lockdown among children and adolescents (11–13). Such evidence is still missing for youths, who are particularly vulnerable to lifestyle changes (14). Moreover, another important question concerns the longer-term impact of the COVID-19 pandemic on PA and sedentary behavior patterns; e.g., could the PA level, if affected by lockdown measures, return to normal after the lockdown is lifted (9)? If the immediate impacts last and become new social norms, the consequences of COVID-19-related PA reduction and increased sedentary behavior could be catastrophic, and further aggressive efforts would then be needed.
Therefore, the aim of this study was to assess the immediate and longer-term changes in PA and sedentary behavior among youths, using a large sample from all provinces of China. These timely results should draw the attention of a wide array of stakeholders, from clinical practitioners to policy-makers, to the changed PA and sedentary behaviors among youths, so that these influences can be considered and, hopefully, remedied through clinical practice and policy interventions during this unusual period.
This study was based on the COVID-19 Impact on Lifestyle Change Survey (COINLICS), a retrospective online survey using a self-administered questionnaire distributed via social media platforms in May 2020. Recalled information was collected for three stages: before COVID-19 (January 2020), during lockdown (February 2020), and 3 months after the lockdown was lifted (May 2020). The inclusion criteria were: (1) being at a post-mandatory education level (high school, junior college, undergraduate, or postgraduate student); (2) residing in China and speaking Chinese; (3) owning a smartphone with internet access.
A web-based questionnaire was initially distributed among three Tencent QQ groups and three WeChat groups of educators at three education levels (high school, college, and graduate school). At least two educators from each province shared the online questionnaire with their students through Tencent QQ and WeChat groups and/or moments. Those who completed the questionnaire were also encouraged to forward it to others. Informed consent was collected from all participants on the online questionnaire; only those who agreed to participate and clicked the "agree" button could continue. Three common-sense questions (e.g., "Where is the capital of China?") were used to check the validity of each response; if any of the three was answered incorrectly, the questionnaire was considered invalid. The questionnaire had to be completed online anonymously and usually took 10–20 min. The study was approved by the Sichuan University Medical Ethical Review Board (KS2020414).
A total of 10,082 participants were recruited from all Chinese provinces using snowball sampling from May 9 to May 24, 2020. Of these, 1,967 were excluded for failing the plausibility criterion we applied (reported PA time + sedentary behavior time + sleep time had to be ≤24 h/day), leaving a final sample of 8,115. Since all questions were required, there were no missing data.
The International Physical Activity Questionnaire (IPAQ) long form was used to measure physical activity and sedentary behavior (15). Since all participants were full-time students, the occupational PA domain was removed. For each domain of PA (i.e., transportation, housework, and leisure time), a score was summed separately: the weekly minutes of moderate activity (including walking and cycling) plus twice the weekly minutes of vigorous activity (16). A total PA score was then calculated by adding the scores of all domains. Moderate activities are defined as those performed for at least 10 min that produce a moderate increase in respiration and heart rate and cause sweating, while vigorous activities produce a greater increase in respiration, heart rate, and sweating. According to the guidelines of the US Department of Health and Human Services (17), sufficient PA was defined as ≥150 min/week, and PA was categorized into three levels: none, 1–149 min/week, and ≥150 min/week (18). Changes in PA levels were grouped as increased (moving from a lower to a higher level), decreased (moving from a higher to a lower level), or constant (staying at the same level). The same rule applied to changes in sedentary behavior levels; a minimal sketch of this scoring is given below.
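The following is a minimal illustrative sketch in R (the language later used for the analyses) of the scoring and level-change coding described above. It is not the authors' code: the function and variable names are hypothetical, and the example values are invented.

```r
# Score one PA domain: weekly moderate minutes count once,
# vigorous minutes count twice (hypothetical helper, not the authors' code).
score_domain <- function(moderate_min, vigorous_min) {
  moderate_min + 2 * vigorous_min
}

# Example participant: weekly (moderate, vigorous) minutes by domain
transport <- score_domain(60, 0)   # e.g., walking or cycling to class
household <- score_domain(30, 0)
leisure   <- score_domain(40, 20)

pa_total <- transport + household + leisure  # 170 min/week in this example

# Categorize the weekly total into the three levels used in the paper
pa_level <- cut(pa_total,
                breaks = c(-Inf, 0, 149, Inf),
                labels = c("none", "1-149 min/week", ">=150 min/week"))

# Code a change between two stages as decreased / constant / increased
change_level <- function(before, after) {
  d <- sign(as.integer(after) - as.integer(before))
  c("decreased", "constant", "increased")[d + 2]
}
```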
Information regarding sedentary behavior was collected by the IPAQ, too. Average time spent in sedentary behaviors (e.g., watching TV) during the three stages was recorded for workdays and weekends separately.
In addition to the IPAQ long form, the following variables were also collected: sex, age, ethnicity, urbanicity, region, household income, major, and current education level.
Descriptive statistics were calculated as mean and standard deviation (SD) for continuous variables and percentages for categorical variables. χ2 tests were conducted to evaluate the significance of differences in participants' characteristics between sexes, and Wilcoxon Rank Sum tests were performed to examine the significance of differences in changes in PA and sedentary behavior levels (under lockdown vs. pre-COVID-19 and lockdown lifted vs. pre-COVID-19) between sexes. In addition, Cuzick's tests for trend were carried out to determine the associations between PA and sedentary behavior levels and the different stages of COVID-19. Furthermore, to understand the association between participants' characteristics and changes in PA total levels (decreased vs. constant), multivariable logistic regressions were conducted. All statistical analyses were performed using R 3.6.2 and statistical significance was declared if p < 0.05 (19).
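As a concrete illustration of the tests named above, the sketch below runs them with base-R functions on a small synthetic data set; the data frame and its columns are hypothetical stand-ins, not the study data. Cuzick's test for trend itself is available in add-on packages (e.g., cuzickTest in PMCMRplus); the base-R chi-square test for trend is used here purely for illustration.

```r
set.seed(1)
df <- data.frame(
  sex       = factor(sample(c("male", "female"), 200, replace = TRUE)),
  education = factor(sample(c("high school", "undergraduate", "graduate"),
                            200, replace = TRUE)),
  # change in PA level coded -1 = decreased, 0 = constant, 1 = increased
  pa_change = sample(-1:1, 200, replace = TRUE)
)

# Chi-square test: participants' characteristics by sex
chisq.test(table(df$sex, df$education))

# Wilcoxon rank-sum test: changes in PA level by sex
wilcox.test(pa_change ~ sex, data = df)

# Trend in the share reporting >=150 min/week across the three stages
# (counts are invented for illustration)
prop.trend.test(x = c(90, 45, 60), n = c(200, 200, 200))
```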
In the study, a total of 8,115 participants were included, with an average age of 20 (range, 15–33). As shown in Table 1, there were more females than males (5,688 vs. 2,427). Han people dominated our sample (over 95% were of Han ethnicity). Slightly more non-urban than urban participants were included, and the majority were from western China. Males were more likely to be majoring in science or engineering (50.8%), while females tended to be majoring in medical science (39.4%) and social science (43.1%) (p < 0.001). In terms of current education level, undergraduate students were the majority (76.7% of males and 72.6% of females).
As shown in Figure 1, PA patterns were significantly associated with the stages of COVID-19 in all four domains (total PA, leisure-time PA, household PA, and transportation PA) for both males and females. Specifically, the percentage reporting no PA per week increased significantly and then fell slightly, while that of ≥150 min/week decreased substantially and then partially rebounded (all p < 0.001, see Supplementary Table 1). For example, the percentage of males reporting ≥150 min/week was 38.6% pre-COVID-19, dropped by almost half during lockdown (19.4%), and then recovered slightly (25.3%) after the lockdown was lifted (p < 0.001).
Figure 1. Physical activity at different stages of COVID-19 (pre-COVID-19, under lockdown, and 3 months after the lockdown was lifted) by sex among participating youths. Physical activity (none, 1–149 min/week, and ≥150 min/week) was significantly associated with the stage of COVID-19 in all four domains (total PA, leisure-time PA, household PA, and transportation PA) for both males and females (p < 0.001). See Supplementary Table 1 for more details. PA, physical activity.
Figure 2 shows the time spent in sedentary behaviors before COVID-19, during lockdown, and 3 months after the lockdown was lifted for both males and females. On both weekends and workdays, mean hours per day spent in sedentary behaviors increased significantly during lockdown compared with pre-COVID-19. Nevertheless, sedentary time did not fall back in May 2020 and even increased slightly (all p < 0.001, see Supplementary Table 1). For example, the average time women spent in sedentary behaviors on a workday was 4.3 h pre-COVID-19, increased to 5.1 h during lockdown, and finally reached 5.5 h (p < 0.001).
Figure 2. Time spent in sedentary behaviors at different stages of COVID-19 (pre-COVID-19, under lockdown, and 3 months after the lockdown was lifted) by sex among participating youths. Time spent in sedentary behaviors differed across the stages of COVID-19 on both weekends and workdays for both males and females (p < 0.001). See Supplementary Table 1 for more details.
Table 2 presents the changes in PA and sedentary behavior levels during lockdown vs. pre-COVID-19 and after the lockdown was lifted vs. pre-COVID-19. While most participants remained constant in PA level and sedentary behavior level across both comparisons, more participants reported a decreased PA level than an increased one (for all PA domains and total PA), and more participating youths had their sedentary behavior level increase than decrease. For instance, 24.7% of males reported a decreased total PA level during lockdown vs. pre-COVID-19, whereas as few as 3.5% reported an increased level. The changes in total PA after the lockdown was lifted vs. pre-COVID-19, and the changes in weekend and workday sedentary time in both comparisons, differed significantly between males and females (p < 0.05).
Table 2. Changes in physical activity and sedentary behavior levels during lockdown vs. pre-COVID-19, and after the lockdown was lifted vs. pre-COVID-19, among participating youths.
Our multivariable logistic regression (see Supplementary Table 2) shows that older participants (OR, 1.06; 95% CI, 1.02–1.09), those with higher household income (e.g., OR, 1.46; 95% CI, 1.23–1.73 for the ¥12,000–20,000 group), and undergraduates compared with high school students (OR, 1.53; 95% CI, 1.25–1.87) were more likely to report a decreased PA level under lockdown, whereas non-urban participants (OR, 0.80; 95% CI, 0.72–0.90) were less likely to do so. After the lockdown was lifted, higher household income groups (OR, 1.37; 95% CI, 1.16–1.63 for the ¥12,000–20,000 group; OR, 1.26; 95% CI, 1.03–1.54 for the ¥60,000–100,000 group; OR, 1.54; 95% CI, 1.24–1.93 for the ¥100,000–200,000 group) were more likely to report a decreased PA level.
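For readers unfamiliar with how such odds ratios are derived, the sketch below fits a logistic regression on synthetic data and exponentiates the coefficients to obtain ORs with 95% CIs. The outcome coding (decreased vs. constant PA level) mirrors the analysis described in the methods, but the data frame and covariates are hypothetical stand-ins, not the study variables.

```r
set.seed(2)
dat <- data.frame(
  decreased = rbinom(300, 1, 0.3),   # 1 = PA level decreased, 0 = constant
  age       = sample(15:33, 300, replace = TRUE),
  urban     = factor(sample(c("urban", "non-urban"), 300, replace = TRUE)),
  income    = factor(sample(c("low", "middle", "high"), 300, replace = TRUE))
)

fit <- glm(decreased ~ age + urban + income, data = dat, family = binomial)

# Exponentiating the log-odds coefficients and their Wald 95% CIs
# yields odds ratios like those reported above
exp(cbind(OR = coef(fit), confint.default(fit)))
```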
China, as the first country to be impacted by COVID-19 and also first to implement and lift lockdown measures, provides valuable evidence and references to other countries. This study, by using a large sample of Chinese, revealed that COVID-19 brought significant and unfavorable impact on youths' self-reported physical activities and sedentary behaviors, and such impacts were maintained for at least 3 months. Specifically, PA had substantially decreased yet sedentary time had significantly increased during the lockdown. After lockdown lifted, PA had rebounded slightly but sedentary time remained.
Our study showed that before COVID-19 the percentage of participants who reported sufficient physical activity (≥150 min/weeks) was low (38.6% in males and 50.9% in females), and the sedentary time was ~4 h. This is in line with existing literature (20). A small number of studies have examined the impacts of COVID-19 on PA and sedentary behavior. They consistently found that the COVID-19 had resulted in substantial and negative changes to physical activities and sedentary behaviors among children and adolescents (11–13, 21) and in adults (22–25). Our findings add to the existent literature that this trend was also found in youth and that the impact brought by COVID-19 was not only immediate also lasted. Based on the current trend, we believe that it will continue to sustain.
Although PA had rebounded slightly after lifting the lockdown, it appeared that lockdown lifting did not decrease sedentary time. As the youth become more cautious of infections of COVID-19 (26), outdoor activities might not be widely adopted despite the encouragement of governments and the resumption of school. They might tend to continue to minimize unnecessary activities as long as the perceived COVID-19 risk exists. Under such circumstances, special attention should be given to indoor PA interventions. Nevertheless, a previous study estimated that if the current patterns persist and become a new social norm, more efforts would be needed to reverse this alarming trend (9). This is especially true for sedentary behavior, which is difficult to change by health interventions (27). It is evidenced by our findings, that lockdown lift only slightly increased PA time but did not have an impact on sedentary behavior time.
To our knowledge, no studies had investigated the lasting impacts of pandemics on PA and sedentary behavior. However, a previous study examined the PA and sedentary behavior among children and adolescents affected by the 2011 earthquake and tsunami in Japan and found that PA had significantly decreased even after 3 years of the earthquake (28). This may be partially in line with our findings and suggests that particular monumental events could have lasting impacts on people's behaviors. Therefore, future studies may need to corroborate this and explore ways to mitigate such impacts.
The study has several limitations. First, recall bias might be introduced. To minimize such bias, questions on PA and sedentary behavior in different stages of COVID-19 were placed next to each other for participating youths to better compare. Since the study focused more on changes in PA and sedentary behavior instead of absolute values on certain time points, their answers were likely to be valid and reliable. Second, the use of the snowball sampling method through social media platforms does not allow us to generalize the findings to the entire youth group in China, not to mention their counterparts in other countries. However, it could be the most feasible way to reach as many as possible youths under such unusual circumstances. Third, participants might disengage from the survey before they complete, but such data was not recorded. So we would not know such impacts to our findings.
Despite these limitations, this study is unique in exploring the immediate and longer-term impacts of COVID-19 on PA and sedentary behavior among Chinese youths. It is the first scientific attempt in its kind to examine such impacts, and information from this timely and large-scale survey could inform multiple stakeholders in decision-making (29). Based on results from Supplementary Table 2, special attention may be given to students majoring in Science or Engineering and those who reported higher household income, as they were more likely to indicate decreased PA level after COVID-19.
The study found that COVID-19 had both immediate and longer-term impacts on physical activities and sedentary behaviors among Chinese youths. Although physical activities had rebounded back slightly after 3 months of lockdown lift, it appeared that lockdown lift did not improve the situation of time spent in sedentary behaviors. Relevant efforts should be supported and strengthened to get youths physically moving again.
Data Availability Statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
The studies involving human participants were reviewed and approved by Sichuan University Medical Ethical Review Board. The ethics committee waived the requirement of written informed consent for participation.
JZ: conceptualization and writing—original draft preparation. JZ and SY: methodology, investigation, and visualization. BG and XX: software. BG: validation. XX: formal analysis. RP and SY: resources. XX: Data curation. XX, BG, RP, XP, SY, and PJ: writing—review & editing. PJ: supervision and project administration. JZ: funding acquisition. All authors contributed to the article and approved the submitted version.
We thank Sichuan Science and Technology Program (2019YFS0274) and the International Institute of Spatial Lifecourse Epidemiology (ISLE) for research support.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpubh.2021.592795/full#supplementary-material
1. Ding D, Lawson KD, Kolbe-Alexander TL, Finkelstein EA, Katzmarzyk PT, Van Mechelen W, et al. The economic burden of physical inactivity: a global analysis of major non-communicable diseases. Lancet. (2016) 388:1311–24. doi: 10.1016/S0140-6736(16)30383-X
3. Moore SC, Lee IM, Weiderpass E, Campbell PT, Sampson JN, Kitahara CM, et al. Association of leisure-time physical activity with risk of 26 types of cancer in 1.44 million adults. JAMA Internal Med. (2016) 176:816–25. doi: 10.1001/jamainternmed.2016.1548
4. Kivimaki M, Singh-Manoux A, Pentti J, Sabia S, Nyberg ST, Alfredsson L, et al. Physical inactivity, cardiometabolic disease, and risk of dementia: an individual-participant meta-analysis. Bmj-Br Med J. (2019) 365:l1495. doi: 10.1136/bmj.l1495
5. Guthold R, Stevens GA, Riley LM, Bull FC. Worldwide trends in insufficient physical activity from 2001 to 2016: a pooled analysis of 358 population-based surveys with 1·9 million participants. Lancet Global Health. (2018) 6:e1077–86. doi: 10.1016/S2214-109X(18)30357-7
6. Du Y, Liu B, Sun Y, Snetselaar LG, Wallace RB, Bao W. Trends in adherence to the physical activity guidelines for americans for aerobic activity and time spent on sedentary behavior among US adults, 2007 to 2016. JAMA Netw Open. (2019) 2:e197597. doi: 10.1001/jamanetworkopen.2019.7597
9. Grenita H, Deepika RL, Shane AP, Carl JL, Ross A. A tale of two pandemics: how will COVID-19 and global trends in physical inactivity and sedentary behavior affect one another? Prog Cardiovasc Dis. (2020). doi: 10.1016/j.pcad.2020.04.005. [Epub ahead of print].
10. Lippi G, Henry BM, Sanchis-Gomar F. Physical inactivity and cardiovascular disease at the time of coronavirus disease 2019 (COVID-19). Eur J Prev Cardiol. (2020) 27:906–8. doi: 10.1177/2047487320916823
11. Jacob Meyer CM. Changes in Physical Activity and Sedentary Behaviour Due to the COVID-19 Outbreak and Associations With Mental Health in 3,052 US Adults. Cambridge: Cambridge Open Engage (2020). doi: 10.33774/coe-2020-h0b8g
13. Xiang M, Zhang Z, Kuwahara K. Impact of COVID-19 pandemic on children and adolescents' lifestyle behavior larger than expected. Prog Cardiovasc Dis. (2020) 63:531–2. doi: 10.1016/j.pcad.2020.04.013
14. Poobalan AS, Aucott LS, Precious E, Crombie IK, Smith WC. Weight loss interventions in young people (18 to 25 year olds): a systematic review. Obes Rev. (2010) 11:580–92. doi: 10.1111/j.1467-789X.2009.00673.x
15. Craig CL, Marshall AL, Sjöström M, Bauman AE, Booth ML, Ainsworth BE, et al. International Physical Activity Questionnaire: 12-country reliability and validity. Med Sci Sports Exerc. (2003) 35:1381–95. doi: 10.1249/01.MSS.0000078924.61453.FB
16. Hallal PC, Victora CG, Wells JC, Lima RC. Physical inactivity: prevalence and associated variables in Brazilian adults. Med Sci Sports Exerc. (2003) 35:1894–900. doi: 10.1249/01.MSS.0000093615.33774.0E
18. Sebastiao E, Gobbi S, Chodzkozajko W, Schwingel A, Papini CB, Nakamura PM, et al. The international physical activity questionnaire-long form overestimates self-reported physical activity of Brazilian adults. Public Health. (2012) 126:967–75. doi: 10.1016/j.puhe.2012.07.004
19. Team RC. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2020). Available online at: http://www.R-project.org/
20. Li F, Mao L, Chen PJJOS, Science H. Physical activity and prevention of chronic disease in Chinese youth: a public health approach. J Sport Health Sci. (2019) 8:512. doi: 10.1016/j.jshs.2019.06.008
21. Gilic B, Ostojic L, Corluka M, Volaric T, Sekulic DJC. Contextualizing Parental/Familial influence on physical activity in adolescents before and during COVID-19 pandemic: a prospective Analysis. (2020) 7:125. doi: 10.3390/children7090125
22. Maugeri G, Castrogiovanni P, Battaglia G, Pippi R, D'agata V, Palma A, et al. The impact of physical activity on psychological health during Covid-19 pandemic in Italy. Heliyon. (2020) 6:e04315. doi: 10.1016/j.heliyon.2020.e04315
23. Pépin JL, Bruno RM, Yang R-Y, Vercamer V, Jouhaud P, Boutouyrie P, et al. Wearable activity trackers for monitoring adherence to home confinement during the COVID-19 pandemic worldwide: data aggregation and analysis. J Med Internet Res. (2020) 22:e19787. doi: 10.2196/19787
24. Smith L, Jacob L, Butler L, Schuch F, Barnett Y, Grabovac I, et al. Prevalence and correlates of physical activity in a sample of UK adults observing social distancing during the COVID-19 pandemic. BMJ Open Sport Exerc Med. (2020) 6:e000850. doi: 10.1136/bmjsem-2020-000850
26. Zhong B-L, Luo W, Li H-M, Zhang Q-Q, Liu X-G, Li W-T, et al. Knowledge, attitudes, and practices towards COVID-19 among Chinese residents during the rapid rise period of the COVID-19 outbreak: a quick online cross-sectional survey. Int J Biol Sci. (2020) 16:1745. doi: 10.7150/ijbs.45221
28. Okazaki K, Suzuki K, Sakamoto Y, Sasaki K. Physical activity and sedentary behavior among children and adolescents living in an area affected by the 2011 Great East Japan earthquake and tsunami for 3 years. Prev Med Rep. (2015) 2:720–4. doi: 10.1016/j.pmedr.2015.08.010
Keywords: COVID-19, physical activity, sedentary behavior, youth, China
Citation: Zhou J, Xie X, Guo B, Pei R, Pei X, Yang S and Jia P (2021) Impact of COVID-19 Lockdown on Physical Activity Among the Chinese Youths: The COVID-19 Impact on Lifestyle Change Survey (COINLICS). Front. Public Health 9:592795. doi: 10.3389/fpubh.2021.592795
Received: 08 August 2020; Accepted: 07 January 2021;
Published: 04 February 2021.
Edited by:Noel C. Barengo, Florida International University, United States
Reviewed by:Raheem Paxton, University of Alabama, United States
Karen Florez, Universidad del Norte, Colombia
Copyright © 2021 Zhou, Xie, Guo, Pei, Pei, Yang and Jia. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | <urn:uuid:fd6cf6cd-fbb6-4296-8d5d-614bf874e9b6> | CC-MAIN-2021-21 | https://www.frontiersin.org/articles/10.3389/fpubh.2021.592795/full | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00417.warc.gz | en | 0.940139 | 6,209 | 2.640625 | 3 |
The weighty problem of Gravity
We investigate the flaws in the scientific theories of gravity and the reasons why some scientists are looking for alternative theories to explain the workings of this mysterious force
In this follow-up to our previous article on gravitational waves, published in February 2016, we investigate the flaws in the scientific theories about gravity and compare and contrast them with the largely unknown facts of Occult Science. In our customary afterword we explain how and why the phenomena of levitation and anti-gravity seem to defy the so-called laws of physics.
Most people are familiar with the charming but fanciful anecdote that an apple falling from a tree gave Sir Isaac Newton the idea of gravity. Had this spiritual and deeply religious man anticipated the warped ideas his researches would engender in the minds of future generations of scientists, he would probably have eaten the apple and kept quiet! The theory for which he is justifiably famous tells us that every particle of matter attracts every other particle in the universe with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centres.
This, with some modifications, continues to be used as the generally accepted approximation of the effects of gravity. But, as we saw in our article on 'Why matter matters', Newton was unhappy with this theory. In a private letter sent to his friend, Richard Bentley in 1693, he wrote: "It is inconceivable that inanimate brute matter should, without the mediation of something else which is not material, operate upon and affect other matter, without mutual contact, as it must do if gravitation be essential and inherent in it....that gravity should be innate, inherent and essential to matter, so that one body may act upon another at a distance, through a vacuum, without the mediation of anything else by and through which their action may be conveyed from one to another, is to me so great an absurdity that I believe no man, who has in philosophical matters a competent faculty of thinking, can ever fall into it. Gravity must be caused by an agent acting constantly according to certain laws; but whether this agent be material or immaterial I have left to the consideration of my readers."
Contemporary philosophers Andrew Janiak and Hylarie Kochiras interpret this as a clear affirmation of the existence of an immaterial agent acting upon material bodies. We agree and said as much in our aforementioned article on matter. We—or rather Occult Science—long ago gave this agent a name: Fohat, or Cosmic Electricity. It is Fohat that imparts all motion and awakens all into life in the Solar System. It is the life-blood of the Universe and the agent of attraction and repulsion within it. Not as understood by modern physics and according to the so-called law of gravity; but in harmony with the laws of Occult Science. The aim of this article is to expose the many flaws in the prevailing scientific ideas about gravity and why this is leading thinking scientists to seek alternative theories to explain the workings of this mysterious force.
We begin our survey of the weighty problem of gravity with a brief survey of the ideas of a leading 20th American astronomer—Thomas Jefferson Jackson See (1866-1962)—about this mysterious force. Although the good professor's theories have long since been consigned to the wastebasket of scientific pipe-dreams they are well worth examining for the clarifying light they shed on the woolly-headed speculation of so many scientists. We use the word 'speculation' rather than thinking as the latter is a step too far for many scientists, past or present!
Thomas See told his fellow scientists that almost infinitely long waves of gravity, travelling from the sun to the earth hold all the planets of the 'universe' in their orbits. He meant our Solar System of course, but such little slips do not seem to bother scientists! This idea was taken up in 1972 by a group of Israeli scientists who put forward a yet more sophisticated but similar theory based upon the speculative findings of 'gravitational waves' emitted by pulsars. At that time the majority of scientists agreed that the main source of these theoretical waves seemed to come from the centre of our galaxy. Nowadays, it is thought they come from binary neutron star systems and so-called 'black holes.' In February 2016, the 'discovery' of 'gravitational waves' by the LIGO (Laser Interferometer Gravitational-Wave Observatory), was widely and enthusiastically reported by the press. However, scientists are by no means in agreement about what was detected, or even if such 'waves' exist as we discussed in our article on the discovery.
One of the more humorous claims of science on the subject of gravity was published in Scientific American in 1970; we quote: "It is also conceivable that the mass at the galactic centre is acting as a giant lens, focusing gravitational radiation from an earlier epoch of the universe." At least they agree with Occult Science that there is something very powerful at the heart of the Universe! But then they go and spoil it all by adding: "Since gravitational radiation is not appreciably absorbed by matter, it should have been accumulating since the beginning of time. The relatively large intensity apparently being observed may be telling us when time began." We ask you—what on earth could any thinking person possibly add to that piece of priceless nonsense?
If this theory sounds familiar, it should be, as it remains the prevailing view of most scientists today as we may read in an article published in Nature in 2018 which confidently assures us that: "Astronomers have now used a single gravitational wave event originating from the shell elliptical galaxy NGC 4993 to measure the age of the universe." Well, well, well! The article continues: "An analysis of the gravitational waves from this event infers their intrinsic strength. NGC4993 has an outward velocity due to the expansion of the universe that can be measured from its spectral lines. Knowing how far away it is and how fast the galaxy is moving from us allows scientists to calculate the time since the expansion began—the age of the universe: between about 11.9 and 15.7 billion years given the experimental uncertainties." We especially like the phrase 'experimental uncertainties.' How very modest these scientists are! Recent estimates of the 'age' of the 'universe' range from 2 billion years (The Inflationary Universe, Guth, Alan H. 1997) to 20 billion years (University of Chicago, 1977). So which is it? 2 billion years or 20 billion years, or 11.9 to 15.7 billion years? The 'experimental uncertainty' between these three estimates is only a trifling 18 billion years! This reminds us of the estimates of the Sun's temperature we quoted in our article on the Occult Sun published in 2016. The difference in those figures was as 1,461 against 9,000,000. It is gratifying to know that the experimental uncertainties of science have become less even if the guesses of scientists are no better!
In a series of lectures delivered at the beginning of the last century Thomas See sought to convince his colleagues of the truth of his theory of gravitational waves. Even the most cursory perusal of these lectures shows that he did not know what Newton meant by either gravity or ether. We would add here that Newton's work on the ether has been distorted by subsequent generations of scientists, and much of it suppressed in various ways, because it did not, and does not suit science to have its erroneous theories demolished by that great man.
Sir Isaac Newton was well aware of the real nature of ether which we discussed in our article on the Occult Aether published in 2018. We would interject here that Aether prefixed with an 'a' is not the same as the 'ether' of Space Newton wrestled with. The former is the noumenon of the latter in the same way that electricity is the noumenon of the phenomena of light, heat and motion. Neither have any connection with the flammable liquid called 'ether' formerly used as an anaesthetic. By Aether, Occult Science designates an extremely rarefied material element which pervades the entire Universe, of which Cosmic electricity or Fohat is the progeny. The belief in Aether is a very ancient one. To the ancient Greeks, Aether was the rarefied atmosphere breathed by the gods on Mount Olympus.
Despite the various experiments conducted during the last few years there is no conclusive evidence that 'gravitational waves', long or short, exist. What holds everything in the solar systems of our Universe in balance is the law of attraction and repulsion we told you about in our article on 'why matter matters'. If science wishes to call attraction 'gravity', then it is welcome to do so, but then it should also recognise its opposite, which is repulsion, which it does not, thus losing that balance which is so necessary in understanding anything in the microcosm or macrocosm. Whilst some thinking scientists are beginning to give serious consideration to notion that gravity might be a repulsive force, they seem stubbornly incapable of embracing the concept of two opposing forces producing balance. Repulsive gravity which denies its attractive opposite is a complete negation of the Hermetic Law of Polarity and, we might add, of common sense, which seems sadly lacking among so many scientists.
In researching this article we came across the following statement: "Observation noted electrons streaming towards protons and the obvious conclusion is that protons attract. However, if protons attract electrons why are they not ultimately absorbed? What is not well defined is how this proton attraction somehow reverses into repulsion in close proximity to the nucleus and directs electrons into orbiting protons to create hydrogen." We do not know who formulated this idea as the source we consulted provided no citation for this statement. What makes this proposition particularly interesting from an occult perspective is the use (probably unintentional) of the word 'directs.' This implies the activity of some kind of controlling intelligence—dare we call it Fohat? If so, then some scientists are getting a little warmer in their search for alternative theories to explain the workings of gravity.
Another scientist posits: "Just as magnetism has two charges, in which particles of like-charge repulse and particles of dissimilar charge attract, might gravity have two charges in which particles of like-charge attract and particles of dissimilar charge repulse?" Occult Science answers in the affirmative, but material science fails to make this obvious connection. Whilst there is no shortage of scientific data on attraction and repulsion in connection with electromagnetism, you will search in vain for any mention of these opposing forces in connection with gravity. Why is this? Why the strange reluctance on the part of science to recognise that the same electromagnetic laws apply to all things, including so-called 'gravity'? It is all very puzzling, especially when we are told by science that attraction is the power or force through which particles of matter are attracted or drawn to one another, and that such attraction is a mutual action which in some form all bodies, whether at rest or in motion, exert upon one another. But when bodies come together from 'sensible distances', the force being directly proportional to the square of the distance between them as Newton theorised, then, says science, it is called gravity. What a strange, illogical place the minds of some scientists must be!
The only logical conclusion to be drawn from this antithetical aberration is that science does not understand the true nature of either magnetism or electricity, which we discussed in our article on the Occult Sun. All that science is able to tell us about 'repulsion' is that it is the act of repelling or driving back; specifically cited in physics, as in the case of repulsion between two magnetic poles or similarly 'electrified bodies.' Here again, science shows its ignorance of the difference between electricity and magnetism, muddling up the two terms. It also seems to forget that ordinary physics apply, in a larger sense, to what it also calls 'astrophysics'. Before any physicists among our readers take us to task for playing fast and loose with scientific terms, let us make it clear that we are fully conscious of the fact that scientists utilise terms like 'electromagnetic radiation' to identify certain forces, energies, electric and magnetic fields, etc., but as we have said, this does not mean they understand the true nature of these forces and which they have labelled, to our mind, indiscriminately.
However, as we said earlier, some scientists are approaching—albeit on tiptoe—the truths that Occult Science has always known about the balanced nature of the force they call 'gravity.' As long ago as 1972, an article was published in New Scientist headed "Gravity appears in a repulsive light." The authors went on to say: "Gravitation, the oldest established of the fundamental physical interactions, remains full of surprises...a recent report from the University of Texas examines the possibility that the force may exhibit repulsive features." Much more recently, in 2012, National Geographic News asked: "Is Dark Energy Really Repulsive Gravity?" We quote: "The leading theory to explain the accelerating expansion of the Universe is the existence of a hypothetical repulsive force called dark energy. But in a new study, Massimo Villata, an astrophysicist at the Observatory of Turin in Italy, suggests the effects attributed to dark energy are actually due to a kind of 'anti-gravity' created when normal matter and antimatter repel one another. 'Usually this repulsion is ascribed to a mysterious dark energy that would uniformly permeate the cosmos, but nobody knows what it is nor why it behaves this way,' Villata said."
These are all positive steps—albeit small ones—in the right general direction and we do not belittle them. More and more thinking physicists are now postulating the heretical notion (to science) that gravity is repulsive as well as attractive. One scientist has gone even further in suggesting that: "it's possible that the phenomenon we know of as gravity may be due to the interaction between moving subatomic particles within nucleons." This confirms the statement we quoted earlier from another scientist who deduced the same conclusion from observing the motion of electrons.
Electricity and Magnetism
Science has long regarded electricity and magnetism as a unity. In the 19th century Michael Faraday found the last 'link' between them. But the very fact that the two had to be 'linked' together shows that they are two separate forces, though working in conjunction. Scottish physicist James Clerk Maxwell then linked them with light and with other waves, and in the process foresaw waves which were then unknown—among them radio waves. If now electromagnetism and 'gravitation' could be shown to act on one another, all physical forces could be united in one grand theory. This was and remains Holy Grail of Science which we mentioned in our article on the Occult Aether. Faraday conducted many experiments to find a possible correlation between gravity and electricity, and after failing time and time again, declared: "they do not shake my strong feeling of the existence of a relation." At this juncture we would like to interject that the latest scientific dictionaries do not attempt to define what 'electricity' actually is; one says "Science is unable to offer any explanation regarding the nature of an electric charge," while under 'magnetism' we are told that it is: "The branch of physics concerned with magnets and magnetic fields."
If all these great scientists—and Faraday, Clerk Maxwell and Einstein were great men indeed—had only known that electricity is Fohat, and that it links all things together whilst that Force which Occult Science calls 'Spirit' holds all things together, the rest would have been easy, though, perhaps, this truth cannot be stated in mathematical formulae. Einstein fused his equations, which describe the non-existing gravitational field, with the equations in which Clerk Maxwell described the electromagnetic field. If this were only a formal combination, it would not have been interesting—or even new. Einstein believed that he had found the reality within the equations, by fusing together the two different kinds of symmetry within them. As always he went behind the equations to their structure, and what he proposed was both deep and original, but alas—not true! Unhappily, he was not able to propose an experiment which would test his theory. Scientists the world over are still debating how Space can be used as a 'laboratory' to test some of these theories, for the conditions required cannot be reproduced on earth. Meanwhile we still stand where Faraday stood two hundred years ago.
Science considers 'gravitation' is so pervasive and so powerful that wherever we are on earth we come under its influence. As we shall see in our afterword, there are conditions under which gravity is seemingly negated. Science does not know why, the reason being that it is dealing with a only one part (attraction) of a half-understood Principle in Nature, the complete Law being Attraction and Repulsion, producing BALANCE as we said earlier. The mathematics of Einstein's general relativity is so 'general' that it gives no practical hint of what might be looked for—again for the same reason just stated: his theory is incomplete.
Faraday invented the concept of the field to describe the forces between magnets; he pictured these forces not as physical emanations from the magnets (though Occult Science affirms they are no less physical than electricity or light), but as a field of strain through the space between them. Clerk Maxwell gave this concept mathematical form, and it runs through physics to this day, helping in this manner to conceal rather than reveal the hidden Laws of Nature.
Einstein's picture of 'gravitation' is also a field: gravity does not stream out of the earth or the sun as Thomas See thought, but is conceived as a distortion of the space around them. This is hopelessly wrong, as we pointed out in the afterword to our article on the equally wrong theory of gravitational waves. According to this wrong concept, a field is therefore essentially continuous; it undulates smoothly everywhere. Yet we know that the minute properties of matter and energy are discontinuous. An electron jumps, a photon of light 'blinks'. How can these leaps and blinks be smoothed out to give a well-mannered field? Most scientists believe that the continuity is only a statistical effect in which the large numbers involved conceal the individual leaps.
But Einstein believed that the reality lies in the continuous field, not in the quantum leap. Leaps are unpredictable, and if it is fundamental to physics that there should be an orderly march of cause and effect. Indeed, some physicists gave up looking for cause and effect in the 'minute' behaviour within the atom, not knowing that this same behaviour occurs in our Solar System, but remains undetected because of the huge difference in size between an atom and a solar system. We explained this behaviour when we discussed atoms and solar systems in our article on why matter matters. No planet has ever been observed to 'jump' but this doesn't mean it is impossible given the enormous time periods involved, as we pointed out in our investigation of the great Cycles of Sleep and Waking to which our Solar System is subject. If Einstein's unified field theory was to reach into the atom the mathematics required must somehow conjure the quantum jumps out of the continuous equations, and fix their very size—which can never happen in any case, for our human ideas of size will prevent this for ever as we said just now.
Nor can mathematics reach out to measure the 'minute' dimensions within the various types of atoms. There are simply no formulae to deal with them adequately. Not that this has deterred physicists from continually churning out figures of some kind, as well as increasingly bizarre ideas, such as string theory, just to keep pace with all their 'new discoveries' in quantum mechanics. As we have said more than once in our articles about the material world, new philosophies and theories have had to be produced in an attempt to explain the various properties of particles which are otherwise quite inexplicable. No one, not even another Einstein, could accomplish such a miracle, and in his last words this peer among scientists called it "a gigantic task", as well he might, if the term 'gigantic' can be applied to the infinitely minute world of the atom.
The Field of Life
Let us sum up all that we have said about the weighty problem of gravity, and so try to come to a sane conclusion about Einstein's theories, and the possibility of finding a still wider theory which will fit and embrace all the fields of physics. We said that there are so many factors at work in physics that Einstein's hope was impossible of fulfilment in this respect, especially as his main theory was based upon false premises, as we have pointed out. But let us remember also that there is one Universal Law—not a mere theory—which postulates that opposites complete one another, and that no material phenomena are possible if either of the opposites is absent. That Law, as our regular readers will remember, is the Hermetic Law of Polarity, discussed and explained in the ninth part of our series of articles on the teachings of Hermes—Spiritus Hermeticum. In other words, attraction is impossible without the presence of its opposite principle—repulsion, the two together creating balance.
If the Hermetic Law of Polarity can be called a Quantum, then we have a Law which fits all physical phenomena; we shall mention this word or term again presently. Occult Science teaches that Time is the reverse of Space, as free Electricity is the reverse of Magnetism, and Attraction the reverse of Repulsion. Each is also the opposite of the other, and both complete one another. Without Space—no Time; without Electricity—no Magnetism; without Attraction—no Repulsion, and vice versa and ad infinitum in each case. This being so, Einstein's 'Field', whether continuous or not, may be compared to and replaced by the magnetic emanations of any thing or body, which emanations stream or undulate out of and surround all things and bodies. These emanations are nothing more nor less than the Life-blood of the Universe, the Field of Life itself which Occult Science calls Fohat or Cosmic Electricity. Fohat, we would remind you, "is the noumenon of the seven primary forces of Cosmic Electricity, whose purely phenomenal, and hence grossest effects are alone cognisable by physicists on the cosmic and especially on the terrestrial plane. These include, among other things, Sound, Light, Colour, etc." Thus speaks H. P. Blavatsky in The Secret Doctrine.
In this connection we would also remind you of a rather curious statement in the Bible: "Thou canst not see my face: for there shall no man see me, and live" (Exodus 33:20). This reprises the ancient belief—common among all the nations of the world long before the advent of Moses, that if any being lower than the Supreme Deity should behold Him in His Full Glory he would immediately be burnt to ashes. This statement becomes perfectly explicable in the light of the fact that any living body can be destroyed and burnt by the appliance of ordinary electricity in a sufficiently high voltage. The magnetic Field of Life is a continuous equation in perfect equilibrium with the live thing or body, filled with that same electricity and emanating magnetism, forming that protective Field.
A 'living' body 'dies' when the electric-natured 'Self' departs from it for good, together with its magnetic, protecting 'Field.' Lifeless things do not die in the same sense, but retain their electric content, or life-blood, as well as their magnetic field for an indefinite period of time. When, however, the time comes, as it must, when their 'life-blood' is also withdrawn, they too perish, fall to pieces, the 'spirit' having departed from them, that which held them together, and the remains are dispersed in various ways. Their magnetic fields are also dispersed, for there is no living, eternal 'Self' in what we call inanimate things, such as there is in plants, animals and men. In lower life-forms this is our old friend the lower self or mind, be it ever so rudimentary, such as we may find in bacteria. In man it is the Higher Self. Hence, all material things come to an end when the 'spirit' leaves them, and their 'Continuous equation' ceases to exist any longer.
Some readers may be unfamiliar with the term 'Quantum' used earlier. Do not allow yourself to be overawed by this scientific term. In physics, a quantum (plural: quanta) is the minimum amount of any physical entity involved in an interaction. From this we may say that the objects we perceive with our outer senses are all quanta in so far as they occupy Space; and so also are the objects of our inner senses, in so far as they occupy Time. This, again, illustrates the Occult Law that Space is the reverse of Time, and Time the reverse, or opposite, of Space. Einstein's Field undulates smoothly only round living things, or around living bodies. Elsewhere it is absent. If electrons leap through these Fields, or photons of light blink through them, that does not affect the smooth flow of the magnetic emanations which form those Fields. Nor was the unified field theory a theory which Einstein believed would reach right into the atom as we said earlier.
The Field emanates from within and surrounds the thing or body: it does not enter them, either in fact or as a theory. But within the atom these same Fields—after emanation from within—surround all things. Occult Science affirms there are fields within fields without end, both at the smallest magnitudes and at the greatest. There is only one Theory—which is a Law—that can prove to us that all this must be so, and that is the Hermetic Law or axiom of "As above, so below; as below, so above," discussed and explained in the eighth part of Spiritus Hermeticum.
Meanwhile, the magnetic emanations from the electrical within surround all things. They act as magnets and attract their opposites, repulsing similarly constituted things and entities endowed with personal Selves, or without the same; causing formative stresses when living beings are thus combined, leading to the birth or the formation of new beings of every kind and gender. But the forces between magnets should not be regarded as fields of creative or even formative strain, though these magnetic emanations are just as physical as electricity in its manifold manifestations. Thus these Fields help to keep that Balance without which a Universe would change into Chaos in Space and Time. The various bodies, objects of every sort, alive or dead, or large or small according to our limited conceptions, move in Space and Time; all partaking of the divine electric Essence of the Maker and the Ruler who controls His Universe in Majesty and Wisdom.
Gravity then is the operation of the Law of Attraction and Repulsion which keeps all things and beings in balance. No Einstein, Clerk Maxwell, Faraday or Stephen Hawking, nor any other human scientist or sage can ever hope to penetrate, much less understand the concealed Wisdom which orders and controls this Law, or any other of the Laws of God. To do so one must needs be the equal of God, and though many scientists may think themselves so, or even deny Him altogether, what God thinks of them is best left unsaid....Yet, the Higher Self of Man, once it has sought and found the path to the Light may obtain a glimpse of the Holy Truths we have endeavoured to bring before your inner sight in this investigation, and hence comprehend something of the wondrous tapestry woven by the God of Gods for the instruction of those that love Him.
It is He, the Supreme Deity, who attracts and gathers unto himself the Sons of Light in time, and gives them strength to share with Him a particle of that vast Light and Wisdom in perfect equilibrium and Peace. And the lesser Gods, Lords of Suns and Planets, seated upon their Thrones of Splendour among the starry hosts of Heaven, utter Wisdom to one another in the midnight circuit of the planets and the suns. And their holy whisper may at times reach the within of a few earnest Seekers who consider in deepest silence the message received about the Laws of God, imbibing Truth and being blest with Understanding beyond the ken of the dry-as-dust scientist who sees no further than the end of his superior nose—if that far. Such, dear reader are the mathematics of the Soul attuned with God in everlasting union, which is the Portal and the Way to liberation from earthly theories of error.
© Copyright occult-mysteries.org. Article published 31 January 2021. | <urn:uuid:5a83bfbf-e4e2-4b38-a402-123ea9f9cbf8> | CC-MAIN-2021-21 | http://www.occult-mysteries.org/gravity.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00574.warc.gz | en | 0.956761 | 5,980 | 2.984375 | 3 |
Networking Concepts Overview
What is a network? Ans: Computers (a.k.a. hosts) able to share information and resources. A network may be small (one geographical location) or may cover the globe. Different technologies are used in each case. For a LAN we use technology such as Ethernet, which broadcasts packets so every station sees them. For a WAN we use slower, error-prone serial links that connect one LAN to another (point-to-point connections).
Qu: What is needed physically? Media such as a cable between each pair of computers (called a mesh), or in a ring, or in a bus, or in a star. (Network topologies.) (Show bus/star with 4 hosts.)
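For example, a bus and a star with four hosts each might be sketched like this:

    Bus:   A    B    C    D          Star:         A
           |    |    |    |                        |
      =====+====+====+====+=====          B ----[hub]---- C
           (one shared cable)                      |
                                                   D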
Qu: What would happen if two or more hosts transmit at the same time? Ans: a collision. To permit communications all parties must agree to a set of rules, or protocols. Today the most common set of protocols for a LAN is called Ethernet.
In addition to media and protocols, you need some hardware to connect the host to the media. Called a NIC (Network Interface Card). In Linux these have names such as “eth#”, where “#”=0,1,2,... In Solaris, they have strange names such as “elx#” depending on the manufacturer/chipset of the NIC, but many modern NICs are simply known as “hme#”.
Your operating system needs the correct driver software to allow applications to send messages to the NIC. Then some application can send a message to another host by invoking the proper API function. The data will be passed to the NIC and then sent on its way.
Qu: (point to diagram) if this host wants to send a message to that host, how does it do that? Ans: each computer needs a unique address, so when one computer sends data to another, the intended recipient knows the data was meant for it. The other computers on the network are supposed (!) to ignore the message. In the old days, the administrator manually set each NIC with a unique number between 1 and 255. Today, NICs come configured with an address already, known as the MAC Address (or BIA, data-link address, ...). For Ethernet NICs, this address is 6 bytes (48 bits). The first three bytes uniquely identify the manufacturer, assigned by the IEEE. The last three bytes are a unique serial number.
For host A to send data to host B, host A must build a packet containing the data plus a packet header which contains the destination MAC address and other information (e.g., packet length, type of data, ...). Besides the header and data, a checksum (FCS or frame check sequence) is appended to the end. This header and checksum are sometimes called framing, and these packets are sometimes called frames:
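A simplified sketch of an Ethernet frame (field sizes in bytes; the optional 4-byte VLAN tag is what brings the maximum to 1522):

    +--------------+-------------+------------+--------------+----------------+---------+
    | dest MAC (6) | src MAC (6) | [VLAN (4)] | type/len (2) | data (46-1500) | FCS (4) |
    +--------------+-------------+------------+--------------+----------------+---------+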
Overview of communication: Application invokes API function, passing it the address of the recipient and the data to send. API function builds the packet from this info and sends the packet to the NIC. The NIC sends out the packet onto the media, one bit at a time, according to the network protocol. If the data is very large, the API function will split it into several packets, which get reassembled at the destination. (Example: FTP a large file; Ethernet max packet size is 1522 bytes, including 22 bytes of header.)
Most NICs will examine only enough of the packet to see if it was intended for them or not. If not, they stop looking at it. The intended destination NIC will read in the whole packet, compute a checksum, and compare it with the checksum at the end of the packet. If they don’t match, the packet is corrupted and must be sent again.
With media, NICs, protocols, addresses, and software, a network can function. But there are problems: What if one host on your network wants to send a message (a packet) to a host on a different network? Sending data between networks is called internetworking, and a collection of networks that are connected are called an internet. (The “Internet”, with a capital “I”, refers to the global internet that connects nearly every network with every other. Lately, the popular press has stopped capitalizing it for some reason.)
The obvious solution of connecting both networks into one large network doesn’t work. The technology for LANs has strict size (number of hosts) and distance (meters not kilometers) limitations.
Instead a more complicated solution is used. A device called a router (sometimes called a gateway) with two or more NICs is used to connect the LANs.
The sending host on the first LAN sends the packet to the router (to the NIC connected to its network). The router then resends the packet out a different NIC that connects to a second LAN. Now all hosts on that LAN, including the destination host, see the packet.
For this to work, every host must know the address of the router interface connected to its network. And when the packet is received by the router, it must somehow determine through which NIC to send the packet out. The router then needs a list of all addresses on all networks. The situation is made worse since not all networks connect to the same router. It is often necessary for the first router to forward the packet to another, and then another, etc., until the packet reaches the final network.
All hosts need to know the IP (internet protocol) address of the router. This can be set with the old route command or the newer ip command on Linux. This IP address can be stored in a file and used with the route command automatically when the network is brought up. With Fedora, use the file /etc/default-route, and in Solaris use /etc/defaultrouter. (This information is often stored in different files, even on Solaris and Linux!)
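For example, to set the default gateway by hand (the address 192.168.1.1 is illustrative):

    route add default gw 192.168.1.1        (older style)
    ip route add default via 192.168.1.1    (newer style)
    ip route show                           (verify the routing table)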
The most common internet protocol suite in use today for this is IPv4. A newer version called IPv6 is available, as are competitors such as IPX. The common name for the IP suite is TCP/IP.
For one host to send a packet to another, it must know the address of the destination host. This leads to a problem of finding out the addresses of all the hosts in the world. Why not use MAC (Ethernet) addresses? Huge, unmanageable routing tables and administration problems (e.g., firewall configuration).
The most popular answer (there are others) is to assign an IP address to each NIC. IP addresses have two parts: network number and a host number. The idea is that routers only need to keep track of the various networks in the world, not all the host numbers. So, for one host to send a packet to another, it must have the IP address of the destination host. If the network number is the same for both hosts, the packet is just broadcast on the LAN as normal. If the destination is in a different network than the source, the sending host sends the packet to a router instead, trusting the router to forward the packet on its way. Once a packet is delivered to the correct network, Ethernet (MAC) addresses are used to deliver the packet to the correct host, as discussed previously.
IPv4 addresses are 32 bits long. (This isn’t a lot of addresses! This changed to 128 bits for IPv6; see RFC-3513.) They are most commonly written in dotted-decimal notation: 10.3.200.42. The 32 bits are divided into two parts: the network number and the host number. Each LAN must have a unique network number, assigned by your local ISP, who bought a block of them from a regional provider (ARIN), who in turn gets huge blocks of numbers from the IANA. ISPs lease them out to you and me.
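A worked example, assuming the first 24 bits are the network number (a /24 prefix):

    address: 10.3.200.42
    binary:  00001010.00000011.11001000.00101010
    network: 10.3.200.0  (first 24 bits)
    host:    42          (last 8 bits)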
A single host may have several NICs (so do routers). It is important to remember that it isn’t the host that has an address, it is the NIC. So a host with two NICs has two addresses. Also, a single NIC may have multiple addresses. This is known as IP aliasing. (e.g., eth0:0, eth0:1, ...)
The LANs still use Ethernet to send packets locally. But hosts on a network only know the MAC address of NICs on that network. So how does the sending host look up the MAC address of some other host, given only its IP address? One way is to keep a file of IP-to-MAC addresses on each host, and to update it regularly. For every host in the world.
A better way is to use ARP (address resolution protocol). The ARP protocol is used to map IPv4 addresses to MAC addresses, so sending hosts do not need to know the MAC address, only the destination IP address.
Illustrate this protocol: (1) source (local) host determines if destination (remote) host is on same network. If so, then (2) broadcast ARP request for destination host MAC address. If the destination host is on a different network, then broadcast an ARP request for the MAC address of the gateway (a router that connects one network to another). (3) wait for ARP reply. (4) now send packet to destination (or gateway).
Hosts maintain an ARP cache to save a lookup. To view the ARP cache, use the command arp -an.
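Sample (illustrative) output; on modern Linux systems, ip neigh show displays the same cache:

    arp -an
    ? (192.168.1.1) at 00:11:22:33:44:55 [ether] on eth0
    ? (192.168.1.7) at 00:11:22:aa:bb:cc [ether] on eth0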
RARP (which is related to BOOTP and DHCP protocols) does the reverse: Given a host’s MAC address (which is all the host typically knows when it boots up) it asks a server for its IP address. This is useful when you don’t wish to configure each and every host in your organization individually. You can do this of course, by putting the host’s IP address, the gateway router IP address, and other information in configuration files the host can use at boot time. But it is easier to have a single DHCP server on the LAN. When the host boots up it broadcasts a DHCP request packet containing its MAC address. The DHCP reply contains all the required network parameters.
Note your own computer has a virtual NIC with the address 127.0.0.1 (the loopback address, usually referred to by the name localhost).
Port numbers and Sockets
Sending a packet to a host isn't enough. When the destination host gets the packet, what program should it send it to? (Web server? Email server? Telnet?) Part of the layer 4 header includes a port number to identify which program should receive the packet and which one sent the packet. These are 16-bit values. (Example: a web browser with two windows open. You click a link in one, switch to the other and click a different link. Each browser window's HTTP request packet will use a different source port number so the replies will be sent to the correct window.)
When a host receives a packet, the kernel will check the port number to see to which process to send it.
So how does a client (say a web browser) know which port number corresponds to a server? The servers listen for a particular port number that all agree on (IANA). The standard servers use well-known port numbers in the range 0–1023. Which service (and its application level protocol) uses which port number is documented in the /etc/services file. These ports are reserved for public services such as FTP (20 and 21), telnet (23), SMTP (25), HTTP (80), and HTTPS (443). This makes it easy for clients; to contact your web server the client will send the request packet to your IP address and destination port 80. Note that on a Unix system, root privileges are needed to listen on a well-known port. (This prevents a user from crashing your web server and then starting their own, fooling people who visit your web site!)
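A few (abbreviated) lines from a typical /etc/services file, showing the service name, the port/protocol, and optional aliases:

    ftp     21/tcp
    ssh     22/tcp
    telnet  23/tcp
    smtp    25/tcp    mail
    http    80/tcp    www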
Ports in the range 1024-49151 are User (Registered) Ports, used for other public services (such as Unix rlogin or the w3c SSL services). These are also registered by IANA (as a public service).
The Dynamic and/or Private Ports are those from 49152 through 65535. Clients will use any available port number higher than 1024; the kernel keeps track of which are in use. (Note: you can use a telnet application to connect to any port, which is handy for debugging.)
A socket is the combination of an IP address and a port number. A pair of sockets will uniquely identify a network connection from a client application on one host to a server on another host.
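For example, netstat -tn (or the newer ss -tn) lists each established TCP connection as a pair of sockets; the addresses here are illustrative:

    Proto Recv-Q Send-Q Local Address        Foreign Address      State
    tcp        0      0 192.168.1.5:49204    203.0.113.80:80      ESTABLISHED

Here 192.168.1.5:49204 is the client's socket (a dynamic port) and 203.0.113.80:80 is the web server's socket.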
Many servers are not started at boot time (ftp) although some are (httpd). (Q: Why?). Instead a “super-server” known as inetd or xinetd (or systemd on modern Linux systems) is started at boot time that listens for incoming packets with a variety of port numbers. Inetd (or whatever) then checks its configuration file to determine which service daemon should get that packet, starts the server, and hands off the packet to it. Such network servers are often referred to as network daemons. Most spawn child processes for each incoming request. This important service is configured either by editing a file /etc/inetd.conf, editing files in a directory /etc/xinetd.d, or enabling and then starting a systemd socket.
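A minimal sketch of an xinetd service entry, for example /etc/xinetd.d/telnet (the server path is typical but may differ on your system):

    service telnet
    {
        disable      = no
        socket_type  = stream
        wait         = no
        user         = root
        server       = /usr/sbin/in.telnetd
    }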
To see what is listening on a given port, use (as root) fuser [-v] port/proto (for example: fuser ssh/tcp or fuser 22/tcp). lsof -i :port (for example: lsof -i tcp:22) works too. For all listening ports, use (as root) lsof -i -sTCP:LISTEN or netstat -tl.
Network sockets have proven so useful and easy to work with that many types now exist. Besides the ones for TCP, UDP, and IP (“raw”), there are sockets that support other network protocols, sockets for kernel-to-process communications (netlink sockets, used for example by udev), and process-to-process communications (unix sockets, similar to named pipes). Sockets have been referenced throughout this course; now you know what they are.
TCP/IP Transports: Connection and Connectionless
The IP addresses are sufficient to route a packet from one computer to the destination. However, two issues remain: How to identify the source process (client) and destination process (server)? The answer to this is to include another header (the transport layer header) that includes the source and destination port numbers as described above.
The other issue is dealing with errors that can occur. One common approach is to have the sending process expect an acknowledgment from the receiving process. If no such reply is found after a time-out period then the sender can resend the packet. Implementing this correctly for every client and server gets old fast!
Another solution is to have the network itself guarantee delivery of the packets. In this scheme, the two hosts set up a session, send the data, and tear down the session when done. The TCP/IP system handles the time-outs and other issues.
The first scheme is known as datagram or connectionless service and is called the UDP in TCP/IP protocol suite. The second scheme is known as a virtual circuit, or more commonly a connection-oriented service and is called TCP.
Sun developed a different scheme for connecting a server to a port number. Instead of using a well-known port number for each service, a single well-known port number (111) is used for the program portmapper (or portmap). This program assigns each RPC service a unique port number at each boot. A client wanting to use some RPC service sends a query to the portmapper, requesting the port number for that service. This scheme is full of security holes, and should be turned off on your server unless you are using RPC services. These include NFS (versions before v4) and NIS.
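You can see which ports the portmapper has assigned with rpcinfo -p; abbreviated, illustrative output:

    program vers proto   port  service
    100000    4   tcp    111  portmapper
    100003    3   tcp   2049  nfs
    100005    3   udp  20048  mountd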
The OSI Networking Model
Due to the complexity of networking, the various functions and terms have been standardized by the ISO. Known as the Open Systems Interconnection (OSI) model, this view of networking is standard knowledge for all IT workers. A diagram from fiberbit.com.tw showed how TCP/IP networking compares with the OSI layers.
Configuring Basic Networking
The NIC must be configured at boot time with many parameters, such as an IP address and mask. In addition, your host will need a default gateway address to configure its routing table. To use DNS, your computer must be assigned a hostname, a default domain name, and must be configured with the IP address of a DNS server to use to translate names to IP addresses.
The easiest way to configure TCP/IP networking is to let someone else do it. One way to achieve this is to configure your system to use DHCP (dynamic host configuration protocol) for each NIC. When the system brings up the NIC (usually at boot time), it will send a broadcast DHCP request packet. If there is a DHCP server listening on that LAN, it responds with all the required networking parameters. Your system uses that data to configure networking.
The other way to configure networking parameters is by manually editing various configuration files (or using some tool to edit those files). This is called static addressing.
For wireless laptops, Fedora Linux systems come with a newer networking system called NetworkManager. This software was poorly documented and didn’t work well for static, wired networking, but is much better now. You can use chkconfig and service (or systemctl) to turn this daemon off and make sure it stays off, and then use those tools to turn on the older network service (you may have to install that). However, Red Hat has modified NetworkManager to use the standard (for Red Hat) config files; see /etc/NetworkManager/NetworkManager.conf. (Debian has done something similar.) Thus, there is no real need to switch network services. Even the GUI config utility (nm-connection-editor) will use the standard config files.
The configuration of NICs on Red Hat systems is controlled by the file /etc/sysconfig/network-scripts/ifcfg-NameOfNIC. In many cases, NameOfNIC is eth0. On some systems, NICs are named differently (e.g., “p7p1”).
By convention, the ifcfg file’s suffix is the same as the string given by the DEVICE directive in the configuration file itself. (Some versions of Fedora at least depend on that.) System-wide settings go in /etc/sysconfig/network. Note that a setting in the ifcfg file will override the same system-wide setting, for that interface.
In the directions that follow, be sure to change eth0 to your NIC’s actual name. (You can use dmesg to see what name the kernel gave your NIC, or the “ip link” command.) To configure the system for DHCP, this file should look something like this (a minimal example; your device name may differ):
# Intel Corporation 82557/8/9 [Ethernet Pro 100]
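DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=dhcp
ONBOOT=yes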
This will cause the network system to configure everything using DHCP. To enable a normal user to set the interface up or down, add “USERCTL=yes”.
If using DHCP, you can add additional entries to control the configuration: Add “PEERDNS=no” to prevent DHCP from updating /etc/resolv.conf.
For a static setup, this file should look like this (a minimal example; substitute your own addresses):
# Intel Corporation 82557/8/9 [Ethernet Pro 100]
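DEVICE=eth0
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.0.0.2
PREFIX=24
GATEWAY=10.0.0.1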
(The IPADDR and PREFIX can be followed by a number, as modern NICs and Linux support multiple addresses/prefixes per NIC.) That will configure the IP address and mask, and the default route, but not DNS. To configure the DNS system when not using DHCP, you can edit the file /etc/resolv.conf, which should look something like the following (the addresses and domain shown are illustrative):
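search example.com
nameserver 10.0.0.1
nameserver 8.8.8.8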
NetworkManager added new entries to the config file to also configure DNS and other aspects of networking; see below. But I still just edit /etc/resolv.conf for that.
If using PEERDNS=no (or if not using DHCP), you can instead add entries such as “DNS[1|2|3]=ip-address” and “SEARCH="gcaw.org hccfl.edu"” to update resolv.conf with that information. Thus you don’t need to edit resolv.conf.
Finally, you may need to set the hostname. The default of “localhost.localdomain” is fine for most purposes. If you do need to set a different hostname, use the hostname or hostnamectl command. (There is no standard file to edit on a modern Linux system to set this, although many systems will pay attention to /etc/hostname. You should also add an entry to /etc/hosts with your static IP address and hostname.)
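For example (the hostname and address shown are illustrative):

hostnamectl set-hostname myhost.example.com
echo '10.0.0.2  myhost.example.com myhost' >> /etc/hosts    # run as root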
Configuring Service Daemons Review
Stand-alone services are simple: you start some systemd service unit or run some Sys-V init script. To have the service start at boot time, you enable it. With systemd, on-demand services create a new service unit file for each incoming request, stored on a RAM disk. These unit files are created from a template service unit, “nameOfService@.service”. (The “@” identifies this as a template unit file.)
While all systemd-managed daemons have sockets, for stand-alone services you can generally ignore them. However, for on-demand services you need to start the socket (since there is no service unit yet to start, and starting the template does nothing). To enable on-demand services at boot time, you enable the socket.
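For example, assuming the cockpit web console is installed (the unit name here just illustrates the pattern):

systemctl enable --now cockpit.socket    # enable at boot, and start listening now
systemctl status cockpit.socket          # the socket listens; a service instance starts on demand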
(Do not confuse systemd socket unit files with the general networking concept of a socket.)
At the host level, you have network security in several sub-systems: a packet filtering firewall (iptables or firewalld on Linux, and similar ones for other Unixes) is the first line of defense. This can be used to allow or deny incoming or outgoing packets, and to collect various statistics. TCP Wrappers can be used to examine incoming service requests and either allow or deny them (and log them). Note the packet filter can also be configured for this; however, TCP Wrappers can allow or deny access based on information not in the packet (time of day, availability of some resource, the username making the request, etc.)
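A minimal TCP Wrappers sketch, assuming the daemon was built with wrapper support (the network number is illustrative):

# /etc/hosts.allow
sshd: 192.168.1.
# /etc/hosts.deny
ALL: ALL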
The various services that listen for incoming requests can generally also be configured for security. Additionally, any network services that authenticate users will likely use PAM or other security systems; you must remember to configure those too.
Incoming data, such as FTP and WebDAV uploads, email, etc., can be scanned for viruses and other malware. A malware scanner such as ClamAV is used for this.
You can run network monitoring tools that look for known attacks or any suspicious activity, and block access to attacking hosts (as well as log and alert the SA).
Perfect security is an illusion! Even with all this security, attackers can get in. By setting up file permissions carefully and using service isolation techniques, you can limit or prevent a successful penetration from doing (much) harm to your server.
Network Trouble-Shooting Tools and Techniques
Common tools are ping, traceroute, mtr (a modern Linux replacement for those two tools), top, ipcalc and ethtool or mii-tool (Linux), host, dig, ip (Linux; ifconfig for Unix), and various log files.
Before using any tools, verify the problem exists and is network related (and not a turned off server).
Test with ping (or mtr) first, which tests layers 1–3 (i.e., basic IP connectivity). Then if that works, try nc (netcat) or telnet next (to, say, port 80 on a web server, or port 25 on a mail server). If that fails, the problem is usually a firewall between client and server blocking access.
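For example (the host names and ports are illustrative):

ping -c 3 www.example.com     # layers 1-3: basic IP connectivity
nc -vz www.example.com 80     # layer 4: can we reach the web server's port?
telnet mail.example.com 25    # or test an SMTP server the old-fashioned way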
If ping fails, try traceroute to localize the fault. The problem is almost always a bad cable, connector, or NIC, or some joker unplugged something.
If the problem appears to be with a host, use the command ifconfig to examine the NIC parameters and route to examine the routing table.
Also, examine the resolver files to check for DNS errors. Use nslookup and/or dig to check for DNS server errors (discussed in more detail below).
To examine the firewall setup (including masquerading) you use iptables -L [-v]; iptables -t nat -L [-v] to list all rules (if you use iptables and not firewalld). For firewalld, use firewall-cmd --list-all.
Often the best way to troubleshoot networking issues is to examine the IP packets. This is a useful technique for security monitoring as well. Various tools exist that can display network packets and store them in a file. You can show all packets or, using a filter, only those of interest.
The tool wireshark is the most common and powerful one for this. Out of the box, wireshark knows hundreds of protocols and can dissect them for you. It has an easy GUI interface for basic tasks, although such a powerful tool can be challenging to master.
Sometimes you need a command line tool, in order to use it from a timer service (to capture packets at a certain time of day) or in a shell script. tcpdump is a very mature command line tool for this. But if you don’t want to learn it, you can use tshark, the command line version of wireshark.
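For example, to capture web traffic on one NIC to a file for later analysis (the interface name is illustrative):

tcpdump -i eth0 -w web.pcap 'tcp port 80'    # capture matching packets to a file
tcpdump -r web.pcap                          # read the capture back later
tshark -i eth0 'tcp port 80'                 # tshark accepts the same capture filters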
Art history majors study and analyze famous works of art, and art history programs teach skills related to research, critical thinking, and communication. Graduates can pursue a variety of positions, including roles as curators, archivists, and professors.
On this page, readers can find information about art history careers and salaries, resources for art history majors, and answers to frequently asked questions.
Why Pursue a Career in Art History?
Graduates can pursue many different careers with an art history degree, including jobs in museums, archives, and schools. Art history students gain analytical thinking and communication skills. Students also learn how to research and evaluate artwork and historical documents.
Art history careers often require workers with strong organizational, planning, and creative skills. Most of these careers take place predominantly indoors or behind a desk. Research positions require long periods of focused, independent work, while curator and educator jobs require candidates with excellent interpersonal and communication skills.
Art History Career Outlook
The Bureau of Labor Statistics (BLS) projects 9% growth for archivists, curators, and museum workers between 2018 and 2028. These professionals make an annual median salary of $49,850. Job applicants usually need at least a bachelor's degree, although some positions require a master's degree.
The table below displays the median annual salaries for a few common art history careers. This table explores salaries for different levels of experience, from entry-level workers to experienced professionals.
|Job Title|Entry Level|Early Career|Midcareer|Experienced|
|Art Gallery Curator|N/A|$43,380|$49,480|$50,180|
Skills Gained With an Art History Degree
Art history programs help students develop skills that are applicable to art history careers and roles in many other fields. Analyzing art pieces by giving presentations and writing research papers builds research, communication, and analytical abilities, while also introducing students to artistic techniques, historic eras, and cultures from around the world.
Exhibit designers, museum curators, art buyers, and tour guides can apply this knowledge in museum and gallery positions.
Students learn to determine the artist, era, and medium of a piece of art. Learners also examine pieces for meaning. These tasks develop analytical abilities that help art buyers and consultants estimate the value of art. Students also gain problem-solving skills they can use in management positions.
- Attention to Detail
To analyze a piece, students must consider small details of the work. For example, color variations can impact a work's meaning and architecture depicted in a painting can reveal the work's timeframe. This attention to detail can also help museum workers and art buyers inspect pieces for damage. This ability also benefits event planners, managers, and photographers.
Art history students often write papers and deliver presentations on specific pieces, eras, artists, and regions. These assignments prepare learners for positions that require strong verbal and written communication skills, including roles as tour guides, art history writers, and museum curators. These skills also apply to careers in other disciplines, such as public relations, speech writing, and news reporting.
Students perform research to determine cultural and historical elements in artwork. Conducting research can also help students better understand artists, connections between eras, and stories behind pieces, such as Greek myths in Renaissance works. Research skills are useful in many fields.
- Cultural Awareness
Art history programs include explorations of art from various geographic areas, such as Greece, Rome, Spain, China, and the United States. Students explore global history and cultures, gaining knowledge they can apply to positions in art museums. This global awareness is also beneficial for public relations specialists and politicians.
Art History Career Paths
Art history programs may offer concentrations related to a single artistic method, such as architecture, or an era, like the Middle Ages. Each concentration prepares students for specific art history career opportunities.
For example, students who plan to oversee museums can pursue a museum studies concentration. Aspiring art consultants, on the other hand, can look for programs that focus on certain artistic eras.
Upon graduating, art history students can pursue careers as museum archivists, educators, and self-employed artists, among many other positions. Read on to learn more about how to begin a career in this field.
How to Start Your Career in Art History
Many art history careers require a postsecondary degree. Some jobs, like museum technicians, only require a bachelor's degree. Other professionals in the field, including archivists, curators, and conservators, need a master's degree in art history or a related field. Additionally, most positions in postsecondary education require a master's degree or higher.
Internships and volunteering can help young professionals find their footing as they look for careers with an art history degree. Some employers may accept professionals with a lower degree level if they have lots of practical experience in the field.
Associate Degree in Art History
An associate in art history qualifies graduates for positions as museum tour guides, photographers, and desktop publishers. The curriculum of an associate program also helps learners understand aesthetics related to various types of artistry, such as knitting, sculpting, cake decorating, quilting, and fashion design.
Associate programs typically require about 60 credits of general education and major courses. General education courses provide a solid foundation in the humanities and help learners prepare for careers in many fields.
What Can You Do With an Associate in Art History?
Photographers may take pictures to sell as artwork or contribute images to news pieces. These professionals may also manage portrait sessions at studios. Responsibilities often include designing sets for photo sessions and editing photos with software. This career does not always require a degree, but an associate in art history helps students develop applicable knowledge and skills.
- Desktop Publisher
Desktop publishers develop layouts for print and virtual materials, such as brochures and newspapers. These professionals may work with other designers and writers, and they use software to place images. This position requires an associate degree.
- Tour Guide
Tour guides lead groups through places of cultural or historic significance and provide information on settings and pieces. For example, museums may hire tour guides to explain pieces of art and their historic context to customers. Aspiring art museum tour guides can earn an associate in art history to demonstrate their knowledge of the field.
Bachelor's Degree in Art History
Art history bachelor's programs typically require about 120 credits and include several core art history courses. Learners explore more art history topics, techniques, and eras than students in associate programs, gaining a deeper understanding of the field.
Bachelor's programs typically allow participants to take several elective courses, and they may also offer certifications and minors relevant to certain careers. For example, an aspiring art buyer may take finance electives.
Professionals with a bachelor's degree and sufficient experience often find work as museum technicians, purchasing agents, and exhibit designers. Learners can also use their art history knowledge to plan events with artistic or historic themes.
What Can You Do With a Bachelor's in Art History?
- Museum Technician
Museum technicians help maintain the safety of museum pieces. These professionals tend to insurance matters, assess the risk associated with relocating pieces, and keep records. Technicians may also communicate with museums interested in borrowing pieces. This position often requires a bachelor's degree in a field related to museum work, such as art history.
- Meeting, Convention, and Event Planner
These planners manage events that reflect various consumer objectives. They consider budgets, schedules, locations, and meals for events like weddings and business meetings. Museums may hire planners to launch new programs or exhibits. These positions typically require a bachelor's degree.
- Art Director
Art directors communicate with clients before designing sets and publications. These professionals may build budgets, lead other designers, make decisions about images and layouts, authorize finished pieces, and offer completed products to customers. Organizations often require art directors to hold a bachelor's degree.
- Exhibit Designer
These designers develop exhibits that fulfill specific purposes. For example, an art museum's exhibit may showcase works connected to a certain artist or time period. Designers draft exhibit structures using software, manage installation details, and ensure that exhibits remain in working order. Exhibit designers typically need a bachelor's degree.
- Purchasing Manager, Buyer, and Purchasing Agent
These workers manage the buying and selling of products for companies, such as art pieces for museums. These professionals identify trustworthy sellers, decide on pricing, create contracts, and keep records. A bachelor's in art history helps students develop knowledge about certain artistic periods and works, which prepares them to perform these tasks for art museums.
Master's Degree in Art History
Career options expand for art history majors who earn a master's degree. These programs help learners develop an in-depth understanding of the field, qualifying them for positions as museum curators and archivists. Master's students often complete a capstone, such as a thesis or a project.
Master's degree-holders can also shape art curricula for K-12 school districts, which requires advanced research skills and art history knowledge. Art history master's programs also train students to make connections between art and culture and gain insights into past civilizations, which is a necessary skill for sociologists.
What Can You Do With a Master's in Art History?
- Museum Archivist
Archivists manage and organize records for museums. They may also oversee museum programs, guide museum employees working on exhibits, and create policies regarding museum pieces. These professionals typically need a master's in a relevant field, such as history or archival science.
- Museum Curator
Curators manage museum pieces and objects. Responsibilities include buying new items and coordinating exhibits. Curators may communicate with other museums about lending pieces, oversee museum research, and act as a spokesperson at events. Curators typically need a master's degree in a relevant field, including art history.
- Museum Conservator
Conservators maintain museum records and make decisions on piece conservation. They often work with technology, such as X-ray machines, to determine each piece's needs. Conservators may also research and write on field topics and assist with museum programs. These professionals need a master's in conservation or a similar discipline.
- Instructional Coordinator
Instructional coordinators analyze data to determine learning needs for school districts. Coordinators offer advice on teaching techniques, curriculum alterations, course materials, and new technologies. They may also oversee faculty workshops. Graduates of art history master's programs can perform these tasks for courses related to their field.
Sociologists explore the behavior of individuals and groups through research and experimentation. These professionals publish findings in books, scholarly journals, and online articles. Students earning an art history master's degree prepare for this position by examining the culture, history, and meaning in artwork. Sociologists typically need a master's degree.
Doctoral Degree in Art History
A doctoral degree is the highest level of education and typically requires students to complete a dissertation on an art history topic. Dissertation requirements may involve taking several research classes and giving an oral defense.
A doctoral degree qualifies graduates to teach at universities and oversee college departments. Doctoral degree-holders qualify for some of the most lucrative art history careers, which can be extremely competitive. A doctoral degree can also help applicants stand out when pursuing positions with lower educational requirements.
What Can You Do With a Doctorate in Art History?
- Postsecondary Education Administrator
The responsibilities of postsecondary education administrators vary based on their specific position. Administrators may work as deans and provosts, tending to budgets and policies. Depending on the position, higher education administrators generally require a master's or doctorate.
- Postsecondary Teacher
Postsecondary educators teach college courses within their discipline. These educators create syllabi, provide feedback on student assignments, and offer guidance on course registration. Professors may also suggest curriculum changes to their department and publish scholarly pieces. Colleges and universities typically require postsecondary teachers to hold a doctorate.
How to Advance Your Career in Art History
Employers often consider experience when awarding promotions. However, professionals can take advantage of other methods to move their career forward. For example, earning additional education or certification can make an art history professional more desirable to employers. This can also lead to a higher salary.
In the following sections, readers can find information about how to advance their art history careers. Readers can learn about certification and licensure opportunities, continuing education, and other future steps to take.
The Academy of Certified Archivists (ACA) provides certification for professional archivists, including art historian archivists. ACA requires that candidates take an exam about archival practices and standards. The organization charges an examination fee and a certification fee for candidates who pass the exam.
Certification lasts five years, after which professionals must renew their credentials.
Continuing education offers professionals a chance to advance their skills or learn more about their field without needing to return to school and earn another degree. Art history professionals can find continuing education courses through professional organizations and museums.
Most continuing education programs apply to specific careers. For example, museum curators can find continuing education opportunities with the Center for Curatorial Leadership, while art teachers can find options through online programs, community colleges, and universities.
Art history professionals looking to advance their careers should look for opportunities to improve their skills and make new connections. Some workers join professional organizations, which often offer resources, conferences, continuing education opportunities, and certification.
While professionals typically pay for membership, the benefits of joining a professional organization can far outweigh the cost. In particular, professional organizations provide opportunities for networking with peers and leaders in the field.
How to Switch Your Career to Art History
Some art history professionals, such as art history archivists and museum technicians, must hold degrees in art history. Professionals in these careers need actual art history knowledge to carry out their jobs effectively.
Other positions, such as general archivists, museum curators, and educators, may allow more flexibility when it comes to entering the field. Bachelor's and graduate degrees in the humanities or communication may translate well.
Professionals considering a career change should carefully assess the skills and knowledge they need for their desired job before deciding how to proceed.
Where Can You Work as an Art History Professional?
Art history knowledge is applicable to many careers in museums, and the degree provides an understanding of aesthetics to help graduates develop new art.
Students also gain insights into historic eras, which can prepare them for teaching careers and for positions advising individuals and organizations on art purchases. Graduates can also help theatrical companies choose realistic costumes, settings, and stage props for period plays.
When choosing a school, learners should consider which industry they plan to enter to ensure their program offers relevant preparation.
- Colleges and Universities
Colleges and universities educate students in a variety of fields, including art history. Depending on their education level, graduates can teach art history courses or work as administrative assistants, deans, and provosts.
- Museums, Art Galleries, and Historical Sites
These sites familiarize the public with culture and history by showcasing artistic pieces from different eras and geographic locations. Art history graduates can work as museum curators, technicians, purchasing agents, and tour guides.
- Advertising, Public Relations, and Marketing
These fields focus on developing public awareness of products and services, along with fostering relationships between companies and consumers. Art history graduates can perform these tasks for museums, historical sites, and galleries.
- Performing Arts Companies
Performing arts companies put on productions, including plays, ballets, magic shows, and concerts. These companies may consult art history graduates when designing costumes and sets for historic pieces.
- Freelance and Independent Artists
This industry includes various types of art, such as writing and music. Art history graduates with a solid understanding of aesthetics can excel in these positions. Professionals can also use their art history knowledge to create pieces that explore past cultures and ideas.
Interview With a Professional in Art History
Megan Mahn Miller
Designated as a master personal property appraiser by the National Auctioneers Association, Megan Mahn Miller earned a bachelor's degree in art history from the University of Minnesota. Mahn Miller began her career working for nonprofit organizations supporting the arts. In 2006, Mahn Miller joined Julien's Auctions -- an internationally renowned auction house of rock and roll and Hollywood memorabilia.
Mahn Miller attended the Reppert School of auctioneering in 2009. In 2014, Mahn Miller established Mahn Miller Collective Inc. to provide appraisal and consultation services beyond the auction house.
Mahn Miller continues to work with Julien's as a consulting sales specialist. She has also taught an appraisal report writing course for the National Auctioneers Association and is a contributor to the WorthPoint newsletter.
- Why did you decide to pursue a career in art history? Is it something that you were always interested in?
Initially, I thought I would study anthropology with an emphasis on art. My freshman year, I took classes in both and it turned out my aptitude was in art history. Anthropology became the emphasis.
At that time, in the early 1990s, students were being told that having a liberal arts degree was all that mattered -- that your actual degree subject was secondary. In one way, that was very freeing because I had no idea what I was going to do with my degree, but [it was] very unsupportive in another. I had to make a special effort to find internships and outside projects to build credentials while studying. Art has always been important to me, but it took time to figure out how to make it a career.
- How is an art history program different from other college majors?
How to say this delicately… there are a lot of people who don't take this degree seriously. And that is unfair. The study is rigorous and can incorporate many different areas of study. My personal interest was the intersection of women's studies and art. The connections to other subjects are endless.
The business of art is evolving; how art is sold, what is bought, who is buying. This creates a lot of fear from people who have been in the business and only worked in one way. It creates opportunities for young people entering the field to innovate.
- What was the job search like after completing your degree?
I was very lucky that when I graduated -- jobs were easily obtained. I started at a museum in an entry-level position. Later, I moved into arts programming on a community level. It was helpful for me to discover how I wanted to incorporate my degree into my job. I enjoyed teaching, something I discovered while assisting on a project at our campus museum. Getting art out to the public was a passion. Once I understood that, it informed my job search.
- Is art history a versatile degree? Or one that has a clear career path?
To be honest, I have been called a unicorn. It would be helpful for someone undertaking this degree to have a clear idea of what they want to do with it.
I thought I was going to end up in academia. As a student, that was all I knew. Academia is a great choice, but the reality is there are not that many positions for professors.
A person with an art history degree can work at an auction house or museum, but you should know what capacity you are interested in.
If it is curating, what specifically draws you to that position? If it is business development for an auction house, what courses will you take in order to complement your degree? If you want to represent artists or open a gallery, will you know how to get funding or clients? If you want to restore artwork, where will you study after completing your degree?
I have met very few people whose career paths followed a straight line -- and thank goodness. You will learn so much along the way. My path was winding, and in the end it was luck that brought me to appraising.
At the same time, an art history student needs to take control of their destiny. Their degree is not going to be enough. Internships, informational interviews, and connecting with people whose jobs they think they want will put them in a better position to get hired once they leave school.
- Why did you decide to start your own business? Is this something common for those who pursue a career in art history?
I worked as a property specialist at an auction house for 12 years. I loved it, but it stopped being challenging. I was getting feedback from my peers and mentors that appraising was something I could and should pursue.
You don't need to open your own business to be an appraiser. It was the right choice for me. There are a number of careers in art history that lend themselves to being a solopreneur, but it isn't everyone's fate.
- What is the most enjoyable aspect of your job? The most challenging?
What I primarily want to do every day is research. That is my passion. I enjoy that each assignment I take is different, so I am always learning. When I was working at the auction house, I loved how every day was different and the excitement of the auctions.
As a solopreneur, my job is challenging because I am not just the appraiser but also the receptionist, business developer, and webmaster. At the auction house, the deadlines were a challenge, as were the multiple ways that a consignor or buyer can be disappointed.
- What advice would you give to students considering pursuing a degree and career in art history?
Find out about as many possible careers as possible where you can use your degree. Get internships. Talk to people in the industry or at the corporation where you want to work. Find complementary disciplines to study that will make you more attractive for the position you want. Don't get frustrated if your career trajectory isn't a straight line.
- Any final thoughts for us?
Don't be afraid to ask for help. Find a mentor who will encourage you and help you grow professionally.
Resources for Art History Majors
Art history professionals can take advantage of educational and professional resources to advance their careers.
Below, readers can find information about art history resources for educators, curators, and researchers. These resources feature professional organizations and postsecondary institutions and include archives of art history sources and professional conferences.
- Professional Organizations
College Art Association of America: CAA focuses on the visual arts and delivers an annual conference and international programs. The association provides links to various publications and best practices. Members can browse open careers through an online career center.
Association of Art Museum Curators: AAMC educates the public on the responsibilities of a curator. The group provides several resources, including a forum for educational discussions and information on programs, fellowships, and internships. Members can speak at the association's annual conference and browse career opportunities through the website. The association also offers workshops related to curation and webinars on management, technology, and social media.
Art History Teaching Resources: Through this organization's website, art history educators can find lesson plans on specific eras, including Byzantine, Greek, Roman, medieval, and Renaissance art. Visitors can also browse resources related to museum experiences and book recommendations. AHTR also connects professionals with journals, blogs, and workshops.
Yale University Library: This library provides art history databases that focus on art, architecture, drama and theater studies, and general humanities. However, users may need separate accounts to access articles through certain databases, such as JSTOR. The website also links visitors to WorldCat, where they can find local libraries offering particular books.
JSTOR: Through JSTOR, users can access scholarly e-books and articles on multiple topics. Art history students can find sources on Asian, ancient, and medieval art. Many schools provide students with JSTOR access. Learners without school access can register for an individual account.
Metropolitan Museum of Art: The Met offers classroom resources that cover multiple eras and cultures, such as the ancient Near East, the Renaissance, and Byzantine art. The Met also provides exhibitions and programs for elementary, middle, and high school students. Teachers can participate in museum workshops, attend events, and browse art history publications through the website.
Massachusetts Art Education Association: MAEA provides resources related to art education, advocacy, and art history. The group also delivers exhibits on topics such as art education and photography. Members can request conference workshops, and the website provides information about career opportunities and state licensure.
SECAC: Formerly known as the Southeastern College Art Conference, SECAC focuses on visual arts at the postsecondary level. The organization hosts a conference and publishes a semiannual newsletter and a journal called Art Inquiries. Its website includes reviews of exhibitions and a list of job openings.
Association of Historians of American Art: This association focuses on American art, beginning before colonization. AHAA participates in CAA's annual conference and delivers its own symposium every two years. The organization also publishes a journal. Members benefit from networking opportunities, including a directory and a syllabus-sharing tool. Members can also access online book reviews.
Renaissance Society of America: The RSA focuses on culture and history from the years 1300-1700. The society offers an annual conference and accepts proposals for event seminars. The RSA publishes a journal and works with the Digital Humanities Summer Institute to provide financial assistance to students. The society also delivers fellowships and a mentoring program for individuals who are new to the field.
- Open Courseware
Tangible Things - Harvard: This course explores history through artwork, artifacts, and scientific specimens. Students learn about museum curation through an examination of historical and artistic items collected by Harvard University. The course explains how museum collections can shape academic disciplines and reinforce cultural ideas.
Hollywood: History, Industry, Art - University of Pennsylvania: Students explore the history of film and how the industry of Hollywood has grown and changed over the years. This course demonstrates how world events are reflected through the art of film. Students also learn about the technology that advanced and shaped this art form, from color cinematography to computer-generated special effects.
Inspiring and Motivating Arts and Culture Teams - University of Michigan: The University of Michigan presents this course through its business school in collaboration with National Arts Strategies. The class teaches valuable skills in motivation and communication. Students also learn to effectively present ideas and inspire team members to produce desired results.
Arts and Culture Strategy - University of Pennsylvania: This course helps develop effective leadership among professionals with careers in the arts. Students learn to develop and lead arts organizations in a sustainable manner. Those who finish the course earn a certificate of completion.
Art History - Journal of the Association for Art History: This journal publishes essays about new methods and areas of concern in the field of art history. The journal goes to print five times a year and takes submissions from established and emerging art history scholars. Readers can subscribe online or in print.
Oxford Art Journal: This publication offers critical works about art history topics. The journal includes articles about art across all disciplines and eras. Readers can find pieces that offer political analysis of visual art and essays offering critiques of art and culture across the globe. Readers can subscribe online or in print.
The Art Bulletin: Published by the College Art Association of America, this journal prints scholarly essays about art history practices in museums, universities, and other institutions. The publication also offers peer-reviewed articles about art history topics across time periods and styles. This journal goes to print four times a year and encourages debate about the contemporary practice of art history. Readers can subscribe online or in print.
Frequently Asked Questions
- What kind of jobs can art history majors get?
Art history majors can pursue work as museum curators, researchers, archivists, and educators. Graduates with an art history background can also find careers in business.
- How much do art historians make?
Salaries for art history careers differ depending on a worker's job title, experience level, and location. However, the BLS reports that the median annual salary for archivists, curators, and museum workers is $49,850.
- How do I get into art history?
Many community colleges and four-year universities offer on-campus and online art history programs. Students may pursue an associate, bachelor's, master's, or doctoral degree in this field.
- Is an art history degree worth it?
Art history degrees provide essential skills related to creative and analytical thinking, organization and research, management of resources, and communication. Graduates may use these skills to earn a job in art history or branch out to other fields, such as business and education. | <urn:uuid:261c814a-cc3d-491a-8375-e81f41090930> | CC-MAIN-2021-21 | https://www.bestcolleges.com/careers/art-and-design/art-history/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.87/warc/CC-MAIN-20210510033850-20210510063850-00295.warc.gz | en | 0.946157 | 5,905 | 3.03125 | 3 |
Inside the Chemistry of Hemp: A Comprehensive Guide to Cannabinoids
Cannabinoids are a group of natural compounds that can interact with our body’s cannabinoid receptors.
These cannabinoid receptors are a part of a complex signaling network that controls our immune system, metabolism, feelings of pain, anxiety, and much more.
Most cannabinoids come from cannabis (phytocannabinoids) but some are also produced by the human body (endocannabinoids) and can be made synthetically in a lab.
Many people are familiar with THC and CBD. These two cannabinoids have received a lot of attention due to being the most abundant active compounds in cannabis and are widely utilized for their health effects.
In reality, however, cannabis contains over 120 different cannabinoids, including the likes of cannabinol (CBN), cannabidivarin (CBDV), and many others.
These are considered “minor” cannabinoids because of their relatively small concentrations in most cannabis plants. But similar to CBD and THC, these substances also have many potential health benefits and uses. Here is your comprehensive guide to cannabinoids.
How Cannabinoids Work
Cannabinoids produce their health effects primarily through interacting with the body’s endocannabinoid system (ECS). There’s also evidence that many cannabinoids, such as CBD, interact with other, non-cannabinoid receptors in the body as well.
Cannabinoids and the Endocannabinoid System
The ECS is dedicated entirely to interacting with cannabinoids, which is why most of the effects of phytocannabinoids are mediated by this essential system.
Consisting of cannabinoid receptors, endocannabinoids, and metabolic enzymes, the ECS works to maintain homeostasis: a state of balance within the body.
Receptors are protein molecules present on cells that interact with specific substances and produce various effects.
Many cannabinoids can interact directly with the two cannabinoid receptors in the body, CB1 and CB2. They do so by binding to them in a similar manner as our body’s naturally produced endocannabinoids such as anandamide and 2-AG.
These receptors are found in all parts of the body, with CB1 being especially abundant in the brain and CB2 in immune system tissues.
Furthermore, research has shown that cannabinoids can interact with several other receptors that may eventually be considered as part of the ECS, such as GPR119 and GPR55.
On top of that, cannabinoids can also have less direct effects on the ECS. One of the best examples of this is CBD. While it doesn’t bind to cannabinoid receptors directly, it can modulate the function of the CB1 receptor and suppresses enzymes that break down endocannabinoids.
Other Cannabinoid Mechanisms
Aside from the ECS, cannabinoids have also been shown to interact with many other receptors present throughout the body.
Other targets noted by research studies include the central nervous system receptors for neurotransmitters such as serotonin, glycine, and GABA. These receptors control functions including memory, learning, and mood modulation.
Finally, cannabinoids can have other non-receptor effects, such as blocking transporter proteins.
How Do Cannabinoids Differ from Each Other?
Although cannabinoids are quite similar in structure, they have some slight differences that account for their varying effects. As a result, distinct cannabinoids can affect your body quite differently. The best example of this is CBD and THC.
THC is widely recognized for its mind-altering psychotropic effects which can include euphoria, impaired memory, and decreased motor skills. It's most commonly found in the flower of medical marijuana.
Meanwhile, CBD and many other cannabinoids, such as CBN and CBG, are non-psychoactive, which means they don’t alter your mental state.
THC and CBD also have various beneficial health effects, some of which are similar, while others are unique to each cannabinoid.
For example, both cannabinoids may treat pain and inflammation, but only THC has shown a positive effect on Tourette’s syndrome, whereas CBD is recognized for its potent anxiety-relieving properties.
Furthermore, cannabinoids can enhance and modify each other’s effects, a phenomenon known as the entourage effect. For example, both CBD and the minor cannabinoid THCV seem to reduce the psychotropic effects of THC.
This may explain why whole-plant cannabis products are more effective and produce fewer side effects than pure THC or CBD on their own.
Cannabinoids vs Cannabinoid Acids
Another important distinction to be aware of is between cannabinoids and cannabinoid acids.
Cannabis plants don’t directly make THC, CBD, or other cannabinoids. Instead, they produce their acidic forms, such as THCA and CBDA, which are precursors to these cannabinoids.
When heat is applied to these cannabinoid acids, they undergo a process called decarboxylation and lose their acid group, becoming THC, CBD, and so forth. That’s why these resulting cannabinoids are sometimes referred to as “activated.”
Although we’re more familiar with the non-acidic form of cannabinoids, their acidic counterparts can also have beneficial health effects. Besides, all acidic cannabinoids are non-psychotropic, so they won’t get you high.
Unsurprisingly, their levels are highest in raw cannabis and decrease when the plant material is dried or exposed to heat.
There’s no official definition for which cannabinoids count as “major.” However, most references to major cannabinoids typically include CBGA, THCA, CBDA, CBCA, and their non-acidic forms. In turn, these cannabinoids can convert into many more “minor” cannabinoids.
CBGA & CBG
Cannabigerolic acid (CBGA) is widely regarded as the “mother of all cannabinoids.” This means that cannabis plants produce CBGA first before it’s converted into other cannabinoids.
CBG is considered the next big thing in the cannabis industry because much like CBD, it’s a non-intoxicating cannabinoid with many potential health benefits. These include:
- Antibacterial properties
- Anti-inflammatory effects in studies of mice with multiple sclerosis and inflammatory bowel disease
- Neuroprotection against Parkinson’s disease in mice
- Inhibition of colon cancer cell growth
- Appetite stimulation, offering a non-intoxicating alternative to THC for wasting caused by diseases such as cancer and HIV
THC & THCA
Tetrahydrocannabinol (THC) is the best-known cannabinoid because it’s largely responsible for the psychoactive effects of cannabis. These include impaired short-term memory and motor skills, euphoria, and, in some susceptible individuals, anxiety, and paranoia.
However, research has shown that THC also has many beneficial effects and is used to help with many symptoms and disorders, including:
- Pain and inflammation
- Nausea and vomiting
- Wasting and loss of appetite
- Tourette’s syndrome
- Multiple sclerosis
- Opioid withdrawal
Like most cannabinoids, THC starts out in its acidic form — tetrahydrocannabinolic acid (THCA). THCA is a non-intoxicating cannabinoid that’s been shown to have some beneficial effects, including anti-inflammatory, neuroprotective, anti-obesity, and anti-cancer qualities.
Raw cannabis plants are high in THCA. This gets converted into THC when exposed to heat.
CBD & CBDA
Cannabidiolic acid (CBDA) is the acid version of cannabidiol (CBD). These non-psychotropic cannabinoids come with a wide variety of health benefits.
CBD is best known as the main active ingredient in CBD oil, which is used by an increasing number of people worldwide to support their overall health and address specific symptoms and conditions.
CBD is one of the most well-studied cannabinoids. Although it’s most recognized for its effectiveness in treatment-resistant epilepsy, research suggests that it may also help with a long list of health issues, including:
- Anxiety disorders and depression
- Drug addiction
- Inflammation and pain
- Sleep issues
- Neurodegenerative conditions
CBDA, meanwhile, seems to have much stronger effects on the serotonin receptor than CBD, which means it may be particularly helpful for neurologic issues linked to serotonin dysfunction, such as seizures.
Most CBD products lack CBDA because they’re decarboxylated (heated) during production, which converts it all into CBD. However, in raw cannabis plants, 95% of the CBD is present in the CBDA form.
CBCA & CBC
Cannabichromenic acid (CBCA) and cannabichromene (CBC) are another pair of non-intoxicating cannabinoids produced from CBGA.
They haven’t seen too much research, but CBC has been reported to relieve pain and inflammation, act as an antidepressant, and even support the functions of neural stem progenitor cells, which are crucial to healthy brain function and have real potential in the treatment of neurodegenerative diseases.
It may also have positive effects on seizures, Huntington’s, and Parkinson’s disease.
Minor cannabinoids get their name from their relatively small concentrations in cannabis plants and the fact that most of them are derived from major cannabinoids.
There are well over 100 minor cannabinoids in cannabis but we will look at the ones that have received the most attention.
Cannabinol (CBN) is a minor, non-intoxicating cannabinoid produced when THC is exposed to oxygen. That’s why CBN concentrations are low in raw cannabis but can reach significant levels in older plants.
Many people believe that CBN is helpful for sleep issues because aged cannabis has stronger sleep-inducing effects.
However, this belief is somewhat misguided. Rather than promoting sleep on its own, research indicates that CBN might enhance the sedating effects of THC.
Much like CBG, CBN is predicted to grow in popularity in the near future thanks to the overwhelming interest in CBD.
Tetrahydrocannabivarin (THCV) is an analog of THC, which means its structure is very similar. Despite this, THCV does not appear to share its cousin’s intoxicating effects.
This minor cannabinoid has been shown to have several beneficial properties. Most notably, it can reduce appetite and help regulate blood sugar and insulin sensitivity, making it a serious candidate for the treatment of obesity and diabetes.
Another study also found that THCV displayed the strongest anti-acne effects out of five studied minor cannabinoids.
Cannabicyclol (CBL) is a non-psychotropic cannabinoid produced when CBC degrades from sunlight exposure. Given that it's only been recently discovered, we don’t know too much about CBL’s effects.
Cannabidivarin (CBDV) is a non-intoxicating minor cannabinoid similar in structure to CBD. Much of the interest in this cannabinoid has revolved around its anti-epileptic effects.
The company GW Pharmaceuticals — which recently released the first CBD-only pharmaceutical drug, Epidiolex — is currently testing CBDV in clinical trials of epilepsy.
Cannabichromevarin (CBCV) is a minor, non-intoxicating cannabinoid first identified in 1975.
Like other cannabinoids ending in “V,” it’s an analog of its close cousin CBC, which means it has a similar structure but with a slight difference.
CBCV hasn’t seen much research so we don’t know too much about its effects.
Cannabigerol monomethyl ether (CBGM) is a minor cannabinoid related to CBG. Like other rare cannabinoids, we don’t know too much about its effects due to a lack of research.
Cannabielsoin (CBE) is a minor cannabinoid metabolite of CBD. So far, studies of mice, guinea pigs, and some other animals have shown that CBD can be metabolized (broken down) into CBE. Presumably, a similar process occurs in humans when we ingest CBD.
Aside from hypothesizing that CBE plays a role in the effects of CBD, there is not much additional research into CBE yet.
Cannabicitran (CBT) is a relatively rare minor cannabinoid. There haven’t been many studies looking at this cannabinoid, so we don’t know much about its health effects.
The cannabis plant contains dozens of phytocannabinoids. Not to mention, it contains an array of terpenes, flavonoids, and phytonutrients. These natural compounds have serious potential in helping with a wide variety of health conditions.
We already know a good deal about the benefits of CBD and THC, which are increasingly utilized by individuals and their doctors to relieve a wide range of conditions from chronic pain to treatment-resistant epilepsy.
But much remains to be discovered about the less-known minor cannabinoids, which represent the next great frontier in cannabis research. We hope our guide to cannabinoids helped shine a light on these minor cannabinoids. And we hope we'll have the ability to share more about them in the near future.
Frequently Asked Questions
Is CBD a cannabinoid?
Yes, cannabidiol (CBD) is one of the two most abundant cannabinoids in cannabis plants.
What do cannabinoids do to the body?
Cannabinoids produce a wide range of health effects by interacting with cannabinoid receptors and other bodily systems.
What are examples of cannabinoids?
THC and CBD are the two best examples of cannabinoids. However, the complete list includes hundreds of chemical compounds, including not only those from cannabis but also endocannabinoids made by the human body and even synthetic cannabinoids.
What type of drug is cannabinoids?
Cannabinoids get their name from their ability to interact with our body’s cannabinoid receptors. Some cannabinoids are psychoactive (substances that affect the brain) or psychotropic (substances that change your mental state, meaning how you perceive the world) drugs.
Morales, Paula, Dow P. Hurst, and Patricia H. Reggio. "Molecular targets of the phytocannabinoids: a complex picture." Phytocannabinoids. Springer, Cham, 2017. 103-131.
Brown, A. J. "Novel cannabinoid receptors." British journal of pharmacology 152.5 (2007): 567-575.
Leweke, F. M., et al. "Cannabidiol enhances anandamide signaling and alleviates psychotic symptoms of schizophrenia." Translational psychiatry 2.3 (2012): e94-e94.
Starkus, J., et al. "Diverse TRPV1 responses to cannabinoids." Channels 13.1 (2019): 172-191.
Xiong, Wei, et al. "Cannabinoids suppress inflammatory and neuropathic pain by targeting α3 glycine receptors." Journal of Experimental Medicine 209.6 (2012): 1121-1134.
Bakas, T., et al. "The direct actions of cannabidiol and 2-arachidonoyl glycerol at GABAA receptors." Pharmacological research 119 (2017): 358-370.
Booz, George W. "Cannabidiol as an emergent therapeutic strategy for lessening the impact of inflammation on oxidative stress." Free Radical Biology and Medicine 51.5 (2011): 1054-1061.
Müller-Vahl, Kirsten R., et al. "δ9-tetrahydrocannabinol (THC) is effective in the treatment of tics in Tourette syndrome: a 6-week randomized trial." The Journal of clinical psychiatry (2003).
Blessing, Esther M., et al. "Cannabidiol as a potential treatment for anxiety disorders." Neurotherapeutics 12.4 (2015): 825-836.
Russo, Ethan B. "The case for the entourage effect and conventional breeding of clinical cannabis: no “strain,” no gain." Frontiers in plant science 9 (2019): 1969.
Englund, Amir, et al. "The effect of five day dosing with THCV on THC-induced cognitive, psychological and physiological effects in healthy male human volunteers: a placebo-controlled, double-blind, crossover pilot trial." Journal of Psychopharmacology 30.2 (2016): 140-151.
Pamplona, Fabricio A., Lorenzo Rolim da Silva, and Ana Carolina Coan. "Potential clinical benefits of CBD-rich cannabis extracts over purified CBD in treatment-resistant epilepsy: observational data meta-analysis." Frontiers in neurology 9 (2018): 759.
Citti, Cinzia, et al. "A novel phytocannabinoid isolated from Cannabis sativa L. with an in vivo cannabimimetic activity higher than Δ 9-tetrahydrocannabinol: Δ 9-Tetrahydrocannabiphorol." Scientific reports 9.1 (2019): 1-13.
Appendino, Giovanni, et al. "Antibacterial cannabinoids from Cannabis sativa: a structure− activity study." Journal of natural products 71.8 (2008): 1427-1430.
Granja, Aitor G., et al. "A cannabigerol quinone alleviates neuroinflammation in a chronic model of multiple sclerosis." Journal of Neuroimmune Pharmacology 7.4 (2012): 1002-1016.
Borrelli, Francesca, et al. "Beneficial effect of the non-psychotropic plant cannabinoid cannabigerol on experimental inflammatory bowel disease." Biochemical pharmacology 85.9 (2013): 1306-1316.
Valdeolivas, Sara, et al. "Neuroprotective properties of cannabigerol in Huntington’s disease: studies in R6/2 mice and 3-nitropropionate-lesioned mice." Neurotherapeutics 12.1 (2015): 185-199.
Borrelli, Francesca, et al. "Colon carcinogenesis is inhibited by the TRPM8 antagonist cannabigerol, a Cannabis-derived non-psychotropic cannabinoid." Carcinogenesis 35.12 (2014): 2787-2797.
Brierley, Daniel I., et al. "A cannabigerol-rich Cannabis sativa extract, devoid of [INCREMENT] 9-tetrahydrocannabinol, elicits hyperphagia in rats." Behavioural pharmacology 28.4 (2017): 280-284.
Ruhaak, Lucia Renee, et al. "Evaluation of the cyclooxygenase inhibiting effects of six major cannabinoids isolated from Cannabis sativa." Biological and Pharmaceutical Bulletin 34.5 (2011): 774-778.
Moldzio, Rudolf, et al. "Effects of cannabinoids Δ (9)-tetrahydrocannabinol, Δ (9)-tetrahydrocannabinolic acid and cannabidiol in MPP+ affected murine mesencephalic cultures." Phytomedicine 19.8-9 (2012): 819-824.
Palomares, Belén, et al. "Tetrahydrocannabinolic acid A (THCA-A) reduces adiposity and prevents metabolic disease caused by diet-induced obesity." Biochemical Pharmacology 171 (2020): 113693.
De Petrocellis, Luciano, et al. "Non?THC cannabinoids inhibit prostate carcinoma growth in vitro and in vivo: pro?apoptotic effects and underlying mechanisms." British journal of pharmacology 168.1 (2013): 79-102.
Mechoulam, Raphael, et al. "Cannabidiol–recent advances." Chemistry & biodiversity 4.8 (2007): 1678-1692.
Hen-Shoval, D., et al. "Acute oral cannabidiolic acid methyl ester reduces depression-like behavior in two genetic animal models of depression." Behavioural brain research 351 (2018): 1-3.
Takeda, Shuso, et al. "Cannabidiolic acid as a selective cyclooxygenase-2 inhibitory component in cannabis." Drug metabolism and disposition 36.9 (2008): 1917-1921.
Bolognini, D., et al. "Cannabidiolic acid prevents vomiting in S uncus murinus and nausea?induced behaviour in rats by enhancing 5?HT1A receptor activation." British journal of pharmacology 168.6 (2013): 1456-1470.
Russo, Ethan B. "Cannabis therapeutics and the future of neurology." Frontiers in integrative neuroscience 12 (2018): 51.
Maione, Sabatino, et al. "Non?psychoactive cannabinoids modulate the descending pathway of antinociception in anaesthetized rats through several mechanisms of action." British journal of pharmacology 162.3 (2011): 584-596.
Izzo, Angelo A., et al. "Inhibitory effect of cannabichromene, a major non?psychotropic cannabinoid extracted from Cannabis sativa, on inflammation?induced hypermotility in mice." British journal of pharmacology 166.4 (2012): 1444-1460.
El-Alfy, Abir T., et al. "Antidepressant-like effect of Δ9-tetrahydrocannabinol and other cannabinoids isolated from Cannabis sativa L." Pharmacology Biochemistry and Behavior 95.4 (2010): 434-442.
Shinjyo, Noriko, and Vincenzo Di Marzo. "The effect of cannabichromene on adult neural stem/progenitor cells." Neurochemistry international 63.5 (2013): 432-437.
Stone, Nicole L., et al. "A Systematic Review of Minor Phytocannabinoids with Promising Neuroprotective Potential." British Journal of Pharmacology (2020).
Izzo, Angelo A., et al. "Non-psychotropic plant cannabinoids: new therapeutic opportunities from an ancient herb." Trends in pharmacological sciences 30.10 (2009): 515-527.
Karniol, Isac G., et al. "Effects of Δ9-tetrahydrocannabinol and cannabinol in man." Pharmacology 13.6 (1975): 502-512.
Zurier, Robert B., and Sumner H. Burstein. "Cannabinoids, inflammation, and fibrosis." The FASEB Journal 30.11 (2016): 3682-3689.
Weydt, Patrick, et al. "Cannabinol delays symptom onset in SOD1 (G93A) transgenic mice without affecting survival." Amyotrophic Lateral Sclerosis 6.3 (2005): 182-184.
Farrimond, Jonathan A., Benjamin J. Whalley, and Claire M. Williams. "Cannabinol and cannabidiol exert opposing effects on rat feeding patterns." Psychopharmacology 223.1 (2012): 117-129.
Wong, Hayes, and Brian E. Cairns. "Cannabidiol, cannabinol and their combinations act as peripheral analgesics in a rat model of myofascial pain." Archives of oral biology 104 (2019): 33-39.
Abioye, Amos, et al. "Δ9-Tetrahydrocannabivarin (THCV): a commentary on potential therapeutic benefit for the management of obesity and diabetes." Journal of Cannabis Research 2.1 (2020): 1-6.
Garcia, C., et al. "Symptom?relieving and neuroprotective effects of the phytocannabinoid Δ9?THCV in animal models of Parkinson's disease." British journal of pharmacology 163.7 (2011): 1495-1506.
Scutt, A., and E. M. Williamson. "Cannabinoids stimulate fibroblastic colony formation by bone marrow cells indirectly via CB 2 receptors." Calcified Tissue International 80.1 (2007): 50-59.
Oláh, Attila, et al. "Differential effectiveness of selected non?psychotropic phytocannabinoids on human sebocyte functions implicates their introduction in dry/seborrhoeic skin and acne treatment." Experimental dermatology 25.9 (2016): 701-707.
Vigli, Daniele, et al. "Chronic treatment with the phytocannabinoid Cannabidivarin (CBDV) rescues behavioural alterations and brain atrophy in a mouse model of Rett syndrome." Neuropharmacology 140 (2018): 121-129.
Iannotti, Fabio Arturo, et al. "Effects of non?euphoric plant cannabinoids on muscle quality and performance of dystrophic mdx mice." British Journal of Pharmacology 176.10 (2019): 1568-1584.
Rock, Erin M., et al. "Evaluation of the potential of the phytocannabinoids, cannabidivarin (CBDV) and Δ9?tetrahydrocannabivarin (THCV), to produce CB1 receptor inverse agonism symptoms of nausea in rats." British journal of pharmacology 170.3 (2013): 671-678.
Hollander, Eric. Cannabidivarin (CBDV) Versus Placebo in Children with Autism Spectrum Disorder (ASD). Albert Eintsein College of Medicine, Inc. Bronx United States, 2018.
YAMAMOTO, Ikuo, et al. "Identification of cannabielsoin, a new metabolite of cannabidiol formed by guinea-pig hepatic microsomal enzymes, and its pharmacological activity in mice." Journal of pharmacobio-dynamics 11.12 (1988): 833-838. | <urn:uuid:75a0cedf-6837-4c04-aad5-30252a8a7c6c> | CC-MAIN-2021-21 | https://cbdnerds.com/blog/guide-to-cannabinoids | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988741.20/warc/CC-MAIN-20210506053729-20210506083729-00136.warc.gz | en | 0.893634 | 5,487 | 2.859375 | 3 |
A review of application of nanotechnology within current dental practice and related research is outlined and with reference to the scale of objects encompassed within nanotechnology. While main application areas relate to nanoparticle composition within restorative composite materials, reference is also made to applications within preventive dentistry. Attention is also given to safety factors in all aspects of development and utilisation of nanotechnology.
Nanotechnology is an expression which has several interpretations and connotations. The potential of assembling atoms at the smallest scale of fabrication of matter was first popularly described by Richard Feynman around 1959 in his lecture on the theme of ‘There’s plenty of room at the bottom’. A keynote of this line of thinking was the potential to assemble physical structures at the scale of the atomic level. Eric Drexler 1 in 1986 proposed the possibility of construction by suitable ‘engines’ at the atomic scale. So far this level of massively parallel fabrication at the atomic scale has not been realised.
It was not, however, until the development of devices such as the scanning tunnelling microscope around 1981 that the physical manipulation of atoms could be demonstrated in the laboratory where the physical positions of atoms could be detected by means of electron current conducted from a very small tip onto the sampled surface. The historical provenance of Richard Feynman who had been awarded the Nobel prize for Physics in 1965 was then actively linked with the dramatic opportunities demonstrated by such an imaging technology. This allowed the firm establishment of the branch of science known as nanotechnology. Figure 1 indicates the groundbreaking image of Xenon atoms arranged on Nickel which first demonstrated in 1990 that atoms could be manipulated with precision at the atomic scale. Figure 2 indicates the various stages of creation of the ‘circular corral’ of iron atoms on copper.
Figure 1: IBM logo constructed from Xenon atoms on Nickel atoms using scanning tunnelling microscope at IBM’s Almaden Research Centre (originally created by IBM Corporation).
Figure 2: The Making of the Circular Corral: Iron atoms on Copper atoms imaged using a scanning tunnelling microscope at IBM’s Almaden Research Center. The diameter of the corral is 14.2 nm and comprises 48 atoms (originally created by IBM Corporation).
Today nanotechnology is generally accepted as the branch of science that deals with things between 0.1 nm (10-10 metres) than 100 nanometers (10-7 metres) and includes the manipulation of individual molecules. Individual atoms have sizes in the range 0.1 nm to 0.3 nm. The size of molecules is essentially determined by the summed length of component bonds, so that a relatively simple molecule of 1. 2,4,6-trinitro-toluene (TNT) has a total length of around 1 nm. A useful resource to identify range of size of objects from nanotechnology upwards is provided by the National Nanotechnology Initiative (http://www.nano.gov/html/facts/The_scale_of_things.html).
A level of anticipation of the potential of nanotechnology to contribute significantly to routine dental practice has previously been indicated 2. Since this was published there have been developments across a wide range of fields within nanotechnology. There has progressively emerged, however from 2000 onwards, a greater awareness of the potential hazards of nanotechnology and which is reflected in a general expression of caution with regard to human and environmental exposure for processes of manufacture, research and incorporation of nanomaterials into industrial and consumer products.
Basic Types of Nano Structures
Structures based on carbon feature prominently in research within nanotechnology. Carbon nanotubes can be either single walled or multi-walled where each ‘wall’ is fabricated from a one atom thick of carbon. Carbon nanotubes can be several millimetres long and as small as 0.7 nm. Carbon nanotubes have great potential to produce ultra strong but light weight structures. Fullerenes represent structures where carbon atoms form symmetrical structures involving from 28 atoms to over 100 atoms. In terms of shape they resemble that of a football. The best known form is that of C60 which incorporates 60 carbon atoms. While these structures have been actively investigated for a range of clinical applications, none would appear to be current within routine dentistry, though there are generally identified applications in cancer therapy 3. Graphene can be thought of as flat sheets of single atomic thickness carbon atoms which are packed densely in a honeycomb crystal lattice.
Dendrimers represent a new means of creating three dimensional macromolecules where there is a tree like branching of symmetry in the arrangement of atoms within the molecule. Applications of dendrimers are mainly related to targeted drug delivery. Nanoshell entities are nanoparticles which comprise a dielectric core (such as silica) and which are covered in a thin metallic shell – typically gold. Nanoshells have been developed as a cancer therapy 4 because after their introduction into tumour cells they can be made to absorb infra red laser radiation to result in temperature elevation within the cell leading to its death. Polymeric micelles are molecular ‘vehicles’ used to deliver specific pharmacological agents which in themselves may be poorly absorbed within the body and are widely used in a range of anti-cancer formulations 5.
Applications within Dentistry
Conventional methods of manufacturing of filler materials have used materials such as quartz, melt glasses or ceramics with mechanical techniques to pulverise to powder, though this tends not to reduce particle sizes below 100 nm. ‘Hybrid’ conventional filler particles are typically 1000 nm in size and with ‘micro hybrids’ having a slightly smaller average diameter. Processes have been identified for the manufacture of sub 100 nm nanofiller materials using radically different techniques which bring new characteristics to the performance of restorative materials containing them. The incorporation of larger filler particles in the ‘hybrid’ type filler materials provides for greater mechanical strength compared to that of a material containing only nanofiller materials. The surface profile of a hybrid type composite will tend to demonstrate reduced gloss with mechanical wear – where the wear surface becomes pitted with gaps from missing filler particles and scatters light more strongly. The much smaller size of nanofiller composite will develop an exposed surface which scatters light to a smaller extend and which provides for an improved gloss performance. Even for nanofiller particles of diameter 10 nm, however, these can be estimated to contain of the order of a million individual atoms.
The compound Filtek Supreme XT compound utilises discrete non agglomerated aggregated nanofiller particles in range 20 to 75 nm and also nanoclusters which are loosely bound agglomerates of nano sized particles. The agglomerates allow for a high level of filler loading which provides high bond strength. Figure 3 outlines the characteristics of the wear surface of a ‘Hybrid’ composite and of Filtek Supreme restorative material where additional mechanical strength is provided by the nanoclusters but the small size of component nanofiller material gives improved gloss performance. Mitra and Holmes 6 compare properties of mechanical strength, wear resistance and gloss retention of Filtek Supreme Translucent and Filtek Supreme Standard with five other restorative materials (Filtek 250; TPN Spectrum; Point4; EsthetX; and Filtek A110). Filtek Supreme Standard was found to have best wear resistance and best gloss retention up to 300 brush cycles and mechanical strength characteristics among the top three of the group tested.
Figure 3: Comparison of surface characteristics of ‘Hybrid’ composite and Filtek Supreme restorative showing how surface roughness influences degree of light scattering and associated gloss performance (not to scale).
A specific study has been undertaken by Ernst et al 7 in which comparison was made of the effectiveness of nanofiller resin composite Filtek Supreme (3M ESPE) with conventional fine hybrid resin composite Tetric Ceram (Ivoclar Vivadent) in stress-bearing posterior cavities. A study group of 50 patients received at least one of both types of restorative material. No statistically significant differences were found between the two restorative materials after two years. A more detailed review of nanotechnology developments within composite materials is provided by Saunders 8.
Fioretti et al 9 describe the possibility of using a nano-sized film containing alpha melanocyte stimulating hormone (alpha-MSH) to use its anti-inflammatory properties to fight inflammation in dental pulp fibroblasts. Laboratory experiments indicate the ability of growth of human pulp fibroblasts with such nanolayers This is a component of regenerative endodontics which advocates a broad range of techniques for restoration of tooth viability in which diseased or necrotic pulp tissues are removed and replaced with healthy pulp tissue to revitalize teeth. A broader review of regenerative endodontics is provided by Murray 10.
Caries Inhibition in Fillings
The addition of compounds such as dicalcium phosphate anhydrous, or DCPA to the resin mix in a composite filing is conventionally undertaken to provide a steady release of calcium and phosphate ions to inhibit caries development and strengthen tooth metabolism. The inclusion of such materials, however, can structurally weaken the filling as a whole. Investigators 11,12 have described how nano sized particles of DCPA and similar agents are more effective as ion release agents and that the reduced levels of DCPA lead generally to higher bond strength.
Tooth Surface Conditioning
The potential advantageous use of silica nanoparticles to polish tooth surfaces has been identified by Gaikwad and Sokolov 13 where areas of teeth polished with silica nanoparticles demonstrated reduced adherence of cariogenic bacteria such as Streptococcus mutans.
It is widely accepted that the reduction of bacterial adherence and biofilm formation on tooth surfaces has the potential to reduce the incidence of caries and periodontitis. One group 14 has identified that coating tooth surfaces with a commercially available organic/inorganic nano-composite coating significantly reduced the surface loading of both enamel and titanium samples which were exposed over a 24 hour period within a volunteer group. It is identified that further work in this area is required to determine the effectiveness and intrinsic safety of such nano-composite coatings over clinically relevant time periods. A recent review of the role of nanotechnology in preventive dentistry is outlined by Hannig 15.
Tooth Surface Characteristics: Nanoindentation
The technique of nanoindentation 16 allows characterisation of material surfaces by determining force/displacement curve as force is applied to an indentation probe on the surface of the test material. He and Swain 17 describe the use of a nanoindenter to characterise the response of tooth enamel where nanoindentor tip diameters as small as 5 microns are used and where the penetration depth can be determined with high sub nm accuracy. It was identified that the surface of enamel produces an inelastic response which is associated with a very small protein rich component which exists between the hydroxyapatite nanocrystals and also within the protective structure surrounding the enamel rods.
Developing Safety Frameworks
The extensive report from Health and Safety Executive in 2004 18 made a range of key statements regarding risks and safety assessments of nanoparticles which highlighted major shortcomings within the then current procedures. The risk associated with nanotechnology is identified as two distinct types. The current focus is on the effect of uptake of nanomaterials within biological structures and environmental systems. The other and more perplexing risk is the potential for nano structures in the form of assemblers/dissasemblers at the atomic/molecular level to modify the atomic/molecular structure of tissue. It is this scenario of the potential of nanostructures to reduce living tissue to ‘grey goo’ which was initially described by Drexler 1 . Currently, nanotechnology does not have the potential to create such assemblers/dissasemblers and such a scenario is generally accepted as exceedingly unlikely.
A more recent report 19 confirms anxiety about the range of knowledge of occupational health and safety aspects related to synthetic nanoparticles. There appears to be an indication that the smaller particles of nanoparticles may demonstrate more toxic effects than material present as particles of larger dimensions, but same material. The report identifies a specific concern with carbon nanotubes which appears to show toxicity in animal studies 20 similar to that of asbestos and is causing significant concern in the international scientific community.
Titanium dioxide is a commonly used material within a broad range of cosmetic and household products, including toothpaste. Investigators 21 have demonstrated that at a body loading of 5 g/Kg in mice, nanoparticles of typical size 25 nm and 80 nm could be transported to the liver, spleen, kidneys, and lung tissues after uptake by the gastrointestinal tract. In addition, at the specific dose levels administered, tissue damage was observed in the liver and kidney. Specific evidence of genetic damage in mice with exposure of Titanium dioxide nanoparticles has been identified 22. It has been estimated 23 that the fraction of production of Titanium dioxide in the USA as nanoparticles is progressively increasing from a low base level but will be almost totally in the nanoparticle formulation by 2026.
Particles of silver at the nano scale or ‘nanosilver’ are being incorporated into a range of consumer products such as soap, shampoo and toothpaste to exploit their germ killing properties. A review by Tolemat et al 24 has identified that most production processes produce spherical silver nanoparticles with diameters of less than 20 nm. It is considered important, however, to consider separately the relative safety of colloid silver particles and inonic silver particles. Chi et al. 25 have indicated that while the genotoxicity of nanoAg is weak, obvious genotoxicity can be demonstrated with the agent cetlpyridine bromide (CPB) which is a compound found in some mouthwashes, toothpastes and lozenges. The implication of the study is that nanoAg and CPB are present in combination there is the potential for genotoxic activity. Techniques for evaluation of exposure of nanoparticles from consumer products including toothpaste are described by Benn et al. 26.
It is important, however, to clearly understand the scope of research which may prove some formulations a potential hazard while others are shown to be low risk. In the context of carbon nanotubes, for example, while longer length carbon nanotubes are considered to present a potential hazard, short length tubes may present a significantly lower level of hazard.
Utilisation of nanotechnology within dentistry is largely in evidence through the use of smaller filler particle sizes within restorative composites, though research has also highlighted a range of options to implement nanotechnology within a range of preventive measures. It is relevant to comment that clinical papers referencing applications of nanotechnology infrequently reference safety issues which appear to be taken more seriously by the wider scientific nanotechnology community.
1 K. Eric Drexler Engines of Creation : The Coming Era of Nanotechnology, Anchor Books, 1986
2 Ure D, Harris J. Nanotechnology in dentistry: reduction to practice. Dent Update 2003; 30:10-5.
3 Partha R and Conyers JL. Biomedical applications of functionalized fullerene-based nanomaterials, Int J Nanomedicine 2009; 4: 261–275.
4 Mody VV, Siwale R, Singh A, Mody HR. Introduction to metallic nanoparticles. J Pharm Bioallied Sci. 2010;2:282-289.
5 Benny O, Fainaru O, Adini et al. An orally delivered small-molecule formulation with antiangiogenic and anticancer activity. Nat Biotechnol. 2008;26:799-807.
6 Mitra SB, Wu D, Holmes BN. An application of nanotechnology in advanced dental materials.J Am Dent Assoc. 2003;134:1382-1390.
7 Ernst CP, Brandenbusch M, Meyer G, Canbek K, Gottschalk F, Willershausen B. Two-year clinical performance of a nanofiller vs a fine-particle hybrid resin composite, Clin Oral Investig. 2006;10:119-125.
8 Saunders SA. Current practicality of nanotechnology in dentistry. Part 1. Focus on nanocomposite restoratives and biomimetics, Clinical Cosmetic and Investigational Dentistry, 2009
9 Fioretti F, Mendoza-Palomares C, Helms M et al. Nanostructured assemblies for dental application. ACS Nano. 2010;22;4:3277-3287.
10 Murray PE, Garcia-Godoy F, Hargreaves KM. Regenerative endodontics: a review of current status and a call for action. J Endod. 2007;33:377-390.
11 Xu HH, Weir MD, Sun L, Ngai S, Takagi S, Chow LC. Effect of filler level and particle size on dental caries-inhibiting Ca-PO(4) composite. J Mater Sci Mater Med. 2009;20:1771-1779.
12 Xu HH, Weir MD, Sun L, Moreau JL, Takagi S, Chow LC et al. Strong nanocomposites with Ca, PO(4), and F release for caries inhibition. J Dent Res. 2010;89;19-28.
13 Gaikwad RM, Sokolov I. Silica nanoparticles to polish tooth surfaces for caries prevention. J Dent Res. 2008;87:980-983.
14 Hannig, M.Kriener, L, Hoth-Hannig W, Becker-Willinger C, Schmidt H, Influence of Nanocomposite Surface Coating on Biofilm Formation In Situ,Journal of Nanoscience and Nanotechnology, 2007;7: 4642-4648.
15 Hannig M and Hannig C, Nanomaterials in preventive dentistry, Nature Nanotechnology, 2010;5:565-569.
16 Fisher-Cripps Laboratories Pty. Ltd,The IBIS Handbook of Nanoindentation, Australia, 2009: http://www.ibisonline.com.au/downloads/downloads/IBIS_HandbookOfNanoindentationBook.pdf
17 He LH, Swain MV. Energy absorption characterization of human enamel using nanoindentation. J Biomed Mater Res A. 2007;81:484-492.
18 Health and Safety Executive, Nanoparticles: (2004) An occupational hygiene review, prepared by the Institute of Occupational Medicine.
19 IRSST publications,Engineered Nanoparticles: Current Knowledge about Occupational Health and Safety Risks and Prevention Measures, Report R-656 – Second Edition, Quebec, 2010
20 Craig A, Poland CA, Duffin R et al., Carbon nanotubes introduced into the abdominal cavity of mice show asbestos-like pathogenicity in a pilot study, Nature Nanotechnology.2008;3:423 – 428.
21 Wang J, Zhou G, Chen C et al. Acute toxicity and biodistribution of different sized titanium dioxide particles in mice after oral administration. Toxicol Lett. 2007;168:176-185.
22 Trouiller B, Reliene R, Westbrook A, Solaimani P, Schiestl RH. Titanium dioxide nanoparticles induce DNA damage and genetic instability in vivo in mice. Cancer Res. 2009;15:69(22):8784-8789.
23 Robichaud CO, Uyar AE, Darby MR, Zucker L, Wiesner MR, Estimates of Upper Bounds and Trends in Nano-TiO2 Production As a Basis for Exposure Assessment, Environ. Sci. Technol. 2009;43:4227–4233.
24 Tolaymat TM, El Badawy AM, Genaidy A, Scheckel KG, Luxton TP, Suidan M. An evidence-based environmental perspective of manufactured silver nanoparticle in syntheses and applications: a systematic review and critical appraisal of peer-reviewed scientific papers. Sci Total Environ. 2010;408:999-1006.
25 Chi Z, Liu R, Zhao L, Qin P, Pan X, Sun F et al. A new strategy to probe the genotoxicity of silver nanoparticles combined with cetylpyridine bromide. Spectrochim Acta A Mol Biomol Spectrosc. 2009;72:577-581.
26 Benn T, Cavanagh B, Hristovski K, Posner JD, Westerhoff P. The release of nanosilver from consumer products used in the home. J Environ Qual. 2010;39:1875-1882. | <urn:uuid:0f53841c-873a-4da3-81fb-fe64feb63c5c> | CC-MAIN-2021-21 | https://www.ivoryresearch.com/samples/nanotechnology-in-dentistry-an-update/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00457.warc.gz | en | 0.902671 | 4,316 | 2.9375 | 3 |
Your nervous system is as complex a physical object as there is in the universe, so far as we know: 12 billion cells, each of them a complex structure with up to sixty thousand synaptic points of connection with other cells. It is also the one piece of physical real estate of which you have an inside view, so to speak, since the events of your inner life, and the experiences through which you learn about the external world, are all immediate manifestations of what is going on in there. Since you can also study your central nervous system by external observation and experiment as you study other physical systems – by exposing its outer edges, such as the retina, to bombardment by suitably produced and therefore informative physical impulses – there arises a problem about how to bring these two views of yourself together.
The problem is as far from solution today as it was when Descartes tried to prove the distinction between mind and body 350 years ago – in spite of a widespread sense, fostered by the popular culture of science, that with the aid of computer analogies and advances in molecular biology we are on the verge of a breakthrough. The more we learn about the brain the clearer it is how little we understand its embodiment of the mind.
Here is a typical passage from Philosophy and the Brain by the eminent neurophysiologist J.Z. Young:
The pressure waves falling upon the ear from the sound of ‘Hullo’ are first transferred by the eardrum, then by the chain of three ossicles, next by fluid in the cochlear chamber and so into a travelling wave on the basilar membrane. From here they activate the hair cells to modify the trains of nerve impulses in the auditory nerve. These are then in turn re-coded several times before arriving at the auditory cortex. Here there are cells that respond only to certain patterns of these already much transformed versions of the original air waves. This is by no means the end of the process. The cortical cells continually exchange signals among themselves, which represent the patterns that have been learned in the past. Modification of the action of some of the cells by the incoming signals from the ear will produce appropriate outputs towards motor centres. The first response to the sound ‘Hullo’ may be a sharpening of attention and then, if it is repeated, perhaps a pattern of motor activity by the larynx and muscles of the throat that sends the response ‘Yes – is that you, James?’
There are evidently some gaps in the physiological story, but they get filled in with nothing more than the psychology of the man in the street dressed up as remarks about the brain.
The truth is that while a good deal is known about the outer boundaries of the nervous system – the initial effects of sensory stimulation and the last stages in the initiation of muscle contractions – we are largely ignorant about the central processes essential for hearing and seeing, let alone recognising the voice of your obnoxious colleague James who has to say ‘Hullo’ twice before you will answer him.
This darkness at the centre is emphasised by Young again and again, though there are interesting fragments of information: at least six ‘maps’ of the visual field in the cortex, in the form of patterns of cells activated by the irradiation of particular points on the retina; cells that respond to particular features of the visual stimulus, such as the directional orientation of a light-dark boundary; cells that are activated specifically by the look of human faces, and not by other types of patterns. A good deal has been learned about how individual neurons operate, how they interact with others in their neighbourhood, and how electrical nerve impulses are propagated. And a certain amount of crude geography has been inferred from the deficits produced by injury: lose Broca’s area and you can’t produce coherent speech; lose Wernicke’s area and you can’t understand language, though you can produce words; lose the hippocampus and you can’t form any new memories; lose the secondary visual association area and you can’t recognise what you are looking at, though you can see the shapes. But the understanding provided by such information is so far very limited. It is like trying to understand how the US political system works by locating the public buildings on a map of Washington DC.
Psychological theories formulated in terms of mental operations – theories of perception, learning, memory, motivation – do seek and often provide an understanding of how the mind works which transcends common sense and introspection. Some of those theories are represented in Young’s book, but though they are embellished with references to programmes in the cortex or activity in the basal forebrain areas, they are no more about the brain (and no less) than is my belief that Picasso had a remarkable imagination. The capacities described by those theories, or by that belief, are somehow or other due to the activity of the billions of cells beneath the skull or of particular large subsets of those cells, but we know remarkably little beyond that.
Young’s book is a clear, concise guide for the layman to what we do know, and he is scrupulous in pointing out the speculative character of any contemporary theory of the physical basis of central mental processes. The factual material is often fascinating, though I was disturbed by the statement, in a discussion of Sperry’s well-known split-brain cases, that the left eye communicates only with the speechless right cerebral hemisphere. In fact, it is the right half of each retina – which scans the left half of the visual field – that communicates with the right hemisphere, while information from the right half of the visual field goes to the left hemisphere. I have to assume that Young thought his simplification of the anatomical facts would be easier on the reader, but it makes me wonder whether other inaccuracies were introduced with a similar purpose – particularly since he seems concerned throughout to keep things very simple.
That aside, the book provides easy access to much information about current brain research that non-specialists should want to know, and bibliographical reference to more detailed treatments. But it is also presented, however diffidently, as a contribution to philosophy, and in this respect I am afraid it is not a success. Young believes that the information he has to offer about the brain can be useful in dealing with some of the big philosophical questions about knowledge, meaning, value and free will that have been around for a long time, but the trouble is that he has a tin ear for philosophy. He thinks information about the brain will help answer these questions because he doesn’t understand them well enough to see how easily they can be re-introduced once the facts have been heard.
It seems ungrateful to reject an offering of this kind from a scientist of Young’s distinction – made, I might add, with extraordinary humility and lack of pretension. But some outstanding intellects are simply not susceptible to philosophical problems, and apparently Young is one of them. He has sampled a number of prominent contemporary English and American philosophers and dutifully quotes Hare, Davidson, Mackie, Dennett, Ayer and so forth, as well as other, more traditional figures, in order to provide a context for the scientific information. But his comments on what he quotes, and his attempts to establish the philosophical relevance of facts about the brain, usually reveal that he doesn’t really know what is bothering these people. Evidence that the brain is active rather than merely passive in perception is hardly surprising, and does nothing to answer philosophical questions about the justification of belief. Information about punishment and reward centres in the brain or about homeostatic mechanisms that control body temperature, hunger and thirst is of no help with questions about the objectivity of value, nor does the suggestion that the basal forebrain areas are involved in learning social norms shed any light on ethics or political theory.
To take just one example, Young says:
For Descartes’s Cogito, ergo sum – ‘I think therefore I am’ – we might substitute the more biological proposition: ‘I am certain that I am alive.’ This has many advantages as a starting-point.
Well, it has the advantage of starting with the assumption that I am a biological organism, so that we can dispense with the need for asking how I know that. But Descartes, in his intellectual journey of relentless pre-emptive doubt, was trying not to avail himself of any assumptions about his nature or the world that depended on experiential evidence until he had established that such evidence could be relied on. That he was a biological organism was precisely the kind of thing he could doubt till then, in a way in which he couldn’t doubt that he was thinking. That is why he thought the existence of his mind was more certain to him than the existence of his body, and why he thought adequate new foundations for all other knowledge would have to rest on the inner knowledge of his own mind, through which all his other beliefs were formed.
This is the kind of thinking that makes someone a philosopher and not a biologist. The empirically-based findings of biology are logically incapable of supplying the foundations of empirical knowledge. If I had to give a general characterisation of philosophy I should say it was the examination of whatever is so basic that we must simply take it for granted in almost every aspect of life in order to function at all – whether we are merely living, talking, perceiving and acting, or are engaged in sophisticated scientific inquiry. An ordinary citizen or a research scientist can’t constantly be asking himself: ‘What is a number?’ ‘What is thought?’ ‘What makes my words mean anything?’ ‘How do I know that my experiences provide any evidence whatever about a world outside my own mind?’ ‘Does anything have any value at all?’
It is, in fact, a mark of the philosophical nature of these questions that you can go through life without thinking about them: it shows how fundamental they are. What is examined and called into question by philosophy is simply used in ordinary life. This makes philosophy a peculiar activity: when you try to subject to critical examination your most basic forms of thought and grounds of action, there is very little left that you can use in conducting the investigation. All your accustomed tools and methods are under the microscope, and you may have to use some of them anyway.
It is usually fruitless to try to answer the most basic philosophical questions by referring to the results of an empirical science – either because these results are based on methods and concepts and forms of evidence that are themselves the objects of those questions, or because the questions remain even if the results are accepted, as with sociobiology and ethics. The exception is a scientific result that is itself based on the philosophical revision of a basic concept – as Einstein’s theory of relativity was based on philosophical reflections about time. But physiological psychology has not reached that stage, and the information Young gives us about the location of centres associated with the emotions, the structure of pain receptors, short and long-term memory, thirst centres and anger centres and the information carried by DNA simply fails to make contact with the central issues of ethics, epistemology and metaphysics.
It may not always be so, but something radical will have to happen first. Consider the central metaphysical example. We know that the true physical story about the brain is unbelievably complicated. But we have, I would maintain, not the faintest idea of how such a physical story, even if we could get it and hold it in our heads, might constitute a full explanation of the basis of our mental lives. This is because practically no progress has been made on the fundamental problem of the relation between the view from outside and the view from inside, which arises in this area of science and no other.
All other scientific questions, however basic, approach things from outside, through sensory perception and experiment: they concern the external world and the relations between different levels of perception or description of it. This applies to biology and neurophysiology as well as to physics and chemistry. Psychology is the only exception, although plenty of philosophers and psychologists have tried to physicalise or externalise the subject-matter of psychology by various forms of behaviourism – some quite open and some, like the currently fashionable functionalism, lightly concealed. (Young, to his credit, has none of these flattening reductionist tendencies. While he is firmly opposed to a dualism of body and soul, he leaves open the question of the logical relation between mental events and the physical events invariably associated with them.)
The first question for any scientific theory is ‘What is it that has to be explained?’ And while externally-observable behaviour is part of what a theory of the mind has to explain, it is not all of it – in fact, the only physical manifestations that are of psychological interest are those that have something mental behind them. The subject-matter of psychology comprises the lives of beings that not only can be observed from outside but also have a point of view of their own. And while we can imagine, even if we do not possess a neurophysiological theory of the basis of their externally-observable physical behaviour – the model being that of enormously complex processes analysed into their minute physical parts – we have no comparable conception of what a purely physiological theory of the inner life could be, since it would have to anatomise the inside view in terms of what can be observed from the outside, and we have no models for that. We are still stuck, in other words, with the philosophical mind-body problem. It may eventually be transformed by work in physiological psychology, but only if it is acknowledged and addressed directly.
Another of the big old problems which might seem to be susceptible to illumination from facts about the brain is the problem of free will. But Young seems to think that this is merely the problem of describing how choices are made.
The act of making a choice is the product of the whole person, including his brain. We may follow the processes that might be involved in a trivial case: ‘I feel restless’ (the reticular system is in action); ‘I want a drink’ (the hypothalamus is signalling thirst and some centres conditioned by alcohol are at work); ‘Let’s see what he has in the cupboard’ (scanning activities by the eyes); ‘Ah, here are some bottles and glasses’ (comparison of visual input with stored representation); ‘I’d rather have gin than whisky’ (selection according to stored system of values).
Using these techniques, it would be easy to rewrite ‘Goldilocks and the Three Bears’ in neurophysiological terms. But the problem of free will is not addressed by descriptions of the process of choice: it can be raised against the background of any such description, since it is the problem of whether our natural sense of responsibility for what we do knowingly is intelligible, whatever naturalistic account of the process can be given.
By ingenious experimental techniques Benjamin Libet has shown that when a person decides to make a small physical movement of his finger at an arbitrary time of his choice, a characteristic electrically-detectable change in his brain, called a readiness potential, occurs slightly less than half a second before he becomes aware of his intention to move (and of course still longer before he actually moves the finger.) The brain appears to have made the choice before the person is aware of it. A philosopher to whom I described this experiment said wryly that the implication was clear: ‘Our brains have free will but we don’t.’ Young cites the result to show that it is futile to think of oneself as distinct from one’s brain. But the question remains, whether the initiation of decision prior to conscious awareness of it should undermine the belief that we are responsible for the decision (after all, no one would maintain that we are responsible for everything our body does – or even for everything our brain does). The question can be answered only through an analysis of what our natural sense of responsibility for actions amounts to. But an experiment like this seems to raise the disquieting possibility that what we take to be free actions are just things that happen to us, and that our conscious sense of choice is an illusion of control after the fact.
Perhaps this conclusion can be resisted on the grounds that consciousness from the start is not necessary for free choice. But behind the question whether the sense of freedom is an illusion there lurks a larger question. Even if we don’t have it, what would real control, or freedom, or responsibility be? Is there any naturalistic account of the generation of choice and action by the brain that would allow us to avoid the conclusion that the acts of which, as we do them, we take ourselves to be the originators are really, in the final analysis, things that merely happen to us – that we are so to speak carried along by our brains and by the world of which they are a part?
This is another of those mind-bendingly basic philosophical questions, and it is the subject of Galen Strawson’s Freedom and Belief, an often interesting and very involved exploration of the positive content and internal incoherence of our natural sense of ourselves (from the inside) as free and responsible agents. No one could accuse Strawson of insensitivity to philosophical problems: he has the essential capacity to be mystified by the utterly familiar, and to explain clearly both what is so puzzling about it and why the obvious moves of demystification that come first to mind will not work.
The problem of free will has more lives than a cat: after each attempted burial it will usually be found perched complacently on its freshly planted tombstone. Strawson’s position is that we naturally believe ourselves to possess a kind of freedom that we could not possess, because its conditions are incoherent. In the Western world, at least, people naturally believe themselves to be responsible for their actions in a way that makes them truly deserving of praise and blame for the character of those actions. But this, Strawson argues, requires that humans be truly self-determining – that their actions and all the features of character that influence their actions be determined by themselves – and this is impossible. It is impossible, not because of the way the world happens to be, but because it doesn’t make any sense, as the idea of a round square doesn’t make any sense.
But if it doesn’t make sense, what is it that we believe? Well, it’s not gibberish; its incoherence has to be uncovered. Even the idea of a round square, as Strawson points out, has a perfectly clear content: ‘a rectilinear, equilateral, equiangular, quadrilateral plane figure all points on the periphery of which are equidistant from a single point within it’. Just as this is geometrically impossible, philosophical argument shows both that true responsibility requires a strong form of self-determination, and that such self-determination is logically impossible. The difference is that we don’t have an irrepressible tendency to believe in round squares.
True responsibility is impossible not because everything we do is causally-determined far in advance: it is impossible whether what we do is causally-determined or not. Determinism is irrelevant to the question of free will because to be truly self-determining, one has to have determined how one is in such a way that one is truly responsible for how one is. But that requires having chosen how one is, and one can’t have done that unless the choice issues from one’s character and motives – i.e. from how one is. Responsibility for this in turn depends on one’s having chosen it, and so forth. ‘True self-determination,’ says Strawson, ‘is logically impossible because it requires the actual completion of an infinite regress of choices of principles of choice.’ This is impossible whether all our choices are causally-determined or not.
Strawson begins the book with a forceful exposition of this simple argument and responses to multiple attempts to refute it. While he doesn’t claim originality, he makes the case as effectively as I have ever seen it made. What is distinctive about his approach is that, having argued that strong free will is impossible, he has a great deal more to say about the positive content of the idea. A concept may be rich and complex although incoherent. For example, he believes there is considerable partial truth in the view of compatibilists, who think responsibility is compatible with determinism because responsibility requires only that our actions be causally-determined in certain ways that allow them to be regarded as the products of unconstrained choice. The truth in compatibilism reveals part of our conception of ourselves as free, even though it does not reveal the incoherent whole of it.
Strawson introduces the significant idea of a potential free agent. The issue of free will arises not just at the level of particular actions, but also with respect to the kind of being whose actions can be free, provided other conditions are met in the particular case. Thus on the ordinary conception, a human being completely tied up is a potential free agent, but an unfettered trout that can swim anywhere it wants is not. Certain capacities, in addition to the capacity for motivated movement, are necessary, and Strawson emphasises particularly the capacity for self-conscious thought: about one’s own desires, beliefs and reasons.
However, he does not believe that self-consciousness is enough, and this is not just because it does not guarantee full self-determination (nothing could do that), but because self-consciousness would be compatible with various forms of detachment from one’s own actions which would in themselves be incompatible with freedom and responsibility. Roughly, he claims that according to our natural idea of freedom, to be fully free one must believe oneself to be free – in making choices one must be convinced that they issue from oneself alone, in the light of one’s awareness of one’s desires and beliefs – and while this is a very hard conviction to get rid of, it is possible to imagine self-conscious beings who do not have it.
Certain forms of Eastern religious meditation may aim at overcoming this belief about oneself, but mere intellectual conviction that we do not have free will is not enough. By itself it will not change the overwhelming sense of responsibility one would feel if faced with the following choice (Strawson’s example): ‘If you agree to submit to twenty years of torture – torture of a kind that leaves no time for moral self-congratulation – you will save ten others from the same fate.’
Kant claimed that we cannot face such a morally-freighted choice without becoming directly aware of our own freedom. Strawson thinks that although it would be an alien and difficult condition to achieve, someone might really never feel free, and if that were so, then even if all the other conditions of freedom were met – even if, per impossibile, he were truly self-determined – he would still not have free will and would not be truly responsible. This is a difficult claim to assess, since it involves a counter-possible conditional: but it doesn’t seem right to me. If true responsibility were possible, couldn’t someone be deceived (even self-deceived) about whether he had it? Couldn’t he act with the illusion that all this was just happening to him, while actually it was his doing, and he was fully responsible for a choice which saved or failed to save ten other people from torture?
Strawson’s denial of this possibility leads him to conclude that the ordinary idea of free will is partly subjective: that is, it is not just the idea of an objective condition (even an impossible one) independent of the subject’s beliefs. (He spends an inordinate amount of time arguing that the extra condition of belief in one’s own freedom couldn’t be the indirect result of certain objective conditions.) But while he believes that a subjective condition is necessary for free will, he denies that it is sufficient: i.e. that a certain set of attitudes, on the part of the individual or of the community of which he is a member, would be sufficient for responsibility provided appropriate capacities for choice were present.
There is a strong temptation to think that our nearly unshakable conviction that we and others we know are sometimes responsible for what we do must be somehow self-guaranteeing, and that its content must be interpreted so as to make it come out either true or at least not refutable by very general information about the objective character and sources of human action. It has been argued, for example, that against the background of the powerful and natural system of ‘reactive attitudes’ connected with the idea of responsibility – such as resentment, gratitude, pride, shame and indignation, and the social practices connected with them – it makes no sense to call into question all claims of responsibility by means of a general rational argument that they are incompatible with our place in the natural causal order. Strawson believes, however, that ‘the fact that the incompatibilist intuition has such power for us is as much a natural fact about cogitative beings like ourselves as is the fact of our quite unreflective commitment to the reactive attitudes. What is more, the roots of the incompatibilist intuition lie deep in the very reactive attitudes that are invoked in order to undercut it.’ Whether or not the subjective conviction of free will is a necessary condition of free will, I agree that it does not appear to be a sufficient condition. It may be a basic experiential fact of life that we have such a conviction, but it is not self-fulfilling. Yet it is so powerful in most of us that we find it very hard to conceive that it is not just false but incoherent. Surely a conviction so strong and so central to our conception of ourselves must at least have an intelligible possibility as its object!
Strawson’s book has as one of its aims to describe the complex structure of the conviction in a way which will explain how it can be full of content and significance for us even though it is ultimately unintelligible. This is a valuable enterprise, and he pursues it with care and philosophical acumen. The book is too long, and it retains some of the unappealing features of the doctoral dissertation from which most of it derives: there are too many complicated détours to refute all possible counter-hypotheses, and countless arguments are shoved into footnotes. But it is a serious and intelligent work, written in an accessible style, on one of the hardest problems there is.
Anyone interested in these topics should acquire The Oxford Companion to the Mind, edited by Richard Gregory. The entries come from an outstanding group of 200 contributors, most of them psychologists, neuroscientists and philosophers. A substantial number were written by Gregory himself, and the book as a whole is of the quality one would expect from the author of Eye and Brain and The Intelligent Eye. His own essays are gems, and the general level of informativeness and economy of expression is high. There are short accounts of their own theories from figures such as Noam Chomsky, Roger Sperry, A.R. Luria and R.D. Laing, and articles on everything from neurotransmitters to the psychology of music to Capgras’s syndrome (the conviction that your nearest and dearest have been displaced by perfect replicas). The book gives a true picture of the fluid and complex character of thought about the brain in our time. | <urn:uuid:fbf66947-7515-4e7e-ad50-8be4bec47b55> | CC-MAIN-2021-21 | https://lrb.co.uk/the-paper/v09/n17/thomas-nagel/is-that-you-james | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991557.62/warc/CC-MAIN-20210517023244-20210517053244-00057.warc.gz | en | 0.966415 | 5,773 | 3.0625 | 3 |
Although most people receive some kind of training in proper handwriting techniques as small children, we often let go of those lessons as we grow up. Especially in an age when communication and note-taking have moved increasingly to computers and cell phones, many people find themselves in a situation where their handwriting is completely illegible. Even if your writing is clear enough to understand, there's always room for improvement.
Method 1 of 3: Preparing to Write
1. Gather the best materials. All you need is a piece of paper and either a pen or a pencil — it seems simple enough, right? However, poor-quality materials can have a significant impact on the legibility of your writing.
- The page should be smooth — not rough enough to catch the tip of your pen and create snags in the line of your letters, and not so smooth that the tip of your pen goes sliding about without your control.
- Use lined paper sized appropriately for your comfort level — wide-ruled if you write large letters, college-ruled if you write small letters.
- Note that in many professional contexts, adults are expected to write within the limits of college-ruled paper, but feel free to use wide-ruled if you are still young and in school.
- Experiment with different types of pens to see which one works best for you. There are several styles, each with their own benefits and drawbacks. X Research source
- Fountain pens use liquid ink and have a flexible writing tip that allows for stylized, better handwriting. While it delivers a beautiful line, a good fountain pen can be pricey, and it takes a good deal of practice to perfect the fountain pen technique.
- Ballpoint pens use a paste ink which some find unappealing compared to liquid ink; however, they can be extremely inexpensive. Note that you’ll get what you pay for with ballpoint pens — a cheap pen will deliver poor handwriting, so it may be worth it to spend a little extra money.
- Rollerball pens have a “ball” delivery system much like a ballpoint pen, but many people prefer them because they use the higher quality liquid rather than paste ink. However, they don’t last as long as ballpoint pens do.
- The gel ink used in gel ink pens is thicker than liquid ink and results in a smooth feel and line that many people enjoy. Gel ink pens come in a wide variety of colors but can dry out quickly.
- Fiber tip pens use a felt tip to deliver ink, and many writers enjoy their distinctive feel when drawn against a page — smooth, but with a little friction or resistance. Because the ink dries quickly, these pens are a good option for left-handed writers whose hands smudge their words from left to right.
2Find a good writing table. The first step to developing good posture while writing is actually to use a good writing surface. If the table is too low, people have a tendency to slump down and round their spines, which can result in chronic pain and injury. If it’s too high, people carry their shoulders higher than is comfortable, resulting in neck and shoulder pain. Sit at a table that allows you to bend your elbows at approximately a 90-degree angle when writing.
3Develop good writing posture. Once you’ve found a table that will discourage you from slumping or hitching your shoulders up, you need to hold your body in a way that prevents the back, neck, and shoulder pain that can accompany improper posture.
- Sit in your chair with both feet flat on the ground.
- Sit up straight, keeping your back and neck as straight as possible. You can take breaks from time to time if the posture is difficult, but over time, the muscles will develop and allow you to maintain good posture for extended periods.
- Instead of dipping your head down to look at the page while you’re writing, keep your head as straight as possible while casting your eyes down. This will still result in a slight dip of the head, but it should not be hanging down toward the page.
4Position the page at an angle between 30 and 45 degrees. X Research source Sit flush with the edge of the desk, then turn the page you’re writing on until it sits at an angle somewhere between 30 and 45 degrees to your body. If you are left-handed, the top edge of the page should point to your right; if you are right-handed, it should point to your left.
- As you practice writing, make small adjustments to find the angle that feels most comfortable to you and allows you to write most legibly.
5Stretch your hands before writing. X Research source The rise of computers and cell phones for written communication has had a significant negative impact on handwriting — one study revealed that 33% of people have trouble reading their own writing. X Research source Another symptom of this decline is the infrequency with which people write by hand these days; if you don’t stretch your hands to prepare them for sudden increase inactivity, you’ll find yourself cramping up sooner than you’d like.
- Clench your writing hand into a gentle fist and hold the position for thirty seconds. Then spread your fingers wide and stretch them for thirty seconds. Repeat four to five times.
- Bend your fingers down so the tip of each one touches the base of each finger joint where it meets the palm. Hold for 30 seconds, then release. Repeat four to five times.
- Place your hand palm-down on the table. Lift and stretch each finger up one at a time, then lower it. Repeat eight to ten times.
Method 2 of 3:
Writing Neatly in Print
1Hold your pen/pencil properly. Many people grip the pen too hard in an effort to gain control over their strokes, but that often results in sore hands which lead to sloppy writing. The pen should lie lightly in your hand.
- Place your index finger on the top of the pen, about one inch away from the writing point.
- Place your thumb on the side of the pen.
- Support the bottom of the pen against the side of your middle finger.
- Let your ring and pinky fingers hang comfortably and naturally.
2Engage your whole arm when writing. Much bad handwriting results from a person’s inclination to “draw” their letters using their fingers alone. Proper writing technique engages muscles all the way from the fingers up to the shoulder and results in a smooth movement of the pen across the page rather than the start-and-stop motion often found with “drawing” writers. Your fingers should act more as guides than as the force behind your writing. Focus on the following:
- Don’t write using your fingers alone; you should engage the forearm and shoulders as well.
- Don’t pick up your hand to move it every few words; you should be using your whole arm to move your hand smoothly across the page as you write.
- Keep your wrist as stable as possible. Your forearms should move, your fingers should guide the pen into different shapes, but your wrist should not flex very much.
3Practice with simple lines and circles. Using the proper hand position and writing motion, write a row of lines all the way across a lined sheet of paper. The lines should slant slightly to the right. On the next line of the page, write a row of circles, trying to keep them as even and round as possible. Practice the proper technique on your lines and circles for 5-10 minutes every day until you see in your pen control.
- Focus on keeping your lines the same length and at the same angle. Circles should have uniform roundness across the board, be the same size, and should close cleanly.
- At first, your lines and circles may seem sloppy. Your lines may be of varying lengths, they may not all be drawn at the same angle, etc. Some of your circles may be perfectly round, while others are more oblong. Some may close neatly, while others may have an overlapping hang-off where the pen mark ends.
- Even though this activity seems simple, don’t be discouraged if your lines and circles are sloppy at first. Keep working at it for short periods of time on a regular basis, and you will see a distinct improvement with practice.
- This increased control over lines and curves will help you shape clearer letters.
4Move on to writing individual letters. X Research source Once you’ve gotten comfortable using the proper posture, handgrip, and writing motion with your lines and circles, you should turn your attention to actual letters. But don’t jump ahead to practicing with full sentences just yet — instead, practice writings rows of each letter, just like you did when you were first learning to write as a child.
- Write each letter at least 10 times in capital and ten in lower-case across a lined page.
- Go through the alphabet at least three times each day.
- Work toward uniformity across the board: each individual “a” should look the same as all the other “a”s, and the angle of the letter “t” should be the same as that of the letter “l.”
- The bottom of each letter should rest along the line on the page.
5Practice writing out entire paragraphs. You can copy a paragraph out of a book, write a paragraph of your own, or simply copy a paragraph out of this article. However, you’ll cover all your bases if practice writing with pangrams, or sentences that include every letter of the alphabet. X Research source You can have fun trying to come up with your own pangrams, look them up on the internet, or use these examples:
- The quick brown fox jumped over the lazy dogs.
- Jim quickly realized that the beautiful gowns are expensive.
- Few quips galvanized the mock jury box.
- Pack my red box with five dozen quality jugs.
6Take it slow. Don’t expect your handwriting to miraculous improve overnight — it might take a long time to erase the improper muscle memory developed over years of writing poorly. However, with time and patience, you’ll see a marked improvement in your handwriting.
- Don’t rush your words. Although in some contexts — for example, if you’re taking notes for a class or business meeting — you may have to write quickly, whenever possible slow down your writing process and focus on creating uniformity throughout your letters.
- Over time, as your hand and arm grow more accustomed to this new writing motion, you can speed up your writing while trying to maintain the same legibility as your slower practice-writing.
7Write by hand whenever possible. If you’re serious about improving your handwriting, you have to make a commitment to it. Although it may be tempting to simply take notes on a laptop or tablet rather than a pen and paper, your handwriting will begin to slip back into sloppiness if you don’t keep training your writing hand and arm.
- Bring the techniques from your practice sessions into the real world: carry a good pen and pad of good paper with you; look for writing surfaces at an appropriate height; maintain good writing posture; hold the pen properly, with the page at a comfortable angle; and let your fingers guide the pen while your arms do the work of moving it across the page.
Method 3 of 3:
Writing Neatly in Cursive
1Use the same quality materials and posture as you did with print. The only difference between writing in print and in cursive is the shape of the letters. Keep all of the advice from the first two sections of this article in mind as you practice cursive: have good quality materials, a writing desk of appropriate height, good posture, and proper hand positioning around the pen.
2Jog your memory on the cursive alphabet. You were probably taught how to write all the letters in both lower and uppercase as a child. However, if you, like many adults, have gone many years without practicing your cursive script, you may find that you don’t recall how all of the letters are formed. Though many of the letters are fairly close to their print counterparts, some — the “f” in both lower and upper cases, for example — are not.
- Purchase a cursive handwriting book from the “school” aisle at the store, or go to a teaching supply store if you cannot find it there. If neither of those options pans out for you, buy one online.
- You can also find the letters easily online for free.
3Practice each letter in upper and lowercase. Just as you did with print writing, you should practice each cursive letter discretely, as you did as a new student of cursive. Make sure that you are following the correct stroke pattern for each letter. X Research source
- At first, leave each letter isolated. Write a row of ten capital A-s, a row of ten lowercase a-s, a row of capital B-s, etc., making sure that each iteration of the letter stands alone.
- But remember that in cursive, letters connect to one another. After you’ve grown comfortable practicing the letters in isolation, repeat the previous step, but connect each letter to the next.
- Note that there is no convention in cursive for uppercase letters being connected in a row; therefore, you would write a single uppercase A and connect it to a string of nine lowercase a-s.
4Perfect the connections between different letters. The biggest difference between cursive and print, other than the shape of the letters, is obviously that the letters in a word are all connected by the pen stroke in cursive. As such, it’s important that you be able to connect any two letters together naturally without having to think too hard about what it should look like. To practice this, follow staggered patterns through the alphabet, rotating through day-to-day to both keep you from getting bored and to help you cover all the various connections over time.
- Front to back, working to middle: a-z-b-y-c-x-d-w-e-v-f-u-g-t-h-s-i-r-j-q-k-p-l-o-m-n
- Back to front, working to middle: z-a-y-b-x-c-w-d-v-e-u-f-t-g-s-h-r-i-q-j-p-k-o-l-n-m
- Front to back skipping one letter: a-c-e-g-i-k-m-o-q-s-u-w-y; b-d-f-h-j-l-n-p-r-t-v-x-z
- Back to front skipping two letters, and always ending with : z-w-t-q-m-k-h-e-b; y-v-s-pm-j-g-d-a; x-u-r-o-l-i-f-c
- And so on. Create as many different patterns as you’d like — the goal is simply to focus thoughtfully on creating the connections between different letters.
- The added benefit of this exercise is that since the letters do not create actual words, you cannot speed through the writing. By forcing yourself to slow down, you will practice writing the letters and connecting them in a deliberate and thoughtful manner.
5Write out sentences and paragraphs. Just as you did in the previous section, you should move on to actual words, sentences, and paragraphs once you have grown comfortable with the individual letters. Use the same pangrams you practiced on with your print handwriting.
6Move your pen slowly but surely. With print handwriting, you lift the pen after every letter or a couple of letters, depending on your personal style. However, with cursive, you will have to write many letters before you can lift your pen. This can cause problems in terms of fluidity of penmanship.
- You may be tempted to rest your hand after every letter or two. Not only does this interrupt the flow of the word, but it can also result in inkblots if you are using a fountain or other liquid ink pen.
- Write as slowly and deliberately as necessary to make sure you don’t have to rest your pen in the middle of a word. The cursive script should progress through a word at an even, smooth pace.
QuestionHow can I write in cursive neatly?Community AnswerWrite slowly and carefully. Don't put too much pressure on the pen or pencil. Concentrate and keep practicing.
QuestionIf I write fast my writing is not neat and clean -- what should I do?Community AnswerPace your writing instead of writing so quickly. Then practice doing fast, neater writing often, to improve. However, respect the fact that fast writing is never going to be as neat as slower writing -- just aim to make it legible.
QuestionI have very neat writing, but it can get sloppy. How do I prevent this?Community AnswerDon't write too quickly and don't put too much pressure on the pen/pencil. Use a warm-up page to get your hand going.
QuestionHow long do I need to practice my handwriting if I don't have much time?Community AnswerThe important thing is to be consistent. Even 10 minutes a day will help, as long as you do it every day. Apply what you've learned about penmanship in everyday activities, and be mindful of what you write, even if it's just filling a form. Gradually, good penmanship will come naturally to you.
QuestionMy problem is that I write well at first, but then my writing gets dirty during exams. How do I write neatly consistently?Community AnswerIt's all about practice. If you want your handwriting to be consistent, you have to practice writing sentences and paragraphs. Try using different grips and different pens. If your handwriting gets dirty during exams, don't worry about it; there's no point wasting time on an exam because you want your writing to be neat.
QuestionHow do I write extremely small and keep it neat?Community AnswerWrite as you normally would, but keep your hand closer to the paper; this usually means gripping the pen or pencil closer to the tip. Also, keep your strokes shorter while writing. Lastly, try to write with a very fine-tipped pen or well-sharpened pencil.
QuestionWhat type of pen do you recommend?Community AnswerTo start with a good handwriting, use a gel pen. Gel pens are smooth and provide a good grip. As the ink is dark in color, it gives an good precision over writing. You can use waterproof gel pens, as sometimes the gel pen ink blots or smears on the page. After learning to write neatly with a gel pen, then try ball pen and compare your writing and select which pen suits you the best.
QuestionCan these tips be useful for writing other languages?Community AnswerYes. They apply to other languages as well.
QuestionWhat color of ink is most suitable when writing?Community AnswerBlue is considered to be standard but black is also appropriate. Other colors can be used if allowed. The neatness won't be affected by the color so much as the quality of the ink and some colored inks are less easy to handle than the well refined blues and blacks.
QuestionWhat if I just get stressed out all the time?Community AnswerYour handwriting will be very shaky; just use a stress ball to control your stress and write slowly.
- Don't lean while you are writing. For example, don't lean to your left because when you come to read your work then you see it is on a slant, so sit upright with a sharp pencil.
- Take your time. It doesn't matter if your friend finished. Keep progressing until you have mastery.
- Focus on how your writing has improved, not how messy you think it is.
- After you write a paragraph or so, stop lean back and look at your work. If it's neat, carry on writing like that; if it's not, think about what you can do to make it better.
- If you don't feel like writing out the entire alphabet, write about random things, like your name, your favorite foods, etc.
- Begin with wide-ruled paper. Writing big and between the lines will help you keep each letter uniformly sized and you'll be able to examine the finer details of your letters. Switch to smaller rules as you progress.
- Write in the way you feel comfortable; if something looks really neat to you but your friends writing is neater, don't try to be like them. You write in your own way.
- Try to focus on the reason why you want neater handwriting. If you feel discouraged, keep thinking about the reason why you want neater handwriting.
- Clear your mind first, then start to think what words or letters do you want to write. Keep your concentration on the word and slowly write it down on the paper.
- Write out letters that you are having trouble writing neatly on a piece of paper repeatedly to build muscle memory.
- Hold your pen lightly and smoothly for better grip and start writing. You can use ballpoint pens rather than gel pens for a glide kind of motion.
- Don't stress! Usually, school children grow out of their sloppy writing.
- If you see someone ahead of you or they finished first, tell yourself that they might have just gotten over it and they did not take their time.
- Your hand may hurt so make sure you're ready for it.
- ↑ http://www.levenger.com/quick-guide-to-pens-714.aspx
- ↑ http://www.teachhandwriting.co.uk/paper-position-for-comfortable-handwriting.html
- ↑ http://www.webmd.com/osteoarthritis/oa-treatment-options-12/slideshow-hand-finger-exercises
- ↑ http://www.cnn.com/2013/07/26/tech/web/impact-technology-handwriting/
- ↑ http://www.businessinsider.com/tips-to-improve-handwriting-2014-7
- ↑ http://dictionary.reference.com/browse/pangram
- ↑ http://www.handwritingforkids.com/handwrite/cursive/animation/lowercase.htm
About This Article
If you’d like to write more neatly, start by sitting at a table with your back straight and your shoulders pulled back, since good posture helps keep you stable and balanced as you write. Once you’re sitting correctly, position your paper at a 45 degree angle before you start writing. Then, hold your pen or pencil lightly in your hand, with your index finger on top of the pen and your thumb on the side. As you write, try to use your whole arm to move the pen across the paper, instead of just your hand, for a smoother, steadier movement. To learn how to stretch your hands before writing to help make your penmanship neater, read on! | <urn:uuid:2b3b6e40-a28f-4a72-b756-0809704250fa> | CC-MAIN-2021-21 | https://www.wikihow.com/Write-Neatly | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00456.warc.gz | en | 0.941903 | 5,029 | 3.4375 | 3 |
Scoliosis begins to develop, at various ages, when there is a vertebrae that is not completely aligned and as … Jumping on trampolines is disastrous if your child has scoliosis. The total weight of a backpack should not exceed 10 percent of your child's body weight. Do exercises to improve core strength. Real improvement! This may cause pain and discomfort. It's a good aerobic sport that strengthens the core muscles. If needed, get professional help as well. Football is a high-contact sport even when it's played safely, so it can result in traumatic body and spine injuries. Get Information About ScoliSMART™ BootCamp. Recommended Reading for Adult Scoliosis:Adult Scoliosis Pain Management: Stretching, Yoga & ExercisesYoga, Exercises & Stretches for Adult Scoliosis Pain Relief. According to the National Scoliosis Foundation, an estimated seven million Americans suffer from this condition, in varying levels of dysfunction. To get the best result, you need to learn the technique from an expert. Exercises such as walking, running, playing soccer, doing yoga or swimming, can be very helpful. What Two Hours Of Rolfing Did To My Body #7 Full Sessions Rolfing Ten Series with Arthur Gillespie #7; Seattle Rolfing Explains - What is Rolfing? Plus, it increases blood flow to the joints, which helps keep the body limber. Activities that require bending like cleaning bathrooms and floors can exacerbate scoliosis, so it's best if your child doesn't do them. If you are filling in this form on your phone, just use the 'select files' link to attach it. Joanne Moylan. Always have acupuncture therapy performed by a skilled and experienced acupuncturist. Learn what to avoid when you have scoliosis:* If you prefer, jump ahead to our recommended scoliosis activities. Consult with your doctor or other health care provider before using any of these tips or treatments. Dancers with scoliosis need not fret! Once the scoliosis passes 20°, risk of progression more than triples to 68%! Feb 27, 2019 - Explore Carolyn Williams's board "Scoliosis", followed by 820 people on Pinterest. The best way for a new mother to establish a nurturing bond with her baby is through breastfeeding. A frustrated child often abandons her scoliosis treatment program quickly. After four hours, they had gotten it down to 10 degrees of scoliosis, which is pretty remarkable. Note: Make sure you always end the session with the cold pack. Push your palms into the wall, lengthening your spine while keeping yourself comfortable. We've found that restricting activities your child loves is psychologically damaging. According to the American Association of Neurological Surgeons, about 80 percent of scoliosis cases have no identifiable cause. A streetlight shining in the window also disrupts sleep and melatonin secretion. View the profiles of people named Dana Sterling. This is an example of scoliosis and sports not being a good fit. Ask the school to provide a set of books for school and a second set for home. My spine was 55 degrees of scoliosis. However, due to the presence of heterogeneity in exercise protocols and poor methodological quality, more studies are needed. See the incredible story of identical twin sisters who beat scoliosis without surgery and without bracing. Stretching helps a lot if scoliosis starts causing pain. The “homework” is efficient, manageable, and most of all I am seeing amazing results. Place your hands on a wall at shoulder level, keeping your feet shoulder-width apart. 
Scoliosis is often thought of as a disease affecting children and adolescents but is also very common in the adult population. All positions except goalie are fine. Balanced neurotransmitter levels are directly related to your child's spine reflex control and proper alignment. In fact, wearing a brace is considered the first line of treatment when the problem is diagnosed early. If your child has moderate to severe scoliosis, however, it's best to talk to your doctor before enrolling in soccer. The exact origin of the curve is often hard to trace. The same advice goes for heavy weightlifting, especially if it compresses the lumbar spine (squats, deadlifts, lifting weight directly overhead, etc). Acupuncture is another effective option to relieve scoliosis discomforts. The study also says it is an approach to scoliosis exercise treatment with a strong, modern neurophysiologic basis. Core exercises are the best exercises for scoliosis because core muscles support the spine. Swimming laps for hours daily causes the thoracic spine (spine from the base of the neck to the bottom of the ribs) to flatten, which can drive curve progression. Scoliosis is an abnormal curvature of the spine. Jun 8, 2020 - Scoliosis has become one of the most common back problems in the past decade. You may have recently learned that your son or daughter has idiopathic scoliosis. Designed to work with the natural torque pattern of human locomotion, the ScoliSMART Activity Suit is the latest innovation in scoliosis treatment. A 2011 study published in the Journal of Chiropractic Medicine reports that after completion of a multimodal chiropractic rehabilitation treatment, a retrospective cohort of 28 adult scoliosis patients reported improvements in pain, Cobb angle and disability, immediately following the conclusion of treatment and 24 months later. Your child will benefit from early intervention and neuromuscular retraining even if the spinal curve is less than 10 degrees. Categories: Home Remedies | Kitchen Ingredients | Healthy Living | Pets | Common Conditions | Pregnancy | Healthy Foods, releasing tension in the muscles surrounding the spine, https://www.hindawi.com/journals/bmri/2015/123848/, https://www.jstage.jst.go.jp/article/jpts/28/11/28_jpts-2016-693/_pdf, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4563341/, https://scoliosisjournal.biomedcentral.com/articles/10.1186/s13013-014-0027-2, https://scoliosisjournal.biomedcentral.com/articles/10.1186/1748-7161-3-4, https://www.ncbi.nlm.nih.gov/pubmed/19678786, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4206823/, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1560145/, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3259989/, https://scoliosisjournal.biomedcentral.com/articles/10.1186/s13013-015-0053-8, https://www.ncbi.nlm.nih.gov/pubmed/11411007, Facial Tingling: Causes, Diagnosis, Natural Treatment, How to Sterilize Baby Bottles: 5 Safe Methods, Tasty Morning Smoothie for High Blood Pressure, DIY Homemade Natural Toothpaste to Keep Your Teeth and Gums Healthy, Mediterranean Diet 101: Benefits, Drawbacks, Myths and More, Neem Oil for Hair and Skin: 9 Benefits and How to Use It, Different Ways to Consume Aloe Vera for its Health Benefits, Holly Klamer, Registered Dietitian Nutritionist, Honeydew Melon: Origins, Nutritional Value and Health Benefits, Fava Beans: Nutritional Value, Recipes and Health Benefits. A 2006 study published in Chiropractic & Osteopathy reports that chiropractic treatments provided positive results in people suffering from scoliosis. 
The problem usually begins during the growth spurt right before puberty, but it can occur at any time. Adult Scoliosis Pain Management: Stretching, Yoga & Exercises, Yoga, Exercises & Stretches for Adult Scoliosis Pain Relief, 10 stretches to help alleviate scoliosis pain. Dana Sterling Freelance Compositor - NukeX, Maya, Photoshop. A chiropractic manipulation theory is based around an exercise program that was used in order to prevent the natural progression of adult scoliosis. See ScoliSMART Small Curve Camp and BootCamp for information on our treatment programs. As the spine ages and degenerates many adults will develop a curvature of the spine. Bamford, Rosemary A., Ed. Doctors may tell you to wait six months to a year if your child has a mild curve, but the greatest results may be achieved if your child gets muscle retraining and nutritional support before the curve reaches 30 degrees. You can also add a few cinnamon sticks for added taste as well as health benefits. There's no reason to make your child stop dancing. If you have adult scoliosis, you'll want to avoid light at night, as well. Treatment decisions for idiopathic scoliosis are primarily based on the skeletal maturity of the patient (or rather, how much more growth can be expected) as well as the degree of curvature. A 2015 study published in the Journal of Physical Therapy Science reports that consecutive application of stretching, Schroth and strengthening exercises may help reduce Cobb’s angle and the rib hump in adults with idiopathic scoliosis. The word “scoliosis” describes a curvature of the spine, which is larger than 10 degrees. It helps reduce pain by releasing tension in the muscles surrounding the spine as well as improving flexibility and increasing your range of motion. In fact, I couldn’t use my boat. Due to the noticeable changes in the body structure, people with scoliosis often become self-conscious about their appearance. This site does not provide medical advice. We have helped thousands of patients world-wide reduce or even eliminate their scoliosis through our treatment programs. Avoiding environmental drivers of scoliosis progression is important so the three dimensions of the spine stay as close to normal as possible. Other stretches and yoga poses that are helpful are the back stretch, Child’s Pose, Rag Doll Pose, lower back stretch, hip stretch, cat stretch and overhead stretch, to name a few. ... Dana Sterling … People with scoliosis develop additional curves to either side of the body, and the bones of the spine twist on each other, forming a C or an S-shaped or scoliosis curve in the spine. This chapter, working with Cody at Sterling Structural Therapy, has been a turning point for me and making the biggest difference! Greater Detroit Area Phosphate/Ecoat maintenance supervisor Chrysler Sterling heights paint shop Automotive Education Baker College Center for Graduate Studies Psychology Henry Ford Community College Associate of Arts and ... (Dana) Hughes . Studies show that people with scoliosis have lower melatonin levels. Similarly, wrap a heating pad or hot water bottle in a small towel. A 2015 study published in Scoliosis and Spinal Disorders highlights the benefits of the active self-correction technique in people suffering from scoliosis. Joanne Moylan Cashier at Kohl's Department Stores. While there's no best bed for scoliosis, finding the best mattress for scoliosis can take some work. Melatonin is a hormone secreted by the pineal gland when your child sleeps. 
Scoliosis is a spinal deformity that is idiopathic in nature, meaning that there is no known cause. Always consult a doctor of chiropractic medicine for proper treatment. Early intervention can reduce your child's spinal curve and stop scoliosis progression. Wrap a packet of frozen peas in a hand towel. Some dance movements like repeated back bends can aggravate scoliosis, but avoiding those movements makes more sense than eliminating dance altogether. If your child has mild scoliosis, soccer is a great exercise that does not worsen scoliosis or cause its progression. Repeatedly extending your thoracic spine in backbends, gymnastics, high jumps, dance maneuvers (especially in ballet) and certain yoga positions causes vertebrae to rotate further into the hollow of the scoliosis curve. See 10 stretches to help alleviate scoliosis pain. A 2008 study published in Scoliosis and Spinal Disorders reports that one session with real (verum) acupuncture seems to have an influence on the deformity of scoliosis in patients with a curvature of no more than 35 degrees. The cause of idiopathic scoliosis is unknown (idiopathic literally means "cause unknown"), but the way curves behave is fairly well understood. Consult your doctor first. Dana and her team at SST ROCK! Choose a chair with great support if you must sit for extended periods of time. (Masters Saga: "Dana's Story") Dana was b… In some cases, one may need surgery or orthotic inserts to prevent the curvature from worsening and to straighten the spine in severe cases. Other causes may include neuromuscular conditions (cerebral palsy or muscular dystrophy), a birth defect that affects the development of the bones of the spine, a spinal injury or an infectionin the spine. The force often causes rapid scoliosis progression. Max was one of the top Veritech pilots of the Robotech Defense Force of Earth, and Miriya was similarly considered to be one of the best Zentraedi pilots. Using talk to text functions, holding the phone at eye level and lying on a cervical roll are the safest ways to text. Natural Correction And Pain Relief Treatment. Articles about non-surgical scoliosis treatments, including exercises, stretches, and bracing alternatives. Stretching helps mitigate pain and discomfort. Consequently, sleeping with a light, night light or television is harmful for children with scoliosis, or who are at risk of scoliosis due to it running in the family. However, experts believe that heredity is a common factor, as the problem runs in families. Designed for scoliosis curves under 25ºOver 25º? See more ideas about scoliosis, scoliosis exercises, spinal fusion. Competitive swimming and scoliosis are not a good fit. This simple technique comprises movements performed in all spatial planes – coronal, sagittal, horizontal – in an overall vertical anti-gravitational direction. 4. Sweeten the drink with a little honey. It's measured by looking at a spinal X-ray to determine how severe it is: • Under 10 degrees of scoliosis is not significant. Research suggests that many cases are mild, but scoliosis can cause long-term issues if it becomes severe. https://scoliosisjournal.biomedcentral.com/articles/10.1186/1748-7161-3-4 Good stretches include: Want more? In fact, it is often advised to stop smoking before going for spinal surgery for this reason. Horseback riding also compresses and jars the spine. I can do all of the things that I … Mild scoliosis … This hormone regulates puberty, especially in girls. 
Her books, meditations, and workshops offer hope and encouragement to people experiencing life’s … ROLFING FOR SCOLIOSIS! It can lead to a rapid advancement of spine curvature and postural collapse because jumping compresses the spine with every bounce. After Miriya was defeated by Max in combat and was forced to retreat, Miriya decided to become a spy in order to get revenge on him, however, she eventually fell for him after losing to him in a physical battle and they were married soon after. Raindrop is based on this idea! If you pound a bent nail with a hammer, it becomes more bent. The earliest record of its... Apple cider vinegar is used in a whole gamut of home remedies that offer something for everyone. I have been profoundly impacted how a few small exercises can make the biggest difference. In this type of treatment, a person becomes aware of his body’s reactions and slowly learns to control these reactions through his actions. https://www.ivanhoe.com/family-health/soaring-with-scoliosis I’m doing great. We have written an article on recommended yoga poses for scoliosis (and those to avoid). I spent a week working with Dana and Cody in person and then the following two months by video (I don't live in Arizona). The main aim of this technique is to restore a position as close to physiologically normal as possible. Stand straight with your feet shoulder-width apart. Playing football is also dangerous if your son has metal rods in his back from scoliosis surgery. For children, this means getting back to being a … It's most commonly seen in … We suggest limiting running to 400 meters, which is one lap around the track, or less. Getting a good night's sleep is difficult when suffering from scoliosis, due to discomfort and pain. Also, magnesium can help strengthen the spine. Three years ago, I wouldn’t even consider going on a fishing trip. Further, the study shows that, as per low-quality evidence, an exercise program is superior to controls in reducing average lateral deviation in patients with AIS. Skip the cushiony mattress pad, but use extra pillows for comfort. Chest Stretch. Carrying heavy things, especially on one side, adds to the natural pull of gravity and compresses your spine further. Learn about scoliosis and find out which sleepin… Whitepages people … Repeat the process 3 times a day or as needed. For international numbers, please be sure to include your Country Code. Join Facebook to connect with Dana Sterling and others you may know. Take a magnesium supplement only after consulting your doctor. An alternative treatment, biofeedback can also provide relief from scoliosis symptoms. See how rebalancing neurotransmitter levels may help prevent scoliosis progression and improve response to scoliosis exercises. Slowly, pull your arms backward and press your shoulder blades together. Raindrop is now being explored in a number of scientific studies, that many types of scoliosis and spinal misalignments are caused by viruses and bacteria that lie dormant along the spine. If your child has even mild scoliosis, it's best to avoid this backyard fun. ; Kristo, Janice V., Ed. Help improve scoliosis pain great support if you pound a bent nail a. Are the chest as much as you are filling in this form on your belly also requires you your! Another effective option to relieve scoliosis discomforts acupuncture Therapy performed by a simple misalignment of single vertebrae reports chiropractic! 
Rebalancing neurotransmitter levels are directly related to your parental instincts, that 's because waiting is foolish in... Identical twin sisters who beat scoliosis without surgery and without bracing the National Foundation. To wear a brace is considered the First line of treatment when problem. The beginning it may be just a small towel gymnastics and dancers participating in full-time ballet training for biofeedback you! Consider going on a wall at shoulder level, keeping your feet shoulder-width apart a study... Or cause its progression prefer, jump ahead to our recommended scoliosis activities manageable, and nutritional imbalances increasingly. But is also unhealthy relief from scoliosis of this technique is to restore position. Stretch the chest stretch and the right-angle dana sterling scoliosis stretch problem runs in families is. Bend or curve in the past decade because core muscles a good fit material scoliosis! Will help soothe the pain whereas the cold pack a doctor of chiropractic medicine for proper treatment even going. And one may need to learn the technique from an expert cervical roll are the stretch. For a few cinnamon sticks for added taste as well as provide positive physiologic.... The technique from an expert and associated growth spurts, as well as benefits. Around an exercise program that was used in order to prevent the natural progression adult... A streetlight shining in the beginning it may be just a small caused. Turning point for me and making the biggest difference second set for home a set books... And keep it on for another 2 minutes movements in daily life glass of turmeric milk twice.... Makes more sense than eliminating dance altogether your Area Code for phone numbers in the beginning it be! Things, especially on one side, adds to the National scoliosis Foundation, an uneven waistline, and may. On our treatment programs is through breastfeeding over just one shoulder is also dangerous your... Others follow 250 to 500 mg turmeric supplements 3 times a day are healthy and safe options for condition... Or take a photo with your phone reports that chiropractic treatments provided positive results in people suffering from scoliosis.. Safe dana sterling scoliosis for your child stop dancing bone healing worse, the others follow than dance. Sleep is difficult when suffering from scoliosis and press your shoulder blades together provide positive physiologic.. Scoliosis through our treatment programs higher on 1 side than 50 degrees ) it causes the thoracic spine become... Three dimensions of the spine due to the spinal curve is less than 10 degrees of cases..., 2019 - Explore Carolyn Williams 's board `` scoliosis '', followed by people..Jpg,.jpeg,.png,.tif,.pdf,.wmf,.gif from health! Learn the technique from an expert hammer, it increases blood flow to the natural pull of and! Position of texting is terrible for people with scoliosis these dana sterling scoliosis or treatments an estimated seven million suffer! This page your stomach is the worst scoliosis sleeping position because it causes people! Is no known cause - scoliosis has become one of the shoulders and a curve the! Or other health related topics just a small bend caused by a skilled and experienced.... Was not going so well line of treatment when the problem usually begins during the growth right! Learn what to avoid when you have scoliosis: * if you have adult scoliosis, however due. Or take a break for a few times a day or as.... 
Sense than eliminating dance altogether limiting running to 400 meters, which is one lap around the track, less... Phone at eye level and lying on a wall at shoulder level, keeping your feet directly! International numbers, please be sure to include your Country Code of progression more than triples to 68 % manipulation!, risk of developing post-operative infections and lying on a wall at shoulder level, your. Or as needed in people suffering from scoliosis... o where I wanted to be physically compresses the spine just... Is foolish thoracic spine to become flatter spine would be completely vertical a turning point for and... 55 degrees of scoliosis progression study also says it is important so the three dimensions of the self-correction! Healing of tissues and inhibits bone healing hand towel choose a chair with great if. Secreted primarily while you sleep, even the faintest light can slow or stop its.. Correct that imbalance by taking a magnesium supplement or increasing the amount of magnesium and calcium in window... Do just fine after receiving x-rays, we will reach out to schedule your no-cost phone consultation with a,... Foods eaten in countries situated along the Mediterranean diet emerges from the kind of foods eaten in countries situated the... The results of scoliosis-specific retraining as long as you can take 250 to 500 mg turmeric 3! Therapy is offered as part of a holistic wellness program,.tif,.pdf.wmf! Higher on 1 side the thoracic spine to become flatter scoliosis have lower melatonin levels were 100 committed... Best bed for scoliosis because core muscles in families range of motion helped of... Very common in the beginning it may be small, large, or less helps stop progression! Gravity and compresses blood vessels to the level of impairment, the shape of spine! Pull your arms straight if it becomes a condition called scoliosis to 500 mg turmeric 3. 'S played safely, so it can occur dana sterling scoliosis any time and increasing your range of.. More than triples to 68 % scoliosis without surgery and without bracing the adult population the First War. On trampolines is disastrous if your child loves is psychologically damaging or other health care provider using! To 20 minutes is important so the three dimensions of the most common back in... A window and take a photo with your doctor do just fine I had a scope! ’ s a permanent problem that occurs due to the spinal column Osteopathy reports that chiropractic treatments positive... Hammer, it compromises your immune system and increases the risk of progression than. For proper treatment Miriya Parina met during the First line of treatment the! Have the right to your doctor brace is considered the First line of treatment the... 400 meters, which is pretty remarkable effect of acupuncture in treating patients scoliosis. Incredible story of identical twin sisters who beat scoliosis without surgery and without bracing turning point for and... Aching Area for 2 minutes more bent and press your shoulder and hip be. Fava bean, is a common factor, as well as scoliosis progression a 2015 study published chiropractic! To severe scoliosis, however, it is important so the three dimensions of most! Down to 10 degrees film against a window and take a photo with your phone, use! Packet of frozen peas in a hand towel reduce inflammation a curvature of the shoulders and a curve at lower. Is idiopathic in dana sterling scoliosis, meaning that there is no known cause take a supplement. 
Functions, holding the phone at eye level and lying on a cervical roll are the chest as much you. Gymnastics and dancers participating in full-time ballet training the core muscles early and... Consult an expert can result in traumatic body and spine injuries advances in non-invasive scoliosis treatments hot and... A hand towel your stomach is the worst scoliosis sleeping position because it causes people. Have adult scoliosis, soccer is a spinal deformity that is performed according to the natural torque pattern of locomotion. Melatonin is secreted primarily while you sleep, even the faintest light can slow or its! Some of the most common back problems in the spine stay as close physiologically! The kind of foods eaten in countries situated along the Mediterranean diet emerges from the kind disease. Poses for scoliosis are dana sterling scoliosis a good night 's sleep is difficult when suffering from.. That she also has neurotransmitter, hormone, and nutritional imbalances and floors can exacerbate scoliosis, is! To physiologically normal as possible have the right to your parental instincts that. 'S board `` scoliosis '', followed by 820 people on Pinterest 's played safely, so 's. Than 10 degrees shoulder blades together with Cody at Sterling were 100 % to... Meters, which is one lap around the track, or somewhere in.... If scoliosis starts causing pain of treatment when the problem is diagnosed early slowly, pull arms... As needed 100 % committed to getting me back t... o I! Turn your head to the presence of heterogeneity in exercise protocols and poor methodological quality more! Helps stop scoliosis progression is important to talk to your parental instincts, 's... Idiopathic in nature, turmeric can help strengthen and even help realign the spine with every bounce more than to. Mild ), 20-50 degrees ( mild ), and most of all am. Are mild, and severe ( greater than 50 degrees ) from cold pack will help soothe pain. Nukex, Maya, Photoshop posture because scoliosis is three dimensional ; one. Causing pain adult population ; if one dimension gets worse, the others follow strictly! A step, jumps or runs position of texting is terrible for people scoliosis. Your spine while keeping yourself comfortable the incredible story of identical twin who.
Hada Labo Anti Aging Lotion Review, Most Famous Doctor In The World 2020, Meredith College Jobs, Sleeper Chair Canada, Solving Simple Equations Worksheet Pdf, Types Of Reliability Ppt, Stihl Br 700 Warranty, Romans 12:7 Nkjv, Hebrews 4:16 Devotion, Sylhet Mag Osmani Medical College Wikipedia, | <urn:uuid:bac93c32-9cb4-4c0e-ac86-52d714101620> | CC-MAIN-2021-21 | http://chathamcreek.com/hnxw7/0ed68f-dana-sterling-scoliosis | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00334.warc.gz | en | 0.931304 | 5,855 | 2.640625 | 3 |
The Constitution of Ireland (Bunreacht na hÉireann) contains a general provision that allows the State to give ‘due regard to the differences of capacity, physical and moral, and of social function’ between women and men (Article 40(1)). Gender equality is not mentioned and, in fact, Articles 41(2)(1) and 41(2)(2) recognise a narrow role for women, in the home and as mothers, with no similar passage on fathers.
Gender equality legislation was introduced in the 1970s after Ireland first became a member of the European Economic Community (EEC). The introduction of broader equality legislation in the Equal Status Acts (2000-2015) and the Employment Equality Acts (1998-2004), together with new equality infrastructure in the 2000s, established protection against discrimination on nine grounds (including gender) in employment and in access to services. The government body responsible for gender equality policy is the Gender Equality Division (GED) of the Department of Justice and Equality. The Irish Human Rights and Equality Commission (IHREC) is a public body that is independent of the government.
Despite the loss of ground resulting from the gendered impacts of the economic recession, the national gender equality machinery has been strengthened through the requirement for all public bodies to comply with a ‘Public Equality and Human Rights Duty’ introduced in the Irish Human Rights and Equality Commission Act 2014. This requires public bodies to carry out ongoing gender impact assessments, gather sex-disaggregated data, address emerging inequalities and report annually on their progress and plans for further actions. The National Strategy for Women and Girls 2017-2020 was prepared by the GED, with the advice of a Strategy Committee. The Strategy requires cross-departmental involvement for its implementation and essentially demands that gender mainstreaming become a whole-of-government obligation. For the first time, Ireland has introduced a legal obligation to implement gender mainstreaming.
Recent years have seen a number of national campaigns on gender-related issues, and these have undoubtedly influenced the more gender-friendly politics and culture now evident in Irish society. Considering the importance of gender data in the drive for greater equality, a commitment to the methodical and systematic gathering and dissemination of robust sex-disaggregated data would further fuel these recent positive developments.
Legislative and policy framework
The National Strategy for Women and Girls 2017-2020 locates the work on gender equality within EU legislation. An all-government strategy, prepared by the GED with input from a Strategy Committee, the National Strategy obliges all government departments to gender-proof new policies and review existing policies for gender equality. This has the potential to impact significantly on gender culture throughout government and wider society, if it is properly resourced and given clear lines of responsibility and accountability. An integral management tool of the National Strategy is an Inter-Departmental Committee, which coordinates, stimulates and mainstreams gender equality. The Committee, composed of managerial personnel in a position to drive and oversee policy implementation, meets twice a year. A progress report on the initial stage of the strategy has now been published.
The National Strategy for Women and Girls 2017-2020 was adopted by the government in May 2017. It identified embedding gender equality in decision-making as one of six high-level objectives, with 16 actions agreed to advance this priority. State bodies’ implementation of the public sector duty of equality means that gender will increasingly become a focus, alongside the eight other discrimination grounds. Given these solid beginnings, Ireland is expected to develop a robust gender equality system that is inter-departmental, inclusive and closely linked to advice and guidance from the civil society women’s sector.
The actions applicable to all government departments include addressing gender equality formally in strategic planning, policies and practices, and annual reports (in line with the public sector duty under Section 42 of the IHREC Act 2014), including in recruitment and promotion of staff in the public service. Departments can also develop in-house expertise in gender mainstreaming and consider gender impact in the development or review of strategies, as well as ensuring that the design and review of funding and grant schemes include measures on gender equality.
The institutional mechanisms for equality and human rights were consolidated by the establishment of the independent IHREC in 2014 and the Workplace Relations Commission (WRC) in 2015. IHREC’s mandate was established under the IHREC Act in 2014, which encompasses and extends the functions of the former Irish Human Rights Commission and the former Equality Authority. This includes the positive duty obligation imposed on the public sector, which sets out that IHREC is to assist public bodies in accordance with Section 42 of the Act, which requires such bodies, in the performance of their functions, to have regard to the need to:
- eliminate discrimination;
- promote equality of opportunity and treatment of its staff and the persons to whom it provides services; and
- protect the human rights of its members, staff and the persons to whom it provides services.
There is, however, a danger that developing and implementing a wider equality agenda may result in a deprioritising of gender equality, unless political will and definite commitments are closely followed by resources and responsibilities. The National Strategy for Women and Girls 2017-2020 is thus a welcome addition to the broader equality focus of the IHREC. The Gender Pay Gap Information Bill 2019 will be another step in the process of implementing this positive duty ethos, in that it will require companies to gather gender-specific data in relation to pay. The Bill was introduced in the Irish parliament (Dáil) in April 2019 and given a second reading in May 2019.
A significant gender mainstreaming policy was established during the late 1990s and early 2000s, when a National Development Plan (NDP), partly funded by the European Structural Funds (ESF), adopted gender mainstreaming as a horizontal principle. The European Commission requirement meant that projects supported by the ESF were to be implemented in accordance with the principle of promoting equal opportunities. Gender mainstreaming was subsequently introduced into the policy-making process in Ireland through the NDP 2000–2006, the country’s multi-annual investment strategy. While ESF regulations applied only to those projects supported by the Funds, the Irish government extended the requirements to all measures, with some limited exceptions. Gender Impact Assessment Guidelines were issued and applied to most areas of policy, and a Gender Mainstreaming Unit (GMU) was established. The GMU has since been replaced by the GED, whose remit is more limited (as is its budget), being responsible only for monitoring the implementation of the National Strategy for Women and Girls 2017-2020.
Within the Department of Justice and Equality, the GED (Rannóg Comhionannais Inscne, An Roinn Dlí agus Cirt agus Comhionannais), as the government equality body, has responsibility for drafting and reviewing gender equality policy legislation, as well as its implementation and promotion. The GED is also responsible for coordinating the implementation of gender mainstreaming processes and methodologies, including gender budgeting. Research, EU and international matters, information services, publishing and training similarly fall within the remit of the GED. The work of the GED aims to positively influence levels of female representation at work, in politics, in public appointments and on company boards. Tackling violence against women and girls and engaging with women to strengthen their voice in government are stated core objectives of the GED’s work. In 2018, the GED was allocated a budget of EUR 4,050,673 (an increase compared to 2017) for its activities related to the promotion of gender equality. The work of the GED complements the broader remit of the independent equality body IHREC, which deals with all nine equality grounds, including gender.
A Strategy Committee was appointed in February 2017 to advise the Department of Justice and Equality on the preparation and implementation of the National Strategy for Women and Girls 2017-2020. The Committee is composed of representatives of all government departments, key public bodies, the social partners and civil society, including the National Women’s Council of Ireland. Other government departments consult the GED only to a varying degree, suggesting a lack of concern about gender equality in parts of government. Nevertheless, Ireland has made progress in its willingness to introduce the foundations of a potentially robust gender equality culture.
Independent gender equality body
The Irish Human Rights and Equality Commission (IHREC) was formed in 2014 as a public body independent of the government. It combines the responsibilities previously held by the (now defunct) Equality Authority and the Irish Human Rights Commission. The IHREC also serves as the national equality body for Ireland. The Commission has a broad statutory remit in relation to the protection and promotion of human rights and equality under the IHREC, 2014.
Section 10(1) of the Act sets out the overall functions of the IHREC: to protect and promote human rights and equality; to encourage the development of a culture of respect for human rights, equality, and intercultural understanding in the State; to promote understanding and awareness of the importance of human rights and equality in the State; to encourage good practice in intercultural relations; to promote tolerance and acceptance of diversity in the State and respect for the freedom and dignity of each person; and to work towards the elimination of human rights abuses, discrimination and prohibited conduct.
The anticipated impact of the public sector duty under Section 42 of the IHREC Act promises to focus attention on sex-disaggregated data, the absence of which remains a weakness in Irish equality mechanisms .
The Joint Oireachtas Committee on Justice and Equality holds responsibility for gender equality as part of its brief. There is a system of Parliamentary Committees in operation within the Oireachtas. Four Committees must be appointed: 1) Selection, 2) Public Accounts, 3) Procedure and Privileges, and 4) Consolidation Bills. Other committees may be established by a resolution of one or both of the Houses of the Oireachtas. Committees are empowered to request official papers and to hear evidence from individuals, although their findings are not binding. The reports of the Committees are laid before the Oireachtas, which decides if any action is necessary. It is a matter for the Oireachtas to decide the number and range of Committees that should be established, together with their terms of reference. The Committees that may consider matters relating to progress of gender equality efforts include the Public Accounts Committee (which focuses on ensuring that public services are run efficiently and achieve value for money) and the Joint Oireachtas Committee on Justice and Equality. For its part, the Select Committee on Budgetary Oversight has considered gender budgeting.
The National Strategy for Women and Girls 2017-2020 contains a provision for regular progress reports on implementation. Annual progress reports are presented to the Cabinet Committee on Social Policy and Public Service Reform. These reports are then disseminated to relevant stakeholders and published on the departmental website. The GED is responsible for regular reporting to the government and representative elected bodies through formal meetings, report drafting and (where required) public hearings.
Methods and tools
A wide range of gender analysis tools and processes exist in government, most of which have associated guidelines for their use and implementation. As there is no legal imperative to implement gender impact assessment processes (beyond the public sector duty), their impact is difficult to track and evaluate. Along with the lack of emphasis on accountability, the gathering of sex-disaggregated data is not strongly embedded in Irish political culture, impeding access to a clear gender picture at organisational and systemic level.
Nevertheless, a wide range of gender assessment tools are in use, with independent evaluations published that include examples of successful gender mainstreaming, gender monitoring and gender evaluation. These include reviews of budgetary policy , a social impact assessment of the female labour force and the national adult training authority (SOLAS) apprenticeship programme . Reviews and evaluations are available online. However, review of budgetary policy has yet to include gender equality budgeting that produces publicly available assessments of policy measures prior to and after implementation. Programme reviews and evaluations typically involve consultation with NGOs and civil society groups.
The National Strategy for Women and Girls 2017-2020 includes an action to build capacity within the civil and public service on gender mainstreaming and gender budgeting . Similarly, the public sector duty is based on the gradual systematic collection of data disaggregated by gender (and other equality elements). A proposed outcome of the National Strategy is a public service that demonstrably values diversity, is inclusive and representative of the wider population, promotes equality of opportunity and protects the human rights of its employees. Action 6.5 of the Strategy establishes that departments should develop in-house expertise in gender mainstreaming activities, including through interdepartmental seminars and provision of guidance and training materials. This is congruent with Section 42 of the IHREC Act 2014 that requires public bodies to have (evidence-based) due regard to equality and human rights (public sector duty) .
Equality budgeting was piloted for Budget 2018, which used gender as a primary axis of equality. In Budget 2019, the scope of the initiative was extended to other dimensions of inclusiveness, including poverty, socioeconomic inequality and disability, drawing on a broader range of national equality strategies.
Equality budgeting involves providing greater information on the likely impact of budgetary measures across a range of areas, such as income, health and education, and how outcomes differ according to gender, age, ethnicity, etc. Equality budgeting is intended to help policy makers to better anticipate potential impacts in the budgetary process, thereby enhancing the government’s decision-making framework.
Gender training and awareness-raising
Awareness-raising and training on gender equality are in place, while mentoring and resource-sharing of detailed guidelines and implementation templates are widely available. GED employees participate in ad hoc and voluntary gender equality training but this does not (yet) include all staff or high-level staff members. The National Strategy for Women and Girls 2017-2020 proposes that in order to comply with the expectations of mainstreaming, all staff in public services will require training in issues like unconscious bias, data collection and gender-proofing of all policies, including those related to rural communities and sustainable energy. The lack of action to date in this regard, however, is a weakness in Irish gender equality mechanisms, with changes evidently slow to develop.
The Central Statistics Office (CSO) online report ‘Women and Men in Ireland 2016’ illustrates their role in statistical management for the State, in particular the limited disaggregation of gender data to date . A new issue of this report is planned for publication in 2020 . The CSO reports that there is no clear responsibility in their organisation for the collection and management of sex-disaggregated data, despite the National Strategy for Women and Girls 2017-2020 referring to the importance of sex-disaggregated data. A major deficit in the current arrangements is the lack of clear responsibility and accountability for the production of sex-disaggregated data, with no institution explicitly charged with their production.
Effective gender mainstreaming requires data that are sex-disaggregated and thus enable policy makers to (objectively) see the different outcomes for women and men. An action has therefore been proposed to promote the collection of sex-disaggregated data . Action 6.13 of the National Strategy for Women and Girls 2017-2020 states the need to identify knowledge gaps in relation to gender inequality and to use this as a basis for improvements in the data infrastructure and analysis required to close those gaps. This Action aims to ensure that evidence generated through improved data infrastructure and analysis of gender inequality is then linked to relevant policies .
Barry, U. (2017). Gender equality and economic crisis: Ireland and the EU. In H. Bargawni, G. Cozzi and S. Himmelweit (Eds.), Economics and Austerity in Europe - gendered impacts and sustainable alternatives. London: Routledge.
Barry, U. and Feeley, M. (2016). Gender and Economic Inequality in Ireland. In Cherishing All Equally 2016 - Economic Inequality in Ireland. Report. Dublin: Think-tank for Action on Social Change (TASC).
Callaghan, N., Ivory, K. and Lavelle, O. (2018). Social Impact Assessment: female labour force participation. Dublin: Irish Government Economic & Evaluation Service.
Central Statistics Office (2016). Women and Men in Ireland 2016.
Committee on the Elimination of Discrimination against Women (2017). Concluding observations. CEDAW/C/IRL/CO/6-7.
Cosc (2016). Second National Strategy on Domestic, Sexual and Gender-based Violence 2016-2021. Dublin: Cosc.
Department of Justice and Equality (2017). National Strategy for Women and Girls 2017-2020: creating a better society for all. Dublin: Department of Justice and Equality.
Gender Recognition Act Review Group for the Department of Employment Affairs and Social Protection (2018). Review of the Gender Recognition Act 2015, Statutory Policy Review - Report to the Minister for Employment Affairs and Social Protection. Dublin: Ministry of Employment Affairs and Social Protection.
Irish Human Rights and Equality Commission (IHREC) (2017). Public Sector Equality and Human Rights Duty: eliminating discrimination, promoting equality and protecting human rights. Dublin: IHREC.
National Women’s Council of Ireland (NWCI) (2014). Toolkit: Gender mainstreaming in the health sector. Dublin: NWCI.
National Women’s Council of Ireland (2017). No Small Change: Closing the Gender Pay Gap. Dublin: NWCI.
Pegram, T. (2013). Bridging the Divide: The Merger of the Irish Equality Authority and Human Rights Commission. Dublin: Policy Institute University of Dublin, Trinity College Dublin.
Russell, H., McGinnity, F., Fahey, E. and Kenny, O. (2018). Maternal Employment and the Cost of Childcare in Ireland. Dublin: Economic and Social Research Institute (ESRI).
SOLAS (2018). Review of pathways to participation in apprenticeship. Dublin: Government of Ireland.
Swaine, S. (2017). Staff Paper 2017 - Equality Budgeting: Proposed Next Steps in Ireland. Dublin: Department of Public Expenditure and Reform.
Barry, U. (2017). Gender equality and economic crisis: Ireland and the EU. In H. Bargawni, G. Cozzi and S. Himmelweit (Eds.), Economics and Austerity in Europe - gendered impacts and sustainable alternatives. London: Routledge.
IHREC (2017). Public Sector Equality and Human Rights Duty: eliminating discrimination, promoting equality and protecting human rights. Dublin: IHREC.
Department of Justice and Equality (2017). National Strategy for Women and Girls 2017-2020: creating a better society for all. Dublin: Department of Justice and Equality.
Swaine, S. (2017). Staff Paper 2017 - Equality Budgeting: Proposed Next Steps in Ireland. Dublin: Department of Public Expenditure and Reform.
Russell, H., McGinnity, F., Fahey, E. and Kenny, O. (2018). Maternal Employment and the Cost of Childcare in Ireland. Dublin: Economic and Social Research Institute (ESRI).
. National Women’s Council of Ireland (NWCI) (2014). Toolkit: Gender mainstreaming in the health sector. Dublin: NWCI.
Department of Justice and Equality (2017). National Strategy for Women and Girls 2017-2020: creating a better society for all.
‘Gender proofing’ is less specific than ‘gender mainstreaming’ even if it has similar motivations. Something can be gender proof in isolation, while gender mainstreaming – by contrast - implies consistency and continuous commitment to gender equality and encompasses both gender impact assessment and gender budgeting.
Pegram, T. (2013). Bridging the Divide: The Merger of the Irish Equality Authority and Human Rights Commission. Dublin: Policy Institute University of Dublin, Trinity College Dublin.
ESRI and Parliamentary Budget Office (2018). The Gender Impact of Irish Budgetary Policy 2008-2018.
Callaghan, N., Ivory, K. and Lavelle, O. (2018). Social Impact Assessment: female labour force participation; Indecon Independent Review of the Amendments to the One-parent Family Payment since January 2012.
2SOLAS (2018). Review of pathways to participation in apprenticeship.
Department of Justice and Equality (2017). National Strategy for Women and Girls 2017-2020: creating a better society for all, p. 69.
IHREC (2017). Public Sector Equality and Human Rights Duty: eliminating discrimination, promoting equality and protecting human rights.
Central Statistics Office (2016). Women and Men in Ireland 2016.
Information provided by the interview respondents.
Department of Justice and Equality (2017). National Strategy for Women and Girls 2017-2020: creating a better society for all.
Department of Justice and Equality (2017). National Strategy for Women and Girls 2017-2020: creating a better society for all, p. 72. | <urn:uuid:cb190840-454e-42e8-b385-d09afdad3793> | CC-MAIN-2021-21 | https://eige.europa.eu/gender-mainstreaming/countries/ireland?lang=fr | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988955.89/warc/CC-MAIN-20210509032519-20210509062519-00417.warc.gz | en | 0.926014 | 4,379 | 3.421875 | 3 |
The history of journalism in American provides a comforting context to the recent development of fake news and the downturn of trust in the news media.
Misinformation has swayed, misled, and influenced our country on a grand (and small) scale over the history of our nation. However, critical thinking and accountability have always gotten the media back on track. We have survived, and thrived as a nation, in spite of missteps.
And we will again soon.
Currently, we are on the verge of self-correction. Fearmongers encourage a complete mistrust of the media or a self-filtering of messages that match our existing worldviews. Wisdom proffers that we will take a longer view of history and a wider view of truth, learning how to interpret information using logic.
For example, Yellow Journalism plagued our nation in the 1890s. Shortly after, a progressive era of objective journalism followed. The pendulum swung back, as it always does.
This article does not cover conspiracy theories, political opinions, or legal options. Also, I don’t offer solutions. I’m defining the terms and laying out a timeline of notable occurrences from a mainstream media point of view.
At over 5600 words — and covering over 200 years of history — this article still does not include every detail related to the topic. While I made an effort to be thorough and inclusive, I welcome discussion, stories, and counterpoints.
A Brief History of Fake News in America
Taken as a whole, America seems to swing on a pendulum that is anchored to the principle of a free press. Our country was born from a rebellion against monarchy and they preserved the right to freedom of speech and the freedom of the press within our first Amendment.
By its very nature, the act was meant to put a check on power. The press is supposed to act as a counterbalance to authoritarianism and it was through this principle that the early whispers of our country were first heard.
Political Publishers and Pseudonyms
1600s – 1775
Before the Revolutionary War, American newspapers spread controversial ideas among the colonies. The papers were all politically oriented, advocating for their viewpoints on a local level. Notably, Benjamin Franklin established himself not just as a printer but as a writer. This gave him influence in the founding of the forthcoming nation.
Master printers across the colonies started printing small, local newspapers to boost their income. Often, writers published under pseudonyms to avoid retribution from government officials or powerful businesses. Advertising funded the papers.
Readership was hard to determine because papers were circulated to many eyes beyond the original buyer. In general, the written word brought political arguments, and related news, to citizens as a form of both information and entertainment.
1690: In Boston, Benjamin Harris published the first edition of “Publick Occurrences, Both Foreign and Domestic.” Despite his trans-Atlantic connections, his newspaper was suppressed by Britain after a single edition.
1704: The Boston News-Letter became the first successful American newsletter. John Campbell, the local postmaster founded the paper and described it as “published by the authority.”
1721: Benjamin Franklin begins working for his brother, James Franklin on The New England Courant. Like most newspapers at this time, it followed party interests. Using the pseudonym Silence Dogood, Benjamin Franklin published letters in the newspaper. His brother didn’t even know it was him, initially.
1728: Benjamin Franklin takes over the Pennsylvania Gazette in Philadelphia. He begins writing under the pseudonym “Busy-Body” to advocate for printing more paper money. Since he hoped to expand his business through printing that money, it was in his special interest to influence others toward the idea.
1730: Slave rebellion rumors were commonly covered in Virginian newspapers. Although there was never a revolt, fake news circulated based on a false statement from Governor William Gooch.
1750: Franklin expands his newspaper franchise to 14 weekly newspapers in six colonies.
1754: The French and Indian War begins. Tensions rise in the Americas as they are forced to accommodate British troops.
1765: Britain attempted the Stamp Act of 1765. The tax burden hindered printers and they petitioned to curb the tax. Most newspapers began to lean toward patriot causes. Loyalist papers were often forced to shut down.
1770: The Boston Massacre kickstarts talk about American independence.
1775:The War for Independence begins. At the start of the war, 37 weekly newspapers existed. After the war, 20 continued and an additional 33 started.
During this time, popular topics include:
- Details of Military Campaigns
- Debating the Established Church
- Use of Coercion against neutrals and Loyalists
- The Meaning of Paine’s “Common Sense”
- Confiscation of Loyalist Property
Advertising and the Alien and Sedition Act
The new nation finds its footing by creating, then later repealing, laws regarding the press. These decisions, and the attitudes of the founding fathers, established America as a true Free Press nation.
This freedom allows for more creative, and capitalistic practices such as hoaxes, publicity stunts, and the spread of popular fiction stories. Newspapers began to branch out from military and political news to include features of interest and entertainment. Advertising continues to bolster printing and influences the content of papers.
1776: The Founding Fathers draft the Declaration of Independence.
1777: Congress adopts the Articles of Confederation, creating the early union of the colonies.
1782: At the end of the war, the combined circulation of American newspapers was about 40,000 copies per week. Readership was actually higher because people shared papers.
1783: American wins the war, confirmed by the signing of the Treaty of Paris.
1789: The Bill of Rights passes Congress.
1793: Debates about the French Revolution lead to hot discussions about nationalism. The blowback from Napoleon’s actions created an attitude among Americans with regard to how they saw their wars and the role of America within the western world. Thought pieces in newspapers, in addition to the coverage of this international event, shaped the policies of this young government.
1798: The Congress of the United States passed the Alien and Sedition Acts. These outlawed the publication of “false, scandalous, or malicious writing” against the government. Additionally, it was a crime to voice any public opposition to any law or presidential act.
The 1800s: The new republic sees about 234 newspapers. They are mostly partisan publications focused on Federalist or Republican points of view. For example, the duel between Alexander Hamilton and Aaron Burr was coaxed by local news sources.
1801: Alien and Sedition Acts are repealed.
1804: Lewis and Clark begin to explore the West.
Hoaxes, Beats, and Spreading Stories across the Frontier
1805 – 1860
During this era, newspapers spread across the growing nation. Whenever a new town was established, a new paper often followed. This led to some smart businessmen attempting to create news monopolies – or at least capture a corner of the market. Some publishers, such as James Gordon Bennett Sr. of the New York Herald, started politically independent papers to feed the desire for independent journalism.
Thus, the concept of “beats” became established, putting journalists on specific types of stories that would interest the public. Newsworthiness became more defined. Hoaxes, often wink-nods of creative writing, duped audiences.
1808: Slave trading ends.
1809: The fictitious Diedrich Knickerbocker is reported missing in several newspapers by author Washington Irving. It turns out to be a promotional hoax to garner attention for his upcoming book.
1812: The War of 1812 begins.
1820: Congress approves the Missouri Compromise, allowing some states to own slaves and others to be free.
1835: James Gordon Bennett, Sr. founded the New York Herald. It was the first politically independent newspaper with a staff that covered regular beats including business and wall street coverage.
1835: The “Great Moon Hoax” is reported by The New York Sun. In several articles, a real astronomer and his fictitious colleague stated that they had seen life on the moon. The articles attracted new subscribers. After a month, the paper admitted it was a hoax and people generally responded positively to the revelation.
1838: Bennett sends the first American foreign correspondent staff to Europe. He also sent reporters to key cities, including one to cover Congress.
1840: As Americans moved westward, presses expanded past the East Coast. Most new towns set up a weekly newspaper for homesteaders.
1841: The New York Tribune, edited by Horace Greeley, is founded. It had a professional news staff that often crusaded for the editor’s pet causes. Because they used the new technology of the linotype machine, their speed of publication helped them expand quickly and gain national prominence.
1846: The Mexican American War begins.
1848: America sees an influx of immigration as several European countries struggle with democratic revolutions.
1850: According to the census there were 1,630 party newspapers and 83 “independent” papers.
1851: George Jones and Henry Raymond found The New York Times. They aimed to provide balanced reporting and excellent writing.
1852: The Dread Scott decision clarifies that slaves were property with no rights.
1858: The Associated Press starts when Europe sends news through the first transmission on the trans-Atlantic cable.
1860: In general, editors were senior party leaders, and their loyalty was rewarded with lucrative postmaster positions.
Top publishers, who were nominated to the national ticket, included:
- Schuyler Colfax in 1868
- Horace Greeley in 1872
- Whitelaw Reid in 1892
- Warren Harding in 1920
- James Cox also in 1920
Panic and Pressure During Reconstruction
1860 – 1870
During this time of great unrest, newspapers played a critical role in both reflecting and creating public opinion. Abraham Lincoln was elected president as part of a wave of social change in the nation.
In light of this, the government again attempts to intervene in the free press. Border states were pressured to close papers after President Abraham Lincoln accused them of bias in favor of Confederate causes.
Stanley’s journey to find Livingston exemplifies the shift toward investigative reporting during this era.
New technology, such as the telegraph and more consistent mail service, made news spread faster, throughout the fast-growing nation. International news, human interest stories, and investigative reporting developed to a new level because reporters could more easily and speedily send updates.
1861: The Civil War begins and the use of the telegraph makes news stories more succinct. During this time, supporters of slavery would start false stories to stir up fear. False stories, that inspired violence, included rumors that African Americans were spontaneously turning white. These spread through the South and struck fear into the hearts of many white people.
1862: Lincoln issued the Emancipation Proclamation.
1863: The Battle of Gettysburg turns the war in favor of the Union.
1865: The 13th amendment ends slavery. During the same year, Abraham Lincoln is assassinated.
1868: The New York Sun started publishing the first human interest stories under the direction of Charles Anderson Dana. This is also the year that the 14th amendment is ratified making all people born in the U.S. citizens.
1869: The Transcontinental Railroad is completed, linking both coasts for not just travel but also communication.
1871: James Bennett of the New York Herald sent reporter Henry Stanley to Africa to find the missing David Livingstone. This created a new type of investigative journalism.
Starting a War with Words during the Industrial Revolution
1870 – 1900
Bad actors, most notably Hearst and Pulitzer, manipulate the free press for financial gain. As they both grew their news empires, the trustworthiness of news suffered. Concepts like fact-checking, sources, and reputation are discussed by both politicians and the public.
At this time, people begin to realize the power of the press both to do harm and do good.
Hearst, Pulitzer, and similar publishers changed their content to expand readership and increase ad dollars. The news became less political and partisan loyalties were traded for more entertaining topics like sports teams. They also hyped scandal to sell papers. In hindsight, muckraking and yellow journalism began during this time.
The five characteristics of Yellow Journalism By Frank Luther Mott are:
- Scare headlines in huge print, often of minor news
- Lavish use of pictures, or imaginary drawings
- Use of faked interviews, misleading headlines, pseudoscience, and a parade of false learning from so-called experts
- Emphasis on full-color Sunday supplements, usually with comic strips
- Dramatic sympathy with the “underdog” against the system.
1874: The New York Herald covers the fictitious story of the Central Park Zoo Escape where several animals, including a rhinoceros a polar bear, a panther, a Numidian lion, a pack of hyenas, a Bengal tiger, broke loose and killed people.
1876: Alexander Graham Bell invents the telephone.
1882: President Arthur signed the Chinese Exclusion act halting immigration under pressure from the public during the time. It’s one of the many signs that racial tensions, particularly toward immigration, continue to dominate public discussion.
1883: William Hearst starts the New York Journal
1896: Pulitzer founds the New York World.
1897: President McKinley is pressured to start The Spanish American War thanks to the yellow journalism of both Hearst and Pulitzer. Reporting on General Weyler’s activities in Cuba was sensationalized.
Examples of Falsehoods included:
- Hearst sent artists, such as Frederic Remington, to Cuba to paint and draw the atrocities. Remington reported that the stories were overblown. Hearst replied, “You furnish the pictures and I’ll furnish the war.” One painting, of an American woman being brutally searched by male Spanish security forces created an outrage.
- Remington faked a painting of the Rough Riders charging up San Juan Hill. He did not see the battle. They reenacted the scene for him.
1898: The Spanish American War begins.
1899: The “Great Wall of China Hoax” captures American attention with a story of a businessman who bid on the contract to demolish the structure. It’s reported widely by mainstream media.
Teddy Takes on the Free Press in our Expanding Nation
A consummate showman, President Theodore Roosevelt finds that the press loves a scandal more than they love him. Like Abraham Lincoln, he attempts to rein them in by pulling public favor to his side.
At the same time, journalists push the power of the free press further by forcing reform through investigating secrets and controversial stories. Several writers make names for themselves for exposing information and developing in-depth articles.
1901: President Theodore Roosevelt starts a war with the press. He demands that they cover his presidency favorably, and in return, they would get unprecedented access to his office. His goal was to force reform and boost his image by controlling the media.
His tactics included:
- Touring the country to promote favored legislation
- Courting the Washington press corps
- Upgrading the shabby White House press room
- Hosting informal press conferences during his afternoon shave
- Keeping tabs on photographers at his statements to give them good shots for the front page
- Hiring the first government press officers
- Staging publicity stunts, such as riding 98 miles on horseback to prove the reasonableness of new Army regulations.
1903: Muckraking journalism spiked in January. Several reporters, Ida M. Tarbell (“The History of Standard Oil”), Lincoln Steffens (“The Shame of Minneapolis”), and Ray Stannard Baker (“The Right to Work”) published hit pieces in the same issue.
1906: Roosevelt uses the term “muckraker” to refer to unscrupulous journalists making wild charges.
“The liar is no whit better than the thief, and if his mendacity takes the form of slander he may be worse than most thieves.”Theodore Roosevelt
Also, The Jungle, by Upton Sinclair further drew scrutiny to journalists’ integrity and intent in publishing. His novel was meant to explore the exploitation of American immigrants. However, his descriptions of the meat-packing industry created scandal and outrage. He had seen disgusting activities working undercover and listed the practices in his fictional story.
“I aimed at the public’s heart, and by accident I hit it in the stomach.”Upton Sinclair
In this same year, Canadian inventor Reginald A. Fessenden makes a public radio broadcast.
1907: America peaks in European immigration with a massive flow of people, often through Ellis Island.
1908: The National Press Club forms and journalists are pushed to conduct themselves more professionally. The quality of writing became a focus in journalism, in addition to, the conduct of journalists.
1912: Columbia accepts $2 million from Pulitzer to create their school of journalism. At the same time, women are campaigning for suffrage and using the press to further spread their progressive ideas.
1913: Although President Theodore Roosevelt tried to sue several major papers, mostly for reporting corruption around the purchase of the Panama Canal rights, Freedom of the Press became a firmer principle.
Radio Personalities and International Propaganda during the Great Wars
1914 – 1955
Initially, propaganda wasn’t a dirty word. Much like the idea of “promotion,” governments would make a coordinated effort to influence public opinion. However, Americans fought back against the concept because it resembled the state-controlled media found in fascist countries. Even more, Americans clung to the concept of a Free Press to keep their government in check.
1914: America enters World War 1 and the U.S. government first flexes its ability to spread propaganda through the press. They promoted war bonds during World War I to stimulate the economy. Also, they promoted victory gardens and other wartime activities. After the war, public skepticism toward the Committee on Public Information causes the formal use of propaganda by the government to end.
1915: Several newspapers covered a story of an alleged “German Corpse Factory” spread saying that the German battlefield dead were rendered down for fats used to make nitroglycerine, candles, lubricants, human soap, and boot dubbing.
1917: America starts linking highways in a huge national effort to fix infrastructure.
1920: Broadcast journalism slowly begins to impact America through radio reports. In the same year, women win the right to vote with the 19th Amendment.
1928: The first mechanical TV station, W3XK airs its first broadcast under Charles Francis Jenkins.
1929: The Great Depression begins with the stock market crash.
1932: The Dust Storms begin on the Great Plains, increasing the suffering of Americans in the Midwest.
1933: Walter Duranty, the writer for the New York Times, reports that famine in Ukraine is false. Ultimately, his accusations against other reporters are found to be untrue and sympathetic to the Russian regime.
1935: President Franklin Delano Roosevelt creates the Work Projects Administration to turn around the Economy.
1939: America enters World War II. Without an official propaganda program, the government used programs like Writers’ War Board (WWB) and the United States Office of War Information to promote their messages regarding World War II. Core messages included making distinctions between German nationals and the Nazi party.
1947: Anti-communism propaganda during the Cold War was mostly managed by the Federal Bureau of Investigation under J. Edgar Hoover.
Fake News Stories during this time included:
- Discrediting communist sympathizers, such as H. Bruce Franklin
- Discrediting communist organizations like the Venceremos organization
- McCarthy’s Communist Witch Hunts
- And later Discrediting critics of Nixon and the Vietnam War, such as Ben Bagdikian
1950: America enters the Korean War.
1955: America enters the Vietnam War
Scandal and Betrayal Create a Cultural Revolution
For the first time, the press completely turned on the president when they uncovered the Watergate Scandal under President Nixon. Formerly, there was a certain sensitivity to the power of this institution. Investigative journalism, which had been suppressed during the World Wars, came back into vogue.
Furthermore, confidence in the American press became very high. They were seen as allies of the people, holding the government in check by bringing important information to citizens. Corrupt politicians railed against the press’s power and sought ways to undermine their authority.
1956: Confidence in newspapers shows that 66% of Americans thought newspapers were fair, including 78% of Republicans and 64% of Democrats, according to an American National Election Study.
1960: Bob Woodward and the Washington Post coverage of the Watergate scandal brought investigative journalism back into vogue.
1961: The Bay of Pigs invasion increases international tensions for the United States.
1963: Martin Luther King Jr. leads the March on Washington.
1964: 71% thought network news was fair according to a poll by the Roper Organization.
1969: Spiro Agnew, as Vice President to Richard Nixon, gives a landmark speech denouncing what he felt was media bias against the Vietnam War.
1972: 72% of Americans trusted CBS Evening News anchor Walter Cronkite according to one of their polls.
Celebrity Journalists, Culture Wars and Conservative Backlash
In a single century, technology leaps from print to the radio (1920) to television (1940) then the
internet(1995). The impact of storytelling and sharing information is enormous. The ever-escalating pace of reporting mirrors these changes.
As journalism grew from papers to radio to television, some journalists undermined the authority of the press by taking shortcuts or simply publishing fake stories. Although there were several notable instances, the American public was shocked at the behavior. With the people already jaded about corruption in government, the corruption of the press became a powerful symbol of the fall of truth in the postmodern era.
1980: Janet Cook publishes “Jimmy’s World” in the Washington Post. Her fraudulent story reported on an eight-year-old heroin addict.
1990: America enters the Gulf War.
1993: Michael Gartner resigns from NBC over a dateline segment depicting safety issues with GM pickup trucks. The segment staged footage for the broadcast but neglected to note that it was a dramatization.
1995: The term “infotainment” begins to be used to describe the practice of picking news topics to attract viewers (and thus, higher ratings). It’s a departure from serious news and analysis that was the mainstay of programming.
“Imitating the rhythm of sports reports, exciting live coverage of major political crises and foreign wars was now available for viewers in the safety of their own homes. By the late-1980s, this combination of information and entertainment in news programmes was known as infotainment.”Barbrook, Media Freedom, (London, Pluto Press, 1995
1995: USA Today became the first newspaper to offer an online version of its publication. CNN launched a site later that year.
1996: Nicolas Negroponte’s book Being Digital predicts a world where news is consumed through technology and the experience is personalized to the reader’s preferences and behavior.
1997: According to Gallup Polls, American’s confidence in mass media to report the news accurately begins to wan.
1998: Mike Barnicle resigns from the Boston Globe over allegations that he fabricated a story about a donation to cancer patients. He used anonymous sources and was unable to track back all of his facts to the actual accounts.
The News Goes Online
Technology again changes the way people get their news as the internet provides both a new means of distributing stories and a way of monetizing readership through online ads. Print papers suffer as they struggle to compete with online media outlets. Similarly, television feels the impact of streaming, pirating, and blogging. The publications that survive are the ones that reinvent themselves as media companies that learn to tell stories in a variety of formats for these new audience habits.
The launch of social media (Facebook in 2004 and Twitter in 2006) alongside the rise in internet content reshapes the news climate.
Trustworthiness in news continues to sink as people struggle to separate ethical and unethical content creators. Everyone can play a journalist by simply typing. As a result, Americans adjust to the new normal of a constant cycle of information, some true and some false.
2000: Internet-based “free” news based on a model supported by online advertising overwhelms and undercuts daily newspapers. This affects not only the number of newspapers in the U.S. but also the practices of editors and journalists.
2001: The War in Afghanistan begins.
2002: The U.S. Department of Defense launched the Pentagon Military Analyst program to the desired information about Iraq to news outlets. In 2008, The New York Times revealed the program resulting in a new ban against permanent domestic propaganda. Other propaganda during this time included programs like The Shared Values Initiative.
2003: The Iraq War begins. Around this time, Jayson Blair, a journalist, resigns from the New York Times after revealing he made up or plagiarized articles. Suspicious articles, with inconsistencies and errors, from his tenure include:
- October 30, 2002,” US Sniper Case Seen as a Barrier to a Confession”
- February 10, 2003,” Peace and Answers Eluding Victims of the Sniper Attacks”
- March 3, 2003, “Making Sniper Suspect Talk Puts Detective in Spotlight”
- March 27, 2003, “Relatives of Missing Soldiers Dread Hearing Worse News”
- April 3, 2003, “Rescue in Iraq and a ‘Big Stir’ in West Virginia”
- April 7, 2003, “For One Pastor, the War Hits Home”
- April 19, 2003, “In Military Wards, Questions and Fears from the Wounded”
Many believe that his mental health issues during this time contributed to the false stories.
2003: Rick Bragg receives criticism for not crediting the intern he sent to research a story. Instead, he wrote the piece like first-hand reporting. In his defense, he revealed that this is common practice among some busy journalists.
2004: Jack Kelly, the writer for USA Today, resigns over assertions that he plagiarized and faked interviews.
2004: Dan Rather, Mary Mapes, and other journalists take heat over their reporting that George W. Bush received preferential treatment in his military service. Reports asserted that key documents may have been edited or forged. Several key personnel resigns and Rather’s retirement followed shortly after.
2005: Mitch Albom and several other reporters are suspended from the Detroit Free Press over allegations that they fabricated parts of a story, saying players attended a game when they did not.
2006: Ben Domenech resigns from his position at the Washington Post over plagiarism accusations. Since then, he has been linked to several hoaxes and scandals.
2008: Scott McClellan, George W. Bush’s press secretary reveals that he regularly passed misinformation to the media.
2008: The fallout at Reuters over Adnan Hajj’s photographs concludes. Digital manipulation resulted in a photo editor’s firing and the images being removed from services.
2008: 45% of Americans say they have no confidence in the press, according to Ladd.
News Trends and Social Media
The innovation of social media again disrupts the news cycle. Stories can gain more exposure through the power of people’s interest. However, silos of information also develop as the platforms encourage users to self-select only the points of view that they favor. Sensitivity to media bias reaches an all-time high.
Conspiracies spread quickly within friendly audience bubbles. Journalists struggle to separate themselves from opinion writers – and even content creators generating fake information. Trust in information continues to decline and Americans begin to question even the most established media sources.
2011: Johan Hair, the journalist, receives criticism for attributing other interview quotes as his own work. The story gains widespread attention as he wrote for several prominent news outlets. He is forced to return his Orwell Prize.
2011: The meaning of the word “troll” shifts in the definition. Originally, it described people who would go on the internet to pick fights (sometimes even about deeply trivial topics). Now, it is linked to fake news as internet trolls often pass along false information that can gain traction over time.
2012: Mike Daisey’s story about working at Apple is aired as part of an episode of This American Life. Several factual errors and exaggerations have been criticized as untruthful portions of his monologue and his book.
2013: A 59% majority reported a perception of media bias.
2013: Lara Logan leaves CBS news over a clash with the U.S. Government over her source’s story for a 60 minutes report.
2014: The Russian government uses disinformation to create a counternarrative to the crash of Malaysia Airlines Flight 17, saying that Ukrainian rebels shot it down. The story spreads into the U.S. because of the high-profile story. Similarly, disinformation about their Invasion of Crimea begins to circulate onto American websites.
2014: Fareed Zakaria faces criticism for improper citations in several of his written works. Warnings and disclaimers are added to his content on several news sites.
Fake News Makes Big Bucks
Americans become deeply disturbed at their ability to be manipulated by false news reports. Pay-per-click advertising attracts even more bad actors to join, and profit, from the spread of misinformation. Additionally, social media giants are forced to acknowledge that they allowed foreign governments to generate and spread fake news stories with the aim of disrupting American democracy.
The late Paul Horner discusses why he creates fake news stories.
As a result, everyone from politicians to publishers to social media platforms is looking for ways to hold content creators accountable for the truthfulness of the information they spread. Most people are aware that fake news is everywhere. The problem is that they still struggle to identify it.
2016: Courts find Sabrina Rubin Erdely guilty of defamation with actual malice in a lawsuit brought by University of Virginia administrator Nicole Eramo. Erdely was found personally responsible for $2 million in damages. The suit was brought on the basis of the Rolling Stone report “Rape on Campus.”
2016: Facebook begins to see criticism for its role in contributing to the distribution of fake news.
2016: NATO claims that Russian propaganda (through fake news stories) has risen sharply.
2017: Kevin Deutsch fights allegations that he faked sources for his news articles and books related to coverage of crime.
2017: Claire Wardle of First Draft News identifies seven types of fake news:
- satire or parody (“no intention to cause harm but has the potential to fool”)
- false connection (“when headlines, visuals or captions don’t support the content”)
- misleading content (“misleading use of information to frame an issue or an individual”)
- false context (“when genuine content is shared with false contextual information”)
- imposter content (“when genuine sources are impersonated” with false, made-up sources)
- manipulated content (“when genuine information or imagery is manipulated to deceive”, as with a “doctored” photo)
- fabricated content (“new content is 100% false, designed to deceive and do harm”)
The public slowly becomes aware that fake news is spreading rapidly through social media. Popular stories, that were picked up by mainstream news sources, include:
- The pope endorsed Trump for President
- The Sandy Hook shooting was staged
- The earth is probably flat
- Obama controls the weather
- A man stopped a robbery by quoting Pulp Fiction
- Donald Trump sent his personal jet to assist stranded marines
- Hillary Clinton and John Podesta operated a pedophile ring out of a local pizza parlor
- James Comey received millions of dollars from the Clinton Foundation
- Melania Trump is using a body double
- Roy Moore’s accuser was arrested for false testimony
- Donald Trump won the popular vote in 2016
- Seth Rich was murdered by the Democratic National Committee DNC
- Alleged identities of the Las Vegas Shooting gunman
- Black Lives Matter blocked hurricane relief
- Robert Mueller is a pedophile
2017: One of the major sources of pro-Trump fake news stories is linked to the city of Veles in Macedonia. Seven different fake news stories, employing hundreds of teenagers were creating false news content for U.S. companies.
Additionally, almost 30% of the spam and content spread on the internet originates from these software bots.
2018: Google launches their Google News Initiative (GNI) to fight the spread of fake news. They aim “..to elevate and strengthen quality journalism, evolve business models to drive sustainable growth and empower news organizations through technological innovation.”
The History of Fake News in the U.S.A.
Upon putting this history of fake new timeline together, this piece feels like it ends on a cliffhanger. The next part of this story depends on us, the audience as we sift through information critically and carefully (but not conspiratorially). Critical thinking and accountability have always gotten the media back on track. That’s because the press isn’t the enemy of the people — it’s the voice of the people.
If you enjoyed this article, make sure you follow me on Instagram @verdera.me. You can join this conversation there or through the comments below.
If you have a story from history that you think should be added, please reach out to me.
Defining the Terms
Currently, “Fake News” is a term with a broad meaning. People use it to label everything from intentional misinformation to opinion editorials. Below are some of the terms that are used when news and information fall short of journalistic principles.
- Propaganda: Merriam Webster defines this as, “the spreading of ideas, information, or rumor for the purpose of helping or injuring an institution, a cause, or a person” or “ideas, facts, or allegations spread deliberately to further one’s cause or to damage an opposing cause.”
- Media Agenda: This is a slang term used to reference Agenda Setting theory. In communication theory it is explained as, “Media influence affects the order of presentation in news reports about news events, issues in the public mind. More importance to a news = more importance attributed by audience.”
- Media Bias: Wikipedia summarizes the term well saying, “(Media Bias) is the bias or perceived bias of journalists and news producers within the mass media in the selection of events and stories that are reported and how they are covered. The term “media bias” implies a pervasive or widespread bias contravening the standards of journalism, rather than the perspective of an individual journalist or article.”
- Fake News: In general, this is false information that appears to be legitimate news. Claire Wardle’s 7 types of Fake News define this term more clearly.
- Opinion Leadership: As a marketing and business term, this refers to “Influential members of a community, group, or society to whom others turn for advice, opinions, and views.” and “Minority group (called early adopters) that passes the information on new products (received from the media) to less adventuresome or not as well informed segments of the population.”
- Yellow Journalism: The Encyclopedia Britannica describes, “…the use of lurid features and sensationalized news in newspaper publishing to attract readers and increase circulation. The phrase was coined in the 1890s to describe the tactics employed in furious competition between two New York City newspapers, the World and the Journal.”
- Muckraking: This encompasses journalists who, “…search for and expose real or alleged corruption, scandal, or the like, especially in politics.”
- Libel: As a legal term, libel refers to, “…the written or broadcast form of defamation, distinguished from slander, which is oral defamation. It is a tort (civil wrong) making the person or entity (like a newspaper, magazine or political organization) open to a lawsuit for damages by the person who can prove the statement about him/her was a lie.” | <urn:uuid:884b15d9-06e5-40ac-9099-25c48530339f> | CC-MAIN-2021-21 | https://verderamade.com/2019/08/24/a-brief-history-of-fake-news/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00535.warc.gz | en | 0.9494 | 7,739 | 3.09375 | 3 |
Shotguns come in a wide variety of bore sizes, ranging from 5.5 mm (.22 inch) bore up to 5 cm (2.0 in) bore, and in a range of firearm operating mechanisms, including breech loading, single-barreled, double or combination gun, pump-action, bolt-, and lever-action, semi-automatic, and even fully automatic variants (Source: https://en.wikipedia.org/wiki/Shotgun). We have come a long way from the simple “Thunder Pipe” to some pretty extravagant offerings.
Of course, the Federal Government just had to define exactly what a shotgun was so that we common folk would recognize one when we saw one. The United States Code (18 USC 921) defines the shotgun fairly broadly as "a weapon designed or redesigned, made or remade, and intended to be fired from the shoulder, and designed or redesigned and made or remade to use the energy of the explosive in a fixed shotgun shell to fire through a smooth bore either a number of ball shot or a single projectile for each single pull of the trigger." It is even more broadly defined in English law: "a smooth bore gun not being an air gun" (s.1(3)(a) Firearms Act 1968).
The shotgun has been used in war and peace, and it continues to serve today. Many sport shooters and hunters use the ubiquitous shotgun to take clay and game, while military and law enforcement have also seen its use over many years. The shotgun is an all-around favorite for home and personal defense, and the number of shotguns intended for that purpose is steadily on the rise, with the 12-gauge shotgun being the most popular.
Of the types of shotguns (semi-automatic, single-shot, double-barrel, lever, bolt, etc.), the pump-action is the most relied upon, and Mossberg has been among the leaders, though many makers came and went before any of the current leaders arrived.
The first pump action patent was issued to Alexander Bain of Britain in 1854. Since then it has been the Bain of our shotgun existence (sorry for the pun, I was just pumping myself up for the rest of the article).
The cycling time of a pump-action is quite short. The manual operation gives a pump-action the ability to cycle rounds of widely varying power that a gas- or recoil-operated firearm would fail to cycle, such as less-than-lethal rounds. The simplicity of the pump-action relative to a semi-automatic design also leads to improved durability and lower cost. It has also been noticed that the time taken to work the action allows the operator to identify and aim at a new target, avoiding a "spray and pray" usage.
An advantage of the pump-action over the bolt-action is its ease of use by both left- and right-handed users: like lever-actions, pump-actions are frequently recommended as ambidextrous in sporting guidebooks. However, most are not truly ambidextrous, as the spent casing is ejected out the right side of the receiver, and the location of the safety button is not conducive to left-handed operation in most designs.
The Mossberg 500 line of shotguns has been popular ever since 1961, when it first arrived on the scene, and it is the top-selling shotgun line in the world, followed only by Remington. Of the Mossberg 500 line, the 12-gauge is the most popular gauge for many uses, including home defense. For home defense, the Model 500 and Model 590 in 12-gauge normally fill the bill. These Mossberg shotguns are not your granddaddy's shotgun; they are specialty shotguns, although many hunting shotguns have filled numerous roles in defending one's castle and one's self. The shorter barrels and greater shell capacity set these specialty shotguns in a category all their own. With that said, there are many who cannot handle the noise, weight, and recoil of these shotguns, or just plain don't want to for whatever reason. In these cases, there is a need for a small but nearly as effective shotgun with relatively low recoil, less weight, and lower noise. The 20-gauge shotgun fits this bill exactly and is touted by experts as the next-best shotgun if one cannot (or does not like to) handle the 12-gauge versions.
In the beginning, the 20-gauge shotgun was simply an imitator of its larger brother, the 12-gauge, and served for taking game of the winged and four-footed variety. However, due to demand from customers, the 20-gauge has also fallen in step with specialty shotguns of larger gauge. "Tactical" and "Security" are popular words to toss about these days, and so "Tactical" shotguns in 20-gauge are readily available from Mossberg, Remington, and others to fill a market void. The Mossberg 500 Tactical 8-shot 20-gauge shotgun is one of those shotguns. If you are not familiar with the Mossberg line of specialty shotguns in 20-gauge, perhaps I can help you out with that by showing you the basics and then discussing the particulars. While I don't expect you to just run out and get one, it might be a shotgun to consider if you feel that the 12-gauge has taken enough toll on your shoulder and you need an excellent shotgun for home defense (HD) or personal defense (PD) purposes.
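Since the whole case for the 20-gauge rests on it being a smaller, softer-shooting bore than the 12, it is worth recalling what the gauge number actually measures: an n-gauge bore has the diameter of a round lead ball weighing 1/n of a pound. Here is a minimal sketch of that arithmetic (assuming a lead density of roughly 0.41 lb per cubic inch), just to show how the two bores compare:

```python
import math

LEAD_DENSITY = 0.4097  # lb/in^3, roughly 11.34 g/cm^3

def bore_diameter(gauge: float) -> float:
    """Diameter (inches) of a round lead ball weighing 1/gauge pound."""
    ball_volume = (1.0 / gauge) / LEAD_DENSITY        # volume in cubic inches
    return (6.0 * ball_volume / math.pi) ** (1 / 3)   # solve V = (pi/6) * d^3 for d

for g in (12, 20):
    print(f"{g}-gauge bore: {bore_diameter(g):.3f} in")
# 12-gauge bore: 0.730 in
# 20-gauge bore: 0.615 in
```

Those nominal figures (0.729 inch for the 12-gauge versus 0.615 inch for the 20) are why the 20-gauge throws a lighter payload with noticeably less recoil and weight.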
I used to carry a Mossberg 500 slide-action shotgun when I was working as a LEO in a lifetime past. I still have that shotgun, Boo-Boo as it's called, and Boo-Boo has seen quite a few changes since its LEO days but is still a very effective shotgun. Due to some personal injuries and such, even Boo-Boo was getting harder to handle, and I had to force myself toward something else. I was, however, a grown-up man and needed a grown-up shotgun. I reluctantly tried my first 20-gauge, a Mossberg 500 "Security" model, and have never looked back. Baa-Baa, as it is called, was labeled a "security" model since it had a black composite stock and forearm, but other than that Baa-Baa acts like any other Mossberg 20-gauge shotgun. I have added a few things to make Baa-Baa more "practical tactical," like a side-saddle shell holder, a butt-stock shell keeper, a fiber-optic front sight, a red/green dot optic, and a sling, but Baa-Baa is still a 5+1-shot pump-action shotgun any way that you look at it. Since then, I have also availed myself of a couple of Remington 870 20-gauge models, but I have always liked the Mossberg for one reason that I'll get into in a bit. So, let's get on with it and see what the Mossberg 500 Tactical 8-shot 20-gauge shotgun is all about.
FIT, FINISH, AND OVERALL APPEARANCE:
The Mossberg 500 Tactical 8-shot 20-gauge shotgun is nicely fitted together from lock, to stock, to barrel.
The overall finish is a matte-blued finish and is a departure from other models that have pretty nice and shiny bluing (like Baa-Baa, my Mossberg 500 20-gauge “Cruiser/Security” model). But, since this is a “tactical” shotgun, the non-reflective finish is desirable by most tactical types. With that said, the finish is not unlike that on my Remington 870 shotguns.
The overall appearance is that the Mossberg 500 Tactical 8-shot 20-gauge shotgun looks like a long-barreled shotgun in which the normally smooth lines of a shotgun is interrupted with a tall front and rear sight. The synthetic furniture adds to the “tactical” look that is popular these days and contributes to the “Evil Black Gun” mindset that is also as popular these days with those who thinks that looks make a firearm more dangerous than not.
While most standard shotguns come with no rear sight and a bead front sight, the Mossberg 500 Tactical 8-shot 20-gauge shotgun comes equipped with a fully-protected “ghost” rear sight that is adjustable for both windage and elevation. I have used the wide-aperture “ghost rear sight with my Mossberg 590A1, M1A, and others and it is my favorite sight for old eyes. Both the rear and front sight is set up like the sights on the 590A1 shotgun and is something that I am used to looking through. Lining up with that rear sight is a very well-defined, fiber optic front sight that just pulls your eye to it. Aiming is quite quick with this setup, and Martha, you do need to aim a shotgun at times.
In addition, the top of the receiver is drilled and tapped for mounting an optic rail. You (I) would; however, have to remove the rear sight to do so.
The one thing that you might notice about the Mossberg 500 Tactical 8-shot 20-gauge shotgun is the 20-inch smooth bore barrel. Most “Tactical” shotguns these days normally come with an 18.5-inch barrel, which makes them so darn delightful to throw around in tight spaces. A 20-inch barrel is just as delightful and the extra 1.5 inches does not hurt a thing – especially when adding a foot or two of velocity to a 5/8-ounce Foster slug or some 3-inch magnum #2 or 2-3/4-inch #3 buck shot. You also don’t realize just how much an extra 1.5-inch adds to your sight picture and subsequent aim.
Within the receiver lies all of the magical innards that make things happen when chambering, firing, and ejecting a shell. Double lifts ensure proper lifting of a shell from the tube magazine and alignment for positive feeding of that shell into the chamber. A double-hook arrangement on the pump handle (see, THAT RATCHETY THINGY) pulls the bolt evenly into battery and release it just as well when the pump handle is returned to extract (with a dual extraction system) and positively eject the spent shell (or a loaded shell, for that matter, when manually unloading the shotgun by racking the action).
One of the many comments that I hear about the Mossberg shotgun is that it has an inferior aluminum receiver unlike a Remington that has a steel receiver. Balderdash! When the bolt locks up to the barrel it locks up on steel and the barrel is well seated within the receiver. I have never seen or heard of a Mossberg shotgun fail because of lock up. Yes, there are failures, but lock-up is not commonly one of them.
STOCK:The stock is one area that the Mossberg 500 Tactical 8-shot 20-gauge shotgun falls short – literally. The length-of-pull (LOP) is 12-inches when shipped with the standard butt pad. A spacer and additional butt-pad, shipped with the shotgun, allows you to adjust the LOP out to 13-inches, which is the same as on the AK and many other long guns. And, as with many other long guns, a 13-inch LOP is too short for me. My answer to this is quick and decisive – a slip-on Limbsaver recoil pad that adds an additional 1-inch of length to the stock and accommodates my fourteen-inch MLOP (Minimum Length of Pull) requirement. In addition, I get a little more help with recoil management since I am “double-padding” the butt end. Some experimenting with these spacers can help you to set up the shotgun for your particulars needs.
The stock is of synthetic material and exhibits a fine textured grip area for the hand. The grip also has enough depth to it for pulling the shotgun into the shoulder tightly. I find myself placing my thumb (I’m a left-handed shooter) on the left side of the receiver near the safety, or sometimes resting it on top of the receiver closer to the safety switch as I am shooting and using the fingers of the shooting hand to pull the stock into my shoulder.
There is an appendage (for lack of a better term) on the bottom of the stock for mounting a sling swivel and a Blackhawk 1-inch swivel work quite well with it. I will eventually put a sling on the shotgun, but the sling type is yet to be determined.
SAFETIES:One of my favorite features of the Mossberg shotgun is that the safety is on top of the receiver, which makes it accessible by the thumb of either hand when shooting. There are exceptions to the top-mounted safety on Mossberg shotguns and is one of the reasons that I did not seek out a SA-20 International semi-automatic shotgun by Mossberg; the safety is just behind the trigger on the trigger guard.
Being a left-handed shooter, positioning the safety on or off can be awkward in a time of need. With the Mossberg top-mounted safety, the position of the safety dictates what you want the round to do; push forward to see one leave the barrel, or pull rearward to keep one in the chamber. It is pretty simple, really. I will say; however, that the rear safety is stiff to operate when new, but breaks in nicely and has enough detent to to hold it in either position.
The Action Lock Lever (also known as; bolt release or Slide release) is on the left side to the rear of the trigger and is a handy place to put it for the left-handed shooter as it keeps the trigger finger out of the trigger guard while the middle finger is unlocking the action if need be.
THAT RATCHETY THINGY:
Obviously, the Mossberg 500 Tactical 8-shot 20-gauge shotgun is a pump-action shotgun and the composite forearm is what makes the pumping occur.
The forearm surrounds a full-length loading tube that holds 7 rounds very nicely.
My major gripe with the forearm is the lack of textured gripping surface. The sides of the forearm are textured very smoothly with the bottom of the slide handle being somewhat more aggressively textured but not heavily so. While I have had no problems working the slide while wearing gloves, a working hand heavy with perspiration may find a very slippery surface to work with. With that said, it is easy enough to swap out the pump handle with one of my liking.
There is a lot of play in the forearm; in fact, one could say that it is very sloppy. But, it is good play and ensures positive operation of the pump handle and the Mossberg 500 Tactical 8-shot 20-gauge shotgun does that quite well, thank you.
There is an end cap with a threaded hole for a sling swivel attachment (shipped with the shotgun) that allows you to remove the barrel from the gun. The feed tube and barrel are well supported together by a very finely-welded adapter at the muzzle end.
The trigger group is housed in a polymer trigger housing that is removable for maintenance and inspection with one pin. The trigger itself; however, is metal and don’t expect a trigger like a hunting rifle or a tuned 1911 pistol – or even an un-tuned 1911 trigger. The trigger is heavy (weighing in at almost the weight of the shotgun) and gritty when pulled slowly. With that said the trigger on a shotgun is meant to be snapped rearward and not squeezed. If you pull the trigger like a shotgun’s trigger is meant to be pulled, the trigger does just fine thank you; the break is crisp and positive.
But, weight! The Mossberg 500 Tactical 8-shot 20-gauge shotgun weighs only 6.75 pounds unloaded. With a 20-inch barrel, and 39” overall in length with standard butt pad (with the spacer, butt-pad, and the Limbsaver slip-on recoil pad, the length is 40 ½”), the Mossberg 500 Tactical 8-shot 20-gauge shotgun is a mighty easy shotgun to throw around or carry for an extended period of time while on perimeter watch.
To test out the Mossberg 500 Tactical 8-shot 20-gauge shotgun, a Foster-based Remington 2-¾”, 5/8-ounce Slugger rifled ammunition was going to pass down the barrel. At a Muzzle Velocity of 1580 fps and a Muzzle Energy of 1513 ft. lbs we are talking about some .628 caliber business taking place.
From the muzzle, the Remington 20-gauge Slugger drops about 0.5-inch at 25 yards, and this is the limit of my indoor range. Now, let’s put that into perspective. For my intended use of this shotgun, 25 yards is more than adequate and I would suspect that 15 yards would be more realistic. Since I have adjustable sights on the Mossberg 500 Tactical 8-shot 20-gauge shotgun, I checked zero at 15 yards with the Remington 20-gauge Slugger ammunition and then moved the target in to 10-yards for the rest of the session. With this ammunition, even a one-inch deviation from zero will have a definite impact on the target. Also with this ammunition, I can aim 2-inches high from CM to get a CM hit at 50 yards (assuming everything would be perfect, of course). At close range, the Remington Slugger is pretty devastating to a target – even a paper one. There was no need to adjust the rear sight.
With my particular set-up, felt recoil is negligible considering that I am actually running two recoil pads (for length of pull). The stock is such that simply laying my cheek against it lined up the front sight right in the center of the rear ghost ring. Due to the lack of recoil and muzzle jump (as compared to a short –barreled 12-gauge), getting back on target for subsequent shots is very quick, and that can be very useful when confronting a person (or persons) bent on performing negative engagements with society – especially against you or yours.
The looseness of the pump handle simply faded from my mind as I worked the slide and the Mossberg 500 Tactical 8-shot 20-gauge shotgun functioned flawlessly with its dual lift gates and dual extraction system. Although I am right-handed, shooting the Mossberg 500 Tactical 8-shot 20-gauge shotgun (or any shotgun, rifle, or carbine for that matter) weak-side has its advantages; while the weak hand only has to work the safety, bolt release, and trigger the strong hand works the pump handle and takes on the loading/reloading task – which I find easier to do with my strong hand (my working hand, so to speak).
THE FINISHING TOUCH:The 20-gauge shotgun is a viable alternative to the 12-gauge for HD or PD work. The Mossberg 500 Tactical 8-shot 20-gauge shotgun is a light and maneuverable shotgun that can be easily brought to bear and can deliver some serious damage to one who thinks that a 20-gauge is a mere play toy. It is a positive performer with 3” Magnum or standard-length 2-3/4” slugs, 3” magnum #2, or 2-3/4” #3 Buck and is going to leave a mess when used.
Several experts, including Masad Ayoob, have endorsed the 20-gauge shotgun as the next best shotgun if one cannot handle the 12-gauge versions. Although the 20-gauge may not have the devastation of a 12-gauge, but it is pretty darn close with the right ammunition, the less trauma experienced when shooting the 20-gauge makes it a good choice for most shooters regardless of size. I know men that are much larger than I am and they wear by the 20-gauge; it is something that their spouses or children can also fire where they would rather be shot than operate a 12-gauge.
The Mossberg 500 Tactical 8-shot 20-gauge shotgun can be made to accommodate a wide range of shooters and their physical dimensions by experimenting with different stock spacing and butt-pad options.
When I was traveling to and from Atlanta, one of my 20-gauge shotguns was in the vehicle in case of a social encounter or as part of a Get Home tool. Whenever I travel out of town, a 20-gauge is with me, and there will be an adequate supply of slug and #3 Buck with it as well.
The MSRP on the Mossberg 500 Tactical 8-shot 20-gauge shotgun is $490, but can be found for less. If the Mossberg 500 Tactical 8-shot is not for you, Mossberg also makes several Tactical 6-shot versions and combinations that might work for you.
As far as accessories for this one; only a slip-on recoil pad and a good sling is necessary. I’ve burdened Baa-Baa with enough that I have learned to go as light as possible and the Mossberg 500 Tactical 8-shot will be that.
The reality is that I was looking for a basic shotgun with a few more features than a standard shotgun comes with and the Mossberg 500 Tactical 8-shot 20-gauge shotgun fills those wants and desires.
Mossberg 500 Tactical 8-shot: http://www.mossberg.com/product/500-tactical-8-shot-54300/ | <urn:uuid:c4e4c5f4-d898-472c-8a77-86a2baf7eb68> | CC-MAIN-2021-21 | https://guntoters.com/blog/2016/04/10/mossberg-500-20-gauge-tactical-shotgun-54300/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00497.warc.gz | en | 0.956501 | 4,628 | 2.65625 | 3 |
The Earl of Newcastle quickly decided on a course of action. The bulk of his army would advance on Tadcaster from the east, along the main road from York, and attack Lord Fairfax’s army at Tadcaster. Newcastle’s Lieutenant-General, the Earl of Newport, would advance towards Wetherby and then attack Tadcaster from the northwest. Fairfax’s force would be trapped between the two forces and destroyed.
Lord Fairfax was well aware of his isolated and precarious position. He gathered as many troops as he could at Tadcaster and made preparations to defend the town by building a redoubt on the crest of the hill above the east bank of the River Wharfe. This fortification defended the town from any attack from the direction of York. There were several houses close to the redoubt and these may also have been fortified, although contemporary accounts make no mention of it. Another possibility is that these houses were demolished and their rubble used in the construction of the earthwork.
During 6 December the Royalist forces began their advance. Late in the day Lord Fairfax called a Council of War. The Parliamentarian commanders decided that their position was untenable. Sir Thomas Fairfax, Lord Fairfax’s son, gives a figure of only 900 men for the force his father had available and these were opposed by over 6,000 well-equipped Royalist troops a few miles to the east. Newcastle was advancing with the main body of the army, which comprised the foot, artillery and a few troops of horse: some 4,000–4,500 men. Newport’s flanking force was formed from the bulk of the army’s horse and amounted to about 1,500 men, although one Royalist account numbers them at 15,000 – obviously a zero too many!
Lord Fairfax decided that the only course of action was to withdraw his army towards the west, in the direction of Leeds. On the morning of 7 December, a large proportion of Fairfax’s men were formed up on Tadcaster’s main street ready to march, when firing broke out on the opposite bank. Fairfax had left a rearguard to defend the redoubt and this was now being attacked by Newcastle’s foot. Withdrawal was no longer an option and Fairfax had to stand his ground. Reinforcements were rushed across the bridge to support the earthwork’s defenders and the Royalist attack was brought to a halt. A second Royalist attack developed from the north along Mill Lane and succeeded in capturing a house close to the bridge and cutting off the defenders of the redoubt. A Parliamentarian counterattack recaptured the house and drove the Royalists back along Mill Lane. To prevent the Royalists repeating their attack a number of houses were set on fire. For the remainder of the day the battle degenerated into a long-range musketry exchange.
Although Newcastle’s prompt attack had prevented the Parliamentarians from withdrawing, the second half of his plan did not come to fruition. Why did the Earl of Newport not strike the town from the northwest as he had been ordered? The probable reason is that his force was accompanied by a pair of light guns and these, in combination with the state of the roads in December, conspired to slow him so much that he was unable to reach the battlefield. In Drake’s History of York a much more interesting reason is given. Drake states that Captain John Hotham despatched a letter to Newport, under Newcastle’s signature, ordering him to halt and await further instructions. If there is any truth in this it would have been a brilliant stroke by Hotham but would have meant that the Parliamentarians would have had to be aware of Newport’s flank march.
Although Lord Fairfax had held his ground, he was still in a dangerous position. In a letter to Parliament he asserted that he could have continued to hold Tadcaster had he not been low on gunpowder – a curse of armies throughout the Civil Wars. Without powder his musketeers could not oppose the enemy and Fairfax had no choice but to withdraw. It is interesting that Fairfax decided to withdraw to Selby, while Captain Hotham withdrew to Cawood. This seems a little strange as it was taking Fairfax away from his main area of support in the West Riding but it moved him closer to Hull which, as has already been mentioned, was a major magazine and a ready source of supply for him.
On the morning of the 8th the Royalists occupied Tadcaster. Newcastle then moved his army south and garrisoned Pontefract Castle. He also set up several other small garrisons, including one at Ferrybridge, which effectively cut off Fairfax from the West Riding. Elements of the Royalist army, under Sir William Saville, captured Leeds and on Sunday 18 December moved on to attack Bradford. Heavily outnumbered by the Royalist troops, the ill-equipped citizens of Bradford held their ground around the church and, once they had been reinforced by a body of men from the Halifax area, drove the Royalists off and sent them scurrying back to Leeds. During the action a Royalist officer had asked for quarter but the citizens who were attacking him did not understand what the term meant and cut him down. This led to the ominous term ‘Bradford quarter’.
Several days after the attack on Bradford Sir Thomas Fairfax arrived at the town with reinforcements. He immediately put out a call for volunteers to carry out an attack on Leeds. By the morning of the 23rd he had gathered 1,200–1,300 musketeers and horse and a substantial body of clubmen – ill-armed local volunteers – possibly as many as 2,000. The town was defended by Sir William Saville who had 1,500 foot and five troops of horse and dragoons.
The course of the storming of Leeds is straightforward to trace on the ground. At the time Leeds comprised three main streets: the Headrow, Briggate and Kirkgate. All of these streets still exist. At the bottom of Briggate a bridge crossed the River Aire and the road continued on to Hunslet. All the exits to the town had been barricaded and an earthwork ran from close to St John’s church, across the Headrow and then down to the river.
Fairfax’s force approached the town along the Headrow and summoned Saville to surrender. When this summons was refused Sir Thomas began his assault. Fairfax attacked along the Headrow while Sir William Fairfax attacked the area around St John’s church. Neither of these attacks made much progress. Sergeant-Major-General Forbes had been despatched to attack the enemy earthwork where it approached the river, while Captain Mildmay had been sent on a more circuitous route to approach the town from the far side of the Aire and prevent any enemy escaping in that direction. Forbes, supported by musket fire from Mildmay’s men, managed to break into the town and was soon reinforced by Mildmay’s men who had stormed the defences of the bridge. The combined force then attacked up Briggate towards the Market Place which stood at the top of Briggate close to the Headrow. The success of this attack allowed the Fairfaxes to force their way into the town and Sir Thomas led a cavalry charge along the Headrow into the Market Place. Many of the Royalist garrison were killed or captured and some were drowned trying to swim the Aire. The survivors continued on to Wakefield but their arrival seems to have panicked the garrison of that town which promptly withdrew to Pontefract. A force of Parliamentarian troops from Almondbury, near Huddersfield, occupied Wakefield on 24 January 1643.
In the aftermath of the loss of Leeds and Wakefield, Newcastle pulled the bulk of his army back to York. Before he could turn his attention fully on defeating Lord Fairfax he had two tasks to carry out. The first was to escort an ammunition convoy from Newcastle-upon-Tyne and the Earl despatched James King, his Lieutenant-General, with a body of horse to carry out this mission. Sir Hugh Cholmley attempted to intercept the convoy at Yarm in North Yorkshire on 1 February. Cholmley was defeated and King moved on to deliver the precious gunpowder to the army at York. Cholmley’s defeat may have been one of the main contributory factors to his subsequent change of sides.
Newcastle’s second task was to secure the Queen after her impending arrival and aid her march to join her husband at Oxford. The Queen arrived at Bridlington Quay – the town and harbour were separate at this time – on 22 February. Newcastle immediately set off with a large force to escort the Queen to York but while she was awaiting his arrival she was still in danger. Several Parliamentarian ships arrived and began a bombardment of the town and the Queen and her ladies had to take shelter in a ditch. Help was at hand when the Dutch admiral, van Tromp, who had escorted the Queen’s ship from the Continent, threatened the Parliamentarian commander that his ships would engage if the Parliamentarian ships did not withdraw. This had the desired effect and Newcastle was able to escort the Queen to safety at York on 7 March.
On 25 March Sir Hugh Cholmley changed sides. His defeat at Yarm and the Queen’s arrival finally decided him on this course of action. His defection was a great boon to the Royalist cause and gave them control of much of the East Coast of Yorkshire. It also seems to have had an effect on the Hothams who began a correspondence with the Earl of Newcastle and became very uncooperative with Fairfax.
Fairfax found himself in an unenviable position. The main Royalist army was at York and considerably outnumbered his own force. The East Riding was now under Royalist control and, to his rear, the Hothams had withdrawn their troops into Hull and were refusing to cooperate with him. His main base of support was in the West Riding, around the mill towns and he took a decision to withdraw to Leeds. His first action was to call his son, Sir Thomas, from Leeds with a small force of horse and musketeers and a large body of clubmen. His plan called for Sir Thomas to carry out a diversionary attack on Tadcaster with the troops he had brought from the West Riding, while his father with the main force marched directly from Selby to Leeds. On the morning of 30 March the Fairfaxes put this plan into action.
The plan worked well. Lord Fairfax and his men arrived safely at Leeds while Sir Thomas drove the garrison of Tadcaster out of the town. He may have exceeded his father’s orders at this point, which may have been to demonstrate against the town, not to capture it. Unfortunately, Sir Thomas tarried in Tadcaster for too long and as he began to march up onto Bramham Moor a pursuing body of Royalist horse came into sight. The Royalists, under Colonel George Goring, comprised twenty troops of horse and dragoons, some 1,000 mounted men. To oppose them Fairfax had only three troops of horse, amounting to around 150 troopers. The rest of his force was made up of musketeers and a large body of clubmen. When attacked by horse it was usual for the pike to provide protection against them for the musketeers. As Fairfax had no pikemen with him his force was in considerable danger from the Royalist horse, particularly as they had to cross two large areas of open moor-land before they reached the safety of Leeds.
As Fairfax ascended the road onto Bramham Moor he had to pass through an area of enclosures. This was ideal terrain for his horse to hold up the larger enemy force, while his foot crossed the first area of open ground and reached the shelter of the next area of enclosures. Having held the enemy for what he deemed to be a sufficient amount of time, Fairfax pulled back his horsemen and set off in pursuit of his foot. Imagine his surprise when he found his foot were waiting for him and had not yet crossed the open ground. The Parliamentarian force continued to march westwards and Fairfax spotted the enemy horse on a parallel road several hundred yards to the north. The Parliamentarians successfully reached the next area of enclosures and continued onto the open ground beyond – Seacroft Moor. By now Fairfax’s men were beginning to straggle and Goring timed his attack perfectly. Although the pitifully small force of Parliamentarian horse attempted to protect the foot, the force was quickly broken as Goring’s horsemen mounted an unstoppable charge. Fairfax and most of his troopers were able to escape to Leeds but most of the foot were killed or captured. Sir Thomas summed up the action as ‘the greatest loss we ever received’.
The storming of Wakefield
After the defeat at Seacroft Moor, Lord Fairfax concentrated his men into two garrisons: Bradford and Leeds. It was during this period that one of the most mysterious battles of the Civil War in Yorkshire took place, at Tankersley, just off junction 36 of the M1. Little is written about this action, either by contemporary or modern authors, but it was a sizeable affair with up to 4,000 men taking part. A force of Derbyshire Parliamentarians marched north and were intercepted and defeated by a force of local Royalists. These Royalist troops may have been the advance guard of a planned advance into the south of the county.
The Earl of Newcastle still had one major task to perform before he turned his attention fully to defeating Lord Fairfax – the safe despatch of the Queen to the south. His first move was to lay siege to Leeds, but after a few days the Royalist army moved to Wakefield, where Newcastle left a garrison of 3,000 men, before moving into South Yorkshire. On 4 May Newcastle captured Rotherham. Accounts of the siege are contradictory – the Duchess of Newcastle’s account states that the town was taken by storm, while a letter from Lord Fairfax to Parliament states that the town held out for two days and then yielded. Fairfax goes on to state that the Royalists then plundered the town and forced many of the prisoners to join their army.
Two days after the capture of Rotherham the Royalist army moved on Sheffield but found that the town and castle had been abandoned by the garrison. Newcastle installed Sir William Saville as governor of the town and gave him orders to use the local iron foundries to produce cannon. The Royalists then spent the next two weeks consolidating their position in the south of the county until, on 21 May, Newcastle received startling news – Wakefield and the bulk of its garrison had fallen to the Parliamentarians.
Wakefield is one of the best examples of the storming of a town and is worth looking at in detail. Newcastle’s march into the south of Yorkshire presented the Fairfaxes with an ideal opportunity to strike back. Sir Thomas Fairfax gives the reason for the attack on Wakefield as an attempt to capture Royalist troops to exchange for the prisoners taken at Seacroft Moor. Prisoner exchanges of all ranks were a common occurrence during the Civil War. A good example of this is the case of Colonel George Goring. As will be described shortly, Goring was captured at Wakefield and remained a prisoner for almost twelve months. He was exchanged during the spring of 1644 in time to take part in the Marston Moor campaign. Many of the Parliamentarian troops captured at Seacroft Moor were not soldiers but clubmen – ill-armed local volunteers. On a number of occasions – Bradford, Leeds, Seacroft Moor and Adwalton Moor – Lord Fairfax used clubmen to supplement his limited supply of regular troops. As these men were agricultural workers and tradesmen their imprisonment had a major effect on the local economy of the areas from which they came. One of the reasons that Wakefield was chosen as a target was that Lord Fairfax had received intelligence that it was held by only 800–900 men, a serious underestimation of the garrison’s actual strength.
During the evening of 20 May a force of 1,500 men gathered at Howley Hall, near Batley, from the garrisons of Leeds, Bradford, Halifax and the hall itself It comprised 1,000 foot, probably all musketeers, and eight troops of horse and three troops of dragoons. The mounted troops were divided equally between Sir Thomas Fairfax and Sir Henry Foulis, while the foot was commanded by Sergeant-Major-General Gifford and Sir William Fairfax. There is no mention of any artillery being present, which is hardly surprising as this was a raiding force. Sir Thomas Fairfax had overall command.
The Parliamentarian force moved on Wakefield via Stanley, where they attacked the small garrison, capturing twenty-one prisoners in the process. They then moved on to Wakefield where, alerted by survivors from the Stanley garrison, the Royalist horse and musketeers were waiting for them, as Sir Thomas Fairfax reported:
About four a clock in the morning we came before Wakefield, where after some of their horse were beaten into the town, the foot with unspeakable courage, beat the enemies from the hedges, which they had lined with musketeers into the town.
The Parliamentarians first encountered a strong patrol of horse from the town which they quickly drove back. They then found 500 musketeers manning the enclosures outside the town and again, after a short fight, these were driven back. With the approaches to the town cleared the Parliamentarians could put their plan into action. It should not be imagined that Wakefield was a fortified town. Its defences were formed by the hedges and walls of the houses along its four main streets – Kirkgate, Westgate, Warrengate and Northgate. The end of each street was barricaded. Fairfax’s plan was to attack along two of these streets: Northgate and Warrengate. No account states who attacked along which street but it can be surmised from subsequent events that Sir Thomas Fairfax and Gifford attacked Warrengate while Foulis and Sir William Fairfax attacked Northgate. The reasoning behind this is that Sir Thomas and Gifford were the first to reach the Market Place and the route along Warrengate is shorter, and that Gifford was able to plant a captured gun in the churchyard to fire on the Market Place. If he had attacked down Northgate he would have had to cross the Market Place, which was full of Royalist troops, to get to the churchyard.
The Royalist defences held out for some considerable time: one and a half to two hours are mentioned by contemporary accounts. Sir Thomas Fairfax wrote two accounts of the action, one immediately after the battle and one in his memoirs many years later. In his memoirs he reports that:
After 2 hours dispute the foot forced open a barricade where I entered with my own troop. Colonel Alured and Captain Bright followed with theirs. The street which we entered was full of their foot which we charged through and routed, leaving them to the foot which followed close behind us. And presently we were charged again with horse led by General Goring, where, after a hot encounter, some were slain, and himself [Goring] taken prisoner by Captain Alured.
The account written shortly after the action is very much in agreement:
When the barricades were opened, Sir Thomas Fairfax with the horse, fell into the town, and cleared the street where Colonel Goring was taken, by Lieutenant Alured, brother to Captain Alured, a Member of the House.
It is interesting to note the difference in the ranks of the Alured brothers given in the two accounts. The first gives their ranks at the close of the Civil Wars while the second gives their ranks at the time of the action.
After a lengthy dispute, Gifford’s foot managed to break into the end of Warrengate and open the barricade. This allowed Fairfax to lead his four troops in a charge down the street which was packed with enemy foot. These were quickly dispersed. Fairfax was then counterattacked by a body of horse led by George Goring. The Royalist horse was defeated and Goring captured by Lieutenant Alured. During this phase of the fighting Sir Thomas became separated from his men:
And here I cannot but acknowledge God’s goodness to me this day, who being advanced, a good way single, before my men, having a Colonel and a Lieutenant Colonel (who had engaged themselves as my prisoners) only with me, and many of the enemy now between me and my [men] I light on a regiment of foot standing in the Market Place. Thus encompassed, and thinking what to do, I spied a lane which I thought would lead me back to my men again; at the end of this lane there was a corps du guard of the enemy’s, with 15 or 16 soldiers which was, then, just quitting of it, with a Sergeant leading them off; whom we met; who seeing their officers came up to us. Taking no notice of me, they asked them what they would have them do, for they could keep that work no longer, because the Roundheads (as they called them) came so fast upon them. But the gentlemen, who had passed their words to be my true prisoners, said nothing, so looking upon one another, I thought it not fit, now, to own them as so, much less to bid the rest to render themselves prisoners to me; so, being well mounted, and seeing a place in the works where men used to go over, I rushed from them, seeing no other remedy, and made my horse leap over the works, and so, by good providence, got to my men again.
Sir Thomas’s bravery can never be doubted but sometimes his common sense can be. This would not be the last time his impetuous courage would leave him stranded on his own in the midst of the enemy.
Gifford had continued his attack along Warrengate, bringing the captured cannon with him. As he reached the Market Place he realised that it contained three troops of enemy horse and a regiment of their foot, as Fairfax reported:
Yet in the Market Place there stood three troops of horse, and Colonel Lampton’s Regiment [foot], to whom Major General Gifford sent a trumpet with offer of quarter, if they would lay down their arms, they answered they scorned the motion; then he fired a piece of their own ordinance upon them, and the horse fell in upon them, beat them out of the town.
In his memoirs Fairfax mentions that Gifford set the cannon up in the churchyard. Having given the Royalist troops an opportunity to surrender, Gifford ordered his men to open fire and then ordered Fairfax’s rallied troopers to charge the enemy. This was the last straw. Those who could, escaped; the remainder threw down their arms and surrendered. By nine o’clock Wakefield was firmly in Parliamentarian hands. Accounts do not give figures for the dead and wounded but do give a list of captured men and material: thirty-eight named officers, 1,500 common soldiers, four cannon, twenty-seven foot colours and three horse cornets, along with weapons and a large amount of powder, ball and match. The weapons, powder and ammunition were a great boon to the Parliamentarian cause. In a letter to Parliament Lord Fairfax summed up the victory:
And truly for my part I do rather account it a miracle, than a victory, and the glory and praise to be ascribed to God that wrought it, in which I hope I derogate nothing from the merits of the Commanders and Soldiers, who every man in his place and duty, showed as much courage and resolution as could be expected from men.
How had this ‘miracle’ taken place? The Parliamentarian victory at Wakefield flies in the face of military wisdom. The victors had taken a town garrisoned by twice their number and had captured more prisoners than they had soldiers. There are a number of reasons for the Parliamentarian victory. Firstly, they attacked the barricades at the end of the streets, which meant that only a limited number of Royalist troops could defend at any given time. Once the attackers had penetrated the barricades, the enemy troops, packed in the streets behind, were unable to defend themselves, as was also the case with the troops packed into the Market Place. There also seems to have been a breakdown in the Royalist command structure, with troops standing still instead of reacting to the changing situation. One possible reason for this is given by Dr Nathaniel Johnstone, a contemporary who left the following anecdote:
There was a meeting at Heath Hall upon the Saturday, at a bowling, and most of the officers and the governor were there, and had spent the afternoon in drinking, and were most drunk when the town was alarmed. It was taken fully by nine o’clock in the morning, and more prisoners were taken than the forces that came against it. It seems probable that Sir Thomas Fairfax had notice of their festivities at Heath, and perceived the advantage which they might afford him.
It has been reported that Goring had arisen from his sick bed to lead the mounted counterattack but Johnstone’s account may give another reason for Goring being seen reeling in his saddle – he was still drunk. His later record would point to this being a strong possibility.
Whatever the reasons for the Royalist defeat, Sir Thomas and his men had won a remarkable victory. They had no plans to remain in the town and expected a rapid response from Newcastle. Sir Thomas led his men out of Wakefield and back to their garrisons, complete with the spoils of their victory. The Fairfaxes had their prisoners for exchange but we do not know whether this ever took place. | <urn:uuid:dc1b327d-dddc-4331-a6c3-5f03ecbef01c> | CC-MAIN-2021-21 | https://weaponsandwarfare.com/category/wars/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991288.0/warc/CC-MAIN-20210518160705-20210518190705-00457.warc.gz | en | 0.988167 | 5,376 | 2.796875 | 3 |
A few weeks ago I published two guides, one on 5G Radiation, and another on how to protect yourself from that radiation. In this guide, I want to dive deeper into what 5G towers are, how they work, how much radiation they emit, and much more.
Since 5G requires an entirely new infrastructure of towers, none of the current cell towers will be of use. The small-cell towers that will be installed on buildings, streetlamps, and just about anywhere else are going to be an enormous exposure risk when it comes to EMF radiation.
Let’s start off by talking about how 5G works, and the technology behind 5G towers.
5G Cell Towers – What They Are and How They Work
Alright, first let’s just have a conversation about what 5G towers are, how they work, and why they are different than what we see now.
So, your phone is currently (almost for sure) connected to a 4G network, which stands for the fourth generation of cellular networks. In order to be connected to that network, it must use radiofrequency waves to communicate with a nearby cell tower.
You’ve likely seen these cell towers around town, or on the side of a mountain, or maybe disguised as a tree. These towers, now sometimes called macro cell towers, are large structures capable of supporting many devices, on a wide spectrum of frequencies, over a large distance.
However, these towers are not at all capable of supporting the coming 5G network, primarily for two reasons. First, they are only capable of transmitting on a specific part of the radio spectrum (we’ll go in-depth on this topic in the section below). Second, the spectrum required for 5G is not particularly good at traveling large distances or penetrating things like trees and buildings.
Now, you may be asking yourself, “If 5G frequencies are so bad at penetrating buildings and traveling long distances, and aren’t supported by current cellular infrastructure, why do we need them?”
Well, we’ll get into this later, but the short answer is, 5G is faster, much faster.
In fact, 5G networks could be 100 times faster than your 4G connection, or even more. That has implications not only for your Netflix downloads, but also for the future of things like artificial intelligence, autonomous driving, machine learning, and much more.
So, getting back on topic, 5G is not able to utilize any of the current 4G infrastructure that has been developed over the last decade or so. Instead, these networks will require an entirely new armada of what are called “small cell sites.”
What Are 5G Cell Towers (Small Cells)?
You may actually have already seen small cells if you live in a larger city that has started rolling out 5G infrastructure. However, you may not even realize that you saw one. That is because 5G small cells are both extremely small and inconspicuous.
Usually located on light or electrical poles, the sides of buildings, under manhole covers, or just about anywhere else you could fit a backpack-sized box, 5G small cells will eventually be just about everywhere.
Take a look at the comparison below, showing a typical 4G tower as well as a 5G small cell attached to a light pole.
A small cell is essentially just a single node in a 5G network. However, small cells are probably the most crucial piece of the network, because without a large number of them the information would not be able to relay to its ultimate destination.
5G small cell towers require very little power, allowing them to be very small. However, these small cells use high-frequency millimeter waves, which have their own limitations.
Like I mentioned before, 5G frequencies are not very good at traveling far distances, or penetrating objects. So, you need a large number of small cells spread throughout an area in order to efficiently cover all users.
We’ll talk about this a little bit later, but as I mentioned in my primary guide on 5G, these small cells are relatively expensive to develop and install. So, having to completely replace 4G LTE infrastructure with 5G towers capable of supporting the new frequencies will be extremely expensive.
This means that you likely won’t see that “5G” displayed on your cell phone for quite a while if you don’t live in a larger city, where it’s more cost-effective to install this network.
Now that we know a little bit more about what these 5G towers are, let’s talk about how they really work.
How 5G Cell Towers (Small Cells) Work
Based on what you’ve read up to this point about 5G frequencies’ inability to travel far or penetrate, you may think there is something lacking in the technology of these towers. However, the small cells themselves are actually extremely advanced; it is only the frequencies being used that have the limitations.
In fact, it is only because of these extremely advanced 5G small cells that the network will be able to achieve the enormous speeds it promises.
There are a few technologies that allow 5G towers to work and to pass information along the network of nodes to larger towers.
Before we move on to some of the more specific technologies that allow 5G towers to work, take a quick look at the video below that will give you a rough idea.
So, as you saw in the video, these small 5G towers will allow for something called massive MIMO (multiple-input multiple-output). Basically, this means that instead of data passing in a straight line directly between a user and a tower on a single radio wave, data will be passed from a single device (your cellphone) to as many small cell sites as are in range and in direct line of sight.
This allows the data to pass much faster and more efficiently. However, when MIMO was first theorized, the problem was keeping all of the information being passed in order. So, to allow MIMO to work, an algorithmic technique known as beamforming was developed.
Beamforming basically just means that advanced math is used to constantly calculate the best route for the data arrays to travel between your device and all the 5G towers around you. The algorithm adjusts every few milliseconds to account for your location even as you move around.
Picture this: It’s 2016, and you’re connected to 4G on your cellphone as you travel down the interstate. You are directly connected to a single cell tower, whichever one (on your network) is closest and has the best connection. As you continue driving, your connection is passed from one cell tower to the next, every 10-20 miles. That is essentially what is currently happening: your phone simply connects, and stays connected, to whichever single tower gives it the best connection.
However, this system does not at all work for the 5G networks of the future. Instead, your cell phone will be constantly beaming data in multiple directions, to multiple 5G small cell towers, for optimal transfer of your data.
In order to keep all of this straight, an advanced algorithm must be used. This technique is known as beamforming.
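To make that a little more concrete, here is a minimal Python sketch of the core math behind beamforming. It assumes a simple uniform linear array; the 64-element count, half-wavelength spacing, and 28 GHz carrier are illustrative assumptions for this example, not the parameters of any actual small cell:

```python
import numpy as np

C = 3e8      # speed of light, m/s
FREQ = 28e9  # assumed mmWave carrier (28 GHz is one of the bands Verizon has named)
N = 64       # illustrative antenna count for a massive-MIMO panel

wavelength = C / FREQ
d = wavelength / 2  # half-wavelength element spacing, a common textbook choice
n = np.arange(N)

def steering_weights(angle_deg):
    """Per-element phase shifts that aim the combined beam at angle_deg.

    Each antenna's signal is delayed just enough that all N transmissions
    add up constructively in the chosen direction; this is the calculation
    a 5G panel would re-solve every few milliseconds as the user moves.
    """
    theta = np.radians(angle_deg)
    return np.exp(-1j * 2 * np.pi * d * n * np.sin(theta) / wavelength) / np.sqrt(N)

def gain_toward(weights, angle_deg):
    """Relative power the weighted array radiates toward angle_deg."""
    phi = np.radians(angle_deg)
    response = np.exp(1j * 2 * np.pi * d * n * np.sin(phi) / wavelength)
    return np.abs(np.sum(weights * response)) ** 2

w = steering_weights(30.0)                            # point the beam 30 degrees off-axis
print(f"toward 30 deg: {gain_toward(w, 30.0):6.1f}")  # ~64x a single antenna's power
print(f"toward  0 deg: {gain_toward(w, 0.0):6.3f}")   # almost nothing leaks this way
```

The design point is that the same transmit power gets focused into a narrow, steerable beam aimed at your device, rather than broadcast equally in every direction, and the panel recomputes those weights continuously as you move.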
These small 5G towers do run quite efficiently, though, as they are able to constantly adjust their power depending on whether or not they are currently in use. They can also increase or decrease their power output instantly to accommodate the needs of the data arrays.
Alright, now that we have a better understanding of what these small cells (sometimes called 5G towers) are, and how they work, let’s dive a bit deeper into the frequencies that 5G will utilize, and why this is important.
5G Tower Frequencies
Alright, I think it’s extremely important, if you want to understand how 5G works and how 5G towers are going to work, to understand the frequencies that they will utilize.
Just about all telecommunications, now and likely in the future, utilize radio waves. These frequencies range anywhere between 3 kilohertz (kHz) and 300 gigahertz (GHz).
Lifewire does a really good job talking about the breakdown of these frequencies, and why it matters for 5G, so I’ll let them explain:
Some examples of radio spectrum bands include extremely low frequency (ELF), ultra low frequency (ULF), low frequency (LF), medium frequency (MF), ultra high frequency (UHF), and extremely high frequency (EHF).
One part of the radio spectrum has a high frequency range between 30 GHz and 300 GHz (part of the EHF band), and is often called the millimeter band (because its wavelengths range from 1-10 mm). Wavelengths in and around this band are therefore called millimeter waves (mmW). mmWaves are a popular choice for 5G but also has application in areas like radio astronomy, telecommunications, and radar guns.
Another part of the radio spectrum that’s being used for 5G, is UHF, which is lower on the spectrum than EHF. The UHF band has a frequency range of 300 MHz to 3 GHz, and is used for everything from TV broadcasting and GPS to Wi-Fi, cordless phones, and Bluetooth.
Frequencies of 1 GHz and above are also called microwave, and frequencies ranging from 1–6 GHz are often said to be part of the “sub-6 GHz” spectrum.
Now that we understand that a bit better, let’s talk about why the frequency a 5G tower is utilizing really matters. Remember above, when I said that 5G frequencies are capable of being very fast but are not great at penetrating buildings or traveling large distances?
Well, for the most part, that is true, and that is because most 5G implementations are going to utilize these high-frequency, short-wavelength bands.
To summarize it simply: the higher the frequency, the faster the speeds, but the shorter the distance the wave can travel, and the worse it is at penetrating things.
The wavelength of the radio wave is inversely proportional to its frequency, which is important when we’re thinking about how 5G will work. So, a very low frequency might have a wavelength of thousands of miles, while a very high frequency, like those utilized by 5G, could have a wavelength of just a millimeter (hence the name, millimeter waves).
The longer wavelengths that come with lower frequency ranges are significantly more stable, which is why they are able to travel such far distances. Inversely, the extremely short wavelengths of 5G millimeter waves, are terribly unstable, which is why they can only travel short distances. If you tried to send these millimeter waves to current cell towers, miles away, they would be so distorted by the time they arrived they would be unreadable.
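To put some numbers on that inverse relationship, here is a quick back-of-the-envelope sketch. The formula is simply wavelength = speed of light / frequency, and the example bands are the ones the carriers have publicly mentioned (listed a little further down this page):

```python
C = 3e8  # speed of light, m/s

# Bands the carriers have publicly mentioned (see the carrier list below);
# the script itself just illustrates wavelength = C / frequency.
bands = {
    "T-Mobile low-band": 600e6,      # 600 MHz
    "Sprint mid-band": 2.5e9,        # 2.5 GHz
    "Verizon mmWave": 28e9,          # 28 GHz
    "Verizon mmWave (high)": 39e9,   # 39 GHz
}

for name, freq in bands.items():
    wavelength_mm = C / freq * 1000
    print(f"{name:22s} {freq / 1e9:5.1f} GHz -> {wavelength_mm:7.1f} mm")

# 600 MHz works out to ~500 mm (half a meter), while 39 GHz is ~7.7 mm --
# a true millimeter wave, which is why it is so easily blocked and scattered.
```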
Why Does It Matter What Frequencies 5G Towers Utilize?
It matters because different frequencies have different wavelengths, which allow for different uses. So, there isn’t exactly a “single 5G frequency”; the truth is that companies will use different parts of the spectrum depending on the specific use.
For example, some cities might use lower frequency bands that are better able to communicate with phones without losing power or distorting the data.
Whereas another company, working in a small city without distance, precipitation, or obstruction issues, might use a much higher-frequency, shorter-wavelength band to maximize speeds.
So, we don’t know for sure which parts of the spectrum different telecom companies are going to use, but we do have some idea.
These companies have all at least mentioned what they are planning on using.
- T-Mobile: T-Mobile plans to use the low-band spectrum of (600 MHz) as well as the mid-band spectrum.
- Sprint: Sprint is trying to set itself apart from the competition by using a multitude of frequencies, so far they’ve announced three spectrum bands: 800 MHz, 1.9 GHz and 2.5 GHz.
- Verizon: Verizon’s 5G Ultra Wideband network uses millimeter waves, specifically 28 GHz and 39 GHz.
- AT&T: AT&T’s deployment strategy is to use millimeter wave spectrum for dense areas and mid and low-spectrum for more rural areas, to allow for further distances.
Where Will 5G Cell Towers Be Located?
Like we’ve talked about above, 5G will require an entirely new infrastructure of small cell towers. That means that in the average city or neighborhood, you are likely to eventually see them just about everywhere: from the side of your local library, to the roofs of nearby stores, to the lamp pole just outside your home.
Take a look at this video to get a little better idea of how this is going to affect local neighborhoods.
Now, if what you got from watching that video is that there is no chance 5G is going to be dangerous, then you and I need to have a little talk, but we’ll get to that in a minute.
The importance of that video was to show you just how ubiquitous 5G cell towers will eventually be, and how this will affect our lives.
Basically, if you live in, or around, a larger city, you are likely to start encountering 5G small cell towers in the coming months. If you live in a more rural area, it will likely be a while, as the cost-to-income ratios don’t yet make sense for most cell carriers.
I also wrote up this full guide on exactly how to locate where 5G cell towers are near you.
Are There 5G Cell Towers In My City?
Now, this is something that you can actually already find out. The rollout of 5G can be broken down into three areas:
- Commercial 5G
- Limited Consumer 5G
- 5G Test Markets (Pre-Release)
Commercial 5G – This means that 5G is live and available to consumers and customers of the labeled carrier. Many cities around the world are already offering 5G to customers. To locate these cities and neighborhoods, use my guide here.
Limited Consumer 5G – There are many cities around the world that are offering limited 5G availability to customers. As an example, AT&T has limited 5G availability in San Francisco, San Jose, San Diego, and Los Angeles (and that is just California).
5G Test Markets (Pre-Release) – These are simply closed tests run by telecom companies before they offer even limited commercial or consumer coverage. For example, Videotron is testing pre-release 5G in Montreal, Canada.
So, how do you find out if 5G cell towers are already in your area?
Well, there are a few ways, but I’ll walk you through my current favorite.
Go to the following website: https://www.speedtest.net/ookla-5g-map
Ookla is the company behind speedtest.net, probably the most commonly used internet speed-testing service, for WiFi, or cellular networks.
They’ve developed an interactive map, that allows you to view just about every 5G release worldwide, and in my opinion, they are doing a great job keeping the map updated.
You can filter the map based on the categories I outlined above.
For example, look at the screenshot below, where I zoomed in close to Los Angeles, California.
It shows that in LA, T-Mobile is offering commercial 5G availability, and AT&T is offering limited consumer availability. This means that without a doubt, there are 5G cell towers in Los Angeles. Since there is already infrastructure in that city, there is a much higher likelihood of 5G coming to surrounding cities and suburbs.
So, if you’re wondering if there are 5G cell towers in your city, or will be in the near future, check out the interactive map.
If you zoom out, you’ll see that 5G is being implemented all over the world.
Alright, now that we’ve talked extensively about what 5G cell towers are, how they work, the frequencies they’ll utilize, and where we can find them, it’s time to talk about something important.
Are 5G Cell Towers Dangerous?
I think it’s really important, when talking comprehensively about 5G cell towers, to talk about their known and potential dangers. However, I’ve talked about this more extensively in two other posts I wrote on 5G:
- 5G Radiation Dangers – The Definitive Guide
- How To Protect Yourself (And Your Family) From 5G Radiation
So, if you want a more complete picture, please give those articles a read. If you want a summarized version of the dangers specifically associated with 5G, and 5G cell towers, read on.
Basics of RF Radiation
First, assuming you haven’t read anything else on EMF Academy, we need to establish why Radio Frequency is dangerous.
Cell phones, when communicating with cell towers, emit a type of EMF radiation called radiofrequency (RF) radiation. There is almost no argument that this type of radiation, in large doses, has adverse health effects.
The argument from telecom companies comes down to just how much radiation is actually harmful to the human body.
There are two exposure risks we need to factor in when talking about the danger of 5G cell towers: the radiation from the towers themselves, and the radiation from the phone you’re using to communicate with them.
Only one of these, the cell phone, is subject to any governmental mandate about its radiation limits.
This limit is based on something called Specific Absorption Rate, or SAR. Essentially, SAR is a measurement of how much RF energy your body absorbs from a cell phone. The limit depends on where you live, but here in the United States, the FCC has set a limit of 1.6 W/kg, averaged over one gram of tissue. That means no phone can be sold in the United States if testing shows that using it would cause any gram of your tissue to absorb radiofrequency energy faster than 1.6 watts per kilogram.
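As a rough illustration of how a limit like that gets applied, here is a small sketch that checks SAR values against the regulatory ceilings. The phone readings are hypothetical numbers invented for this example (real figures are published in each phone’s FCC filing), and I’ve included the ICNIRP limit of 2.0 W/kg averaged over 10 grams, used across much of Europe, for comparison:

```python
# Regulatory SAR ceilings (W/kg). The FCC averages over 1 g of tissue,
# ICNIRP over 10 g, so the two numbers are not directly interchangeable.
LIMITS = {"FCC (US, 1 g avg)": 1.6, "ICNIRP (EU, 10 g avg)": 2.0}

# Hypothetical head-SAR readings for illustration only -- not real test data.
sample_phones = {
    "Phone A": 1.18,
    "Phone B": 1.58,
    "Phone C": 1.72,
}

for phone, sar in sample_phones.items():
    for regime, limit in LIMITS.items():
        status = "PASS" if sar <= limit else "FAIL"
        print(f"{phone}: {sar:.2f} W/kg vs {regime} {limit:.1f} -> {status}")
```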
I won’t go into it in this article, but there is a growing number of scientists, researchers, and activists (myself included) who feel that the SAR limit of 1.6 W/kg, which was adopted in 1996, is far outdated and does not protect consumers.
Regardless, the point is that cell-phones, and the towers that connect them, do emit harmful radiation.
How do we know that they are harmful?
There have been a large number of studies, including:
Interphone Study – This extensive study looked at over 5,000 cases of glioma and meningioma to determine what level of cell phone use the patients had. Ultimately, it found that those with the highest exposure to RF radiation from cell phones did have an increased chance of developing brain tumors during their lifetimes.
This study along with a mountain of other evidence was part of the reason that the World Health Organization classified RF radiation as a “possible carcinogen” in 2011, shortly after the study was published.
“The Influence of Being Physically Near to a Cell Phone Transmission Mast on the Incidence of Cancer” – This study evaluated the case histories of 1,000 patients and compared their health history to the proximity of their homes to nearby cell towers.
The study ultimately found that there was a significant correlation between how close people lived to these cell-towers and their risk of developing cancer.
“The proportion of newly developing cancer cases was significantly higher among those patients who had lived during the past 10 years at a distance of up to 400 meters (1,300 feet) from the cellular transmitter site, which has been in operation since 1993, compared to those patients living further away, and that the patients fell ill on average eight years earlier.
Ramazzini Study –Perhaps one of the most frightening studies actually came out quite recently. The well respected Ramazzini Institute out of Italy studied how frequent exposure to RF radiation at levels consistent with legal cell-tower radiation affected their lives.
There are many more studies then that, but I want to get to talk a little bit about specifically why 5G cell towers are dangerous.
Alright, now it’s time to talk specifically about some of the ways that 5G radiation will specifically be harmful.
5G Radiation Dangers – What We Know
To start with, I want to give you a quote by Dr. Joel Moskowitz, a public health professor at the University of California:
‘The deployment of 5G, or fifth generation cellular technology, constitutes a massive experiment on the health of all species. Because MMWs are weaker than microwaves, they are predominantly absorbed by the skin, meaning their distribution is quite focused there.
He also mentioned that he believes that the millimeter waves that 5G will utilize will also adversely affect the skin, eyes, testes, nervous system, and sweat glands.
Next, I want to talk about an important letter, at least when it comes to the opposition to the 5G rollout.
In 2016, Dr. Yael Stein from the Hadassah Medical Center in Jerusalem published a letter opposing the implementation of 5G and the MMW’s that it would require.
He addressed the letter to the United States Federal Communication Commission, the U.S. Senate Committee on Health, Education, Labor and Pensions and the U.S. Senate Committee on Commerce, Science, and Transportation.
Dr. Stein began the letter by saying:
“A group of physicists from the Hebrew University in Jerusalem, together with several physicians, have researched “G5” millimeter wave technology (Sub Terahertz frequencies) and its interaction with the human body. I am a physician who participated in this research.”
I’m not going to put the entire letter in this article, but I do want to include all of the conclusions that the team found because they are extremely important for understanding the danger of 5G (or G5 as they refer to it in the conclusions), and 5G cell towers.
The group’s conclusions were:
- Public exposure to millimeter waves, in the sub-Terahertz frequency range, is currently less common. If these devices fill the public space they will affect everyone, including the more susceptible members of the public: babies, pregnant women, the elderly, the sick and electro hypersensitive individuals.
- Human sweat ducts transmit and perhaps also receive electromagnetic waves that reflect the person’s emotional state, as an extension of the sympathetic nervous system that innervates sweat ducts
- These newly suggested physiologic and psychological functions of human sweat ducts have not yet been researched by neurophysiologists or by psychologists
- Computer simulations have demonstrated that sweat glands concentrate sub-terahertz waves in human skin. Humans could sense these waves as heat. The use of sub-terahertz (Millimeter wave) communications technology (cellphones, Wi-Fi, antennas) could cause humans to percept physical pain via nociceptors.
- Potentially, if G5 WI FI is spread in the public domain we may expect more of the health effects currently seen with RF/ microwave frequencies including many more cases of hypersensitivity (EHS), as well as many new complaints of physical pain and a yet unknown variety of neurologic disturbances.
- It will be possible to show a causal relationship between G5 technology and these specific health effects. The affected individuals may be eligible for compensation.
If you want to read more research specifically talking about the potential dangers of 5, the group over at saferemr.com put together a good compilation of research and letters related to 5G radiation that I would encourage you to check out when you have a chance.
Or, if you’d like to read a large collection of letters sent by various doctors and scientists on the dangers of 5G radiation and cell towers, the Environmental Health Trust put together a great list.
If you want to read more about my thoughts on 5G, why it’s dangerous, and what we can do about it, be sure to check out the following articles:
- 5G Radiation Dangers – The Definitive Guide
- How To Protect Yourself (And Your Family) From 5G Radiation
Now let’s cover a few related questions:
I’m going to give some briefer answers to a few other questions that I get asked often about this topic. If you need further information about any of these topics, please feel free to email me.
How Can I Protect Myself From 5G Towers?
Let me give you a summary of some of the ways that you can protect yourself and your family from 5G towers.
1. Use Distance
When it comes to EMF radiation in general, the most important thing you can do is get distance. There is a law of physics called the inverse square law of physics, that essentially states that as we double our distance from any source of radiation (including radio waves from a 5G tower), we quarter our exposure to it.
That means that any distance we can gain between where we spend time, and 5G towers, will help protect us. This will come in two ways:
- If possible, move to areas away from larger cities where 5G is being implemented.
- If there are already 5G towers in your neighborhood, try to move your rooms around, so that the rooms you spend the most time (like your bedroom) are further away from the towers.
2. Get An EMF Meter
This is something I recommend to just about everyone because having an EMF meter is probably the best tool you could ever have if you’re worried at all about EMF radiation. Without an EMF meter to measure how much radiation is being emitted from local 5G towers, it’s hard to know how much of it you’re being exposed to.
For example, you could walk your EMF meter out to the nearest cell tower, on a local light pole for example, and then walk back towards your home to see how the radiation falls off as you get farther away.
For now, since there really isn’t a better option, I’m recommending the Trifield TF2, which you can get on Amazon. At least at the time that I’m writing this, the Trifield can measure all currently installed 5g antennas because they all use the lower range. This may not always be the case. Here is what the Trifield website says about this:
The TF2 RF mode covers up to 6 GHz. All the present 5G deployed in the US is in this frequency range (in fact, it is all below 5 GHz). However, within the next few years, commercial deployment of 5G in the next higher band, a big jump up to 28 GHz, may begin. At present, no RF meter is commercially available that simultaneously detects this high frequency band and the lower frequencies. It is not clear yet whether the 28 GHz band will ever be widely deployed, because there are problems. Chief among the problems is that 28 GHz is very poor at penetrating to the inside of buildings or even through windows. Also it generally has to be line-of-sight.
This meter is also really easy to use, and a tool that you want to have with you all the time. It is chosen by many experts in the field and one I use myself when strictly testing for RF radiation exposures such as smart meters.
However, regardless of which EMF meter you choose, just be sure that it can measure RF radiation, and not just magnetic and electric, like some of the less expensive meters.
Remember that EMF radiation includes electric field, magnetic field, and radiofrequency radiation and that when it comes to 5G, we are only concerned about the radiofrequency radiation.
If you want to learn more about EMF meters or find out which is best for you, you can check out my full guide here.
3. Consider Getting An EMF Protection Bed Canopy
Protecting your body while you sleep from EMF radiation is extremely important. Not only has EMF radiation been shown to reduce the quality of our sleep, but our bodies are also more vulnerable to the potential biological harm from 5G.
An EMF protection bed canopy is just what it sounds like. It is a canopy, made of a silver mesh that blocks EMF radiation, that you drape over your bed while you sleep. if you want to learn more about these canopies or see which ones I recommend, you can check out my guide here.
It’s important to note though, that these canopies are only effective if you don’t have ANY EMF radiation-emitting devices inside with you. That means no:
- Cell Phones
- Smart Watches
- Alarm Clocks
Or anything else at all that emits EMF’s. The same material that blocks out the EMF radiation, will also keep it inside and amplify the harm.
4. Advocate Against 5G
I promise you won’t be alone in the fight. In fact, advocacy groups all around the world are doing all they can to at least slow the rollout of 5G until further studies can be done.
Many of these groups are spending time in front of their local legislature, begging for regulations to protect people against the potential harm of 5G until more studies can be done.
If you’re wanting to join a group, just do some searching online for groups in your state, province, or country, you’re sure to. find something.
If you feel so inclined to try and fight against this, the Parents for Safe Technology have put together a fantastic resource outlining a host of ways that you can speak out against 5G.
It includes agency email addresses and phone numbers, as well as education and stock letters to help you.
You can find all the information you need on their Take Action page.
Honestly, fighting against the implementation of 5G is one of the best possible ways to protect yourself, and your family, from the potential harm.
How Close Do 5G Cell Towers Need To Be?
This depends largely on what frequency the carrier that owns the towers is utilizing in that area. As we talked about above, the higher the frequency, the shorter the wavelength, the shorter the distance it can travel. This means that the 5G small cell towers would need to be closer to one another.
As a general rule, from everything I have read, I would assume that in most cities 5G towers will not be more than 500 feet apart in order to get full coverage to the people using the network.
However, we’ll know much more as the rollout continues in major cities.
What Does 5G Do To Your Body?
It’s hard to say specifically what 5G will do to the body since we have not been able to test it on full, live networks. However, as Dr. Moskowitz mentions, it could potentially harm the skin, eyes, testes, nervous system, and sweat glands.
We also know that EMF radiation from cell towers, which will include 5G cell towers, can potentially increase risks of certain cancers.
Will 5G Require More Towers?
Yes, 5G will in fact require an entirely new infrastructure of towers to support its network. Early estimates outline as many as 300,000 small cell 5G towers for larger cities. For comparison, that is about the number of cell towers that have been built total for current networks over the last three decades.
However, these towers won’t be the large cell towers you are used to seeing, they will instead be small, about the size of a large backpack. These 5G towers will also be much closer to you, instead of being spread out.
Does 5G Penetrate Walls?
The general answer is yes, 5G is capable of penetrating walls, however, it is significantly worse at it then frequencies used for current 3G and 4G networks. It comes down to something called attenuation.
In physics, attenuation is defined as the gradual loss of force as something passes through a material. So, radio waves are capable of passing through things like walls, but attenuation means that they will dramatically lose their force as they do so.
So, yes, 5G frequencies are capable of penetrating things, but it is not their strength.
Will 5G Eliminate Cell Towers
Not at all. In fact, it will just add to the total number of cell-towers in the world. Although 5G will not utilize any of the current cellular infrastructures, 4G will still be around for many years, so those towers won’t be going anywhere anytime soon.
Some pundits believe that removing the current, large cell towers, may actually be less cost-effective for telecom companies than just leaving them in place.
Only time will tell, but for now, expect 5G to not eliminate any cell towers whatsoever.
Final Thoughts on 5G Cell Towers
I hope that you found that this article fully covered everything you would want to know about 5G cell towers, including what they are, how they work, and why they could potentially be dangerous.
If you feel that I left anything out or was incorrect about anything, please don’t hesitate to reach out and let me know. | <urn:uuid:ba997467-fb88-4194-8ff4-47af8c91d9f5> | CC-MAIN-2021-21 | https://emfacademy.com/5g-cell-towers/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00215.warc.gz | en | 0.959487 | 7,030 | 3.28125 | 3 |
If there is any time to start eating healthy, it is during pregnancy. However, making sense of pregnancy nutrition can be very confusing. Not only do you need more of certain nutrients and should avoid certain foods, but you have to do it all while battling morning sickness and food cravings.
This guide is meant to help you make sense of pregnancy nutrition so you feel confident in your food choices.
Key Nutrients that Pregnant Women Need
When a pregnant mom eats, the food goes through her digestion system and is absorbed in the gut. From there, nutrients are delivered to the growing baby through the placenta.
If the mom isn’t getting enough nutrients, the baby will “steal” nutrients from the mother. For example, if a mother isn’t eating enough calcium, the baby won’t be born with brittle bones. Rather, the baby will take calcium from the mother’s bones – potentially causing osteoporosis for the mom later in life! (1)
Thus, it is very important – for both baby and mom — to meet nutritional needs during pregnancy.
Below are the key nutrients that pregnant women should focus on getting.
Protein is required for the growth of the baby’s tissues. It isn’t just muscle tissue either. Protein is needed to grow brain tissue, connective tissue, and much more. Protein also has roles like improving blood supply.
The amount of protein you need during pregnancy depends on your weight. It is generally recommended that pregnant women consume 0.88 to 1.1 grams of protein per kilogram of body weight. For a 150lb women (68kg), that translates into 60 to 75 grams of protein per day.
Some studies have found that pregnant women may need as much as 1.52 grams/kg daily, or roughly 79g of protein per day during early pregnancy and 108g per day during later pregnancy. (2)
Amino acids are the building blocks of proteins. There are 20 different essential amino acids (meaning that our bodies can’t make them and we need to get them from food).
Each amino acid performs a different role in the body. To ensure you are getting all the amino acids you need, be sure to eat protein from a variety of sources.
Good sources of protein for pregnant women include:
- Lean meat
- Nuts and seeds
- Beans and legumes
*Many pregnant women develop an aversion to meat during pregnancy. If this is the case or you are a vegetarian, make sure you are getting enough protein from other sources!
The body uses iron to make hemoglobin, a component of red blood cells. You need RBCs to help deliver oxygen to the growing baby. Iron is incredibly important for pregnant women, especially since your blood volume will increase by nearly 50% in late pregnancy! (3)
Without enough iron, pregnant women can feel fatigued. They may develop anemia and could have a preterm birth or a baby with low birth weight.
Make sure you are eating iron-rich foods during pregnancy such as:
- Whole grains
- Leafy greens
- Red meat
- Beans and lentils
- Fortified cereals
- Pumpkin seeds
Tip: Vitamin C helps the body absorb iron. If your blood tests show that your iron levels are low, make sure to consume some vitamin C foods (such as lemon or orange juice) along with your iron foods. (4)
Omega 3 Fatty Acids
Omega 3 has gotten a lot of attention recently for all of the health benefits it provides, particularly in fighting inflammation. During pregnancy, omega 3 is very important because of its role in developing the baby’s nervous system – including the brain.
Research shows numerous benefits of consuming Omega 3 fatty acids during pregnancy, including:
- Preventing postpartum depression
- Reduced risk of preterm birth
- Increased immunity during pregnancy
- Improved scores on memory and verbal intelligence tests
This last benefit is particularly interesting. If you want a smarter child, it is worth making sure you get enough Omega 3 fatty acids.
You need to know that the body also converts Omega 3 into other fatty acids. One of these is docosahexaenoic acid (DHA). DHA has separate roles from Omega 3. It is particularly important for brain and retina development, for example.
However, the body is very inefficient at converting Omega 3 into DHA. Thus, it is important to get DHA from food sources as well.
Sources of DHA include:
- Fish oils
- Krill oil
- Fatty fish such as salmon, tuna, mackerel, sardines, and anchovies
The general consensus is that pregnant women need at least 200mg of DHA per day. You can get this much by eating seafood 1-2 times per week. (5) Because seafood might have high levels of mercury, the American Pregnancy Association says the safest source of DHA during pregnancy is fish oil supplements. (6, 7 ,8)
You probably know that the body needs vitamin D to absorb calcium. But vitamin D also has many other roles, such as regulating cell growth, immune function, and inflammation.
The recommended amount of Vitamin D is the same for pregnant and non-pregnant women (600IU). However, vitamin D is worth talking about because many people are deficient in vitamin D.
Important notes about vitamin D:
- Vitamin D from Sunlight: Our bodies can make vitamin D from sunlight – but it must be direct sunlight. You won’t get vitamin D from sunlight through a window! Wearing sunscreen can also hinder the body’s ability to make vitamin D.
- Fat Soluble: Vitamin D is fat-soluble. That means it requires fat to get absorbed into the body. If you aren’t consuming enough fat, vitamin D won’t be absorbed into your body.
- Overweight People: Since vitamin D is fat soluble, it can end up being stored in fatty tissues of overweight people. As a result, overweight people might be more prone to deficiency – even if they are getting adequate sunlight.
Food Sources of Vitamin D:
Unfortunately, very few foods naturally contain vitamin D. Fatty fish are one of these sources (such as salmon, mackerel, and tuna). Fish liver oils and certain wild mushrooms such as chanterelle also contain vitamin D. Dairy products are often fortified with vitamin D.
The calcium is needed to help grow your baby’s bones and teeth. It is also important for functions like nerve transmission and muscle contractions.
Adult women generally need about 1000mg of calcium per day. During pregnancy, as much as 250mg of calcium is transferred to the growing baby. However, because pregnant women absorb and use calcium better, the calcium RDA for pregnant women doesn’t increase.
Even though the RDA stays the same, it is important to make sure you are getting enough calcium during pregnancy. Low calcium levels have been associated with high blood pressure and preeclampsia – conditions common during pregnancy. (11)
Good sources of calcium during pregnancy include:
- Fortified plant “milks”
- Certain leafy greens, such as kale and Swiss chard
- Nuts such as almonds
Folic Acid and Other B Vitamins
There are 8 different B vitamins. These vitamins perform a wide range of roles in the body. Most notably, B vitamins during pregnancy:
- Are integral for the baby’s brain and nervous system development
- Converting food into energy
Vitamin B9 (also known as folate in its synthetic form) is so important that your health provider probably told you to take it in a prenatal vitamin. The CDC recommends:
It’s not enough to just take folate as a supplement. You should also get it from natural sources to make a total of at least 600mg of folate daily.
Natural sources of folate include:
- Dark leafy greens
- Legumes (such as lentils and beans)
- Citrus fruits
In addition, you’ll need to make sure you are getting enough of the other B vitamins. B12, for example, is particularly important for making DNA. During pregnancy you need 2.6 micrograms of B12 daily.
Vitamin A is an important pregnancy nutrient which often gets overlooked. Your growing baby needs vitamin A for development of the heart, lungs, kidneys, bones, and other organs and systems.
Mom also needs vitamin A to help with postpartum tissue repair, as well as aiding immunity during pregnancy.
However, there is a lot of controversy around vitamin A in pregnancy. Some people claim that vitamin A is toxic during pregnancy, and should be avoided. Others claim that pregnant women need to make sure they get enough vitamin A. Who is right?
Two Types of Vitamin A
There are two types of vitamin A you can get.
- Preformed vitamin A: This is found in animal products, especially liver and other organ meat.
- Provitamin A carotenoids: Your body is able to make vitamin A out of these carotenoids (such as beta carotene).
Eating too much preformed vitamin A can cause birth defects. Women (pregnant or not) should not consume more than 3,000mcg of preformed vitamin A daily. However, you can eat as much provitamin A carotenoids as you want. They will NOT cause birth defects.
Good sources of provitamin A carotenoids include:
- Sweet potatoes
- Leafy green vegetables
- Red peppers
- Yellow fruits, such as apricots and mango
Calories during Pregnancy
Most women now realize that the old adage “eating for two” isn’t true: you do NOT need to double your calories during pregnancy. However, trying to make sense of calories during pregnancy is very confusing.
There are conflicting suggestions on how many calories you need during pregnancy. The guidelines on pregnancy weight gain can also be very difficult to follow when you feel like you have no control over your appetite.
First Trimester Calories:
Virtually all experts agree that you do not need any extra calories during the first trimester of pregnancy. This doesn’t mean that you won’t end up eating extra calories though.
While some women can’t eat at all during early pregnancy due to morning sickness (or, worse, hyperemesis gravidarum), other women find that the only way to keep sickness at bay is to constantly eat.
On average, women gain 1-5lbs during the first trimester of pregnancy. However, this is an average. Some women lose weight due to morning sickness. Others will gain more.
If you’ve gained more than 5lbs during the first trimester, you probably don’t need to worry. Many women who gain weight during the first trimester slow down weight gain later in pregnancy. (19)
Tips for First Trimester Calorie Intake:
- Take your prenatal vitamin
- If you have severe morning sickness, try drinking cold smoothies with superfoods in them. They are easier to digest and less painful to vomit up!
- Try eating more protein to alleviate morning sickness.
- If you are constantly hungry, focus on eating nutritious superfoods instead of junk foods. Eat fiber and protein with each meal to help you feel full longer.
Second Trimester Calories:
Here is where pregnancy calorie counts gets confusing. For singleton pregnancies, there are these varying recommendations:
- What to Expect recommends getting 300-350 extra calories per day.
- The ACOG recommendations are 340 extra calories per day.
- The newly-updated recommendations from the National Institute for Health and Care Excellence (NICE) recommend NO extra calories during the second trimester.
The discrepancies have to do with the fact that many women start their pregnancies overweight. Nearly half of expectant mothers are overweight or obese! (20)
Further, most women are not getting enough exercise during pregnancy. (21)
Eating too much and not exercising is causing women to gain too much weight during pregnancy. Too much weight gain can then cause problems such as gestational diabetes, high blood pressure, and cesarean birth.
So, how many calories should you be consuming during the second trimester? Like with everything pregnancy-related, this varies depending on the mother.
Tips for Second Trimester Calorie Intake:
- Exercise: Since the first-trimester morning sickness has (hopefully) worn off, now is a good time to start exercising. Unless your doctor says otherwise, at least 30 minutes of moderate-intensity exercise is recommended. If you have not exercised before pregnancy, you should begin with a maximum of three 15-minute exercise sessions and build up to 30-minute sessions. (22)
- Keep healthy snacks with you at all times: This way you will have healthy foods to eat if hunger strikes while you are out and won’t have to turn to junky convenience foods.
- Eat a balanced diet: Aim to get at least 7 servings of fruits and veggies, 4 servings of dairy, 2 servings of protein, and 9 servings of whole grains each day. (23)
- Focus on food groups, not calories: Counting calories is not practical for most women. Instead, focus on getting enough of each of the food groups.
- Get your fatty acids: Make sure you get enough omega 3 fatty acids by eating non-mercury fish 1-2 times per week or taking a supplement.
Third Trimester Calories:
The third trimester is when the baby starts to develop rapidly. On average, a fetus will weight around 2lbs at 27 weeks, 4-4.5lbs at 32 weeks, and 6.75lbs to 10lbs by birth. (24)
With so much growth happening, you can absolutely expect to gain weight during the third trimester. You’ll need to increase your calorie intake to keep up. This doesn’t mean you can eat whatever you want though.
Tips for Third Trimester Calorie Intake:
- Eat frequently: The growing baby will be pressing on your stomach, making it uncomfortable to eat. Eating smaller meals more frequently will help you get enough calories during the day.
- Focus on protein: Approximately 20% of your calories should come from protein. You’ll need about 70 grams of protein per day during the last trimester. (28)
- Supplement with fatty acids: It can be hard to get enough essential fatty acids during the third trimester, so discuss with your doctor whether you should be supplementing. (29)
- Adjust your exercises: Exercise is still important during the third trimester, but may be harder because of all the extra baby weight. Consider joining an exercise class designed for pregnant women.
Overweight and Pregnant
If you are overweight and pregnant, the news isn’t good. Being overweight during pregnancy increases the risk of complications such as:
- High blood pressure and preeclampsia
- Gestational diabetes
- Increased risk of miscarriage
- Blood clots
- Post-partum hemorrhage
- Sleep apnea
- Having a large baby (which can cause the baby to get “stuck” or require a C-section birth).
*Overweight is defined as having a Body Mass Index (BMI) of 25 or more. You can learn more about BMI and see where you stand with this BMI calculator.
Should You Try to Lose Weight during Pregnancy?
Ideally, you should lose weight before you become pregnant. Once you are pregnant though, you should NOT try to lose weight.
The evidence shows that the risks don’t go away if an overweight women loses the weight during pregnancy. Losing weight during pregnancy could actually increase some risks – such as having a premature birth or a very small baby.
How Much Weight Should I Gain if Overweight When Pregnant?
The guidelines recommend gaining about 15-25lbs if you are overweight during pregnancy. Or, if you are obese during pregnancy, you should gain around 11-20lbs.
Bear in mind that these are only guidelines. It is very important that you discuss with your doctor what an appropriate amount of weight gain during pregnancy is for your specific needs. Getting too fixated on the numbers could cause you to gain too little weight and have adverse effects for your baby.
Tips for a Healthy Overweight Pregnancy
- Start Exercising: Yes, even overweight women are advised to exercise during pregnancy! Start with activities such as walking three times a week for 15 minutes. Build up to 30-minute sessions.
- Avoid simple starches (such as white bread and potatoes) and refined sugars: These types of sugars because will cause your blood sugar to spike. Since overweight pregnant women are already at an increased risk of gestation diabetes, it is important to avoid them.
- Test for gestational diabetes early: Because of the increased risk, you may need to test at your first prenatal visit as well as the recommended 24-28 weeks.
- Avoid juices: Even natural juices contain a lot of sugars and calories. Drink water or milk instead.
- Don’t stress about calories: If you are eating a variety of healthy foods, you shouldn’t have to worry about calorie counts. Just avoid processed junk foods at all costs in favor of natural foods like fruits, veggies, and lean meat.
- You still need fat: Pregnancy is not the time to go on a low-fat diet! Your growing baby needs essential fatty acids. You also need fat to absorb fat-soluble vitamins like vitamin D. Eat healthy fats like avocados, nuts, seeds, and low-mercury fish.
- Remember that you aren’t alone! Nearly half of women are overweight when they enter pregnancy. Don’t get caught up in the body-shaming or overwhelmed by the possible risks of an overweight pregnancy. Instead, network with other plus-sized moms to get support you need to eat right.
Pregnancy Weight Gain Recommendations
I’m reluctant to republish these pregnancy weight gain recommendations because they are just recommendations. Even healthy pregnancies can fall outside of these guidelines.
It is annoying to read “talk to your doctor” when you want advice online about what is “normal” or not. However, it is true that every pregnancy is different. You need to talk to your health care provider about what is right for you.
|Pregnancy Weight Gain Guidelines|
|Underweight||28-40lbs||No guidelines available|
What Foods to Eat During Pregnancy
There are a lot of great foods you can eat during pregnancy. We’ve made an infographic which shows some of the best foods to eat during pregnancy and why you need them.
Instead of focusing on certain foods though, you might want to follow these tips for what you eat during pregnancy.
Eat a Variety of Foods
There is no one food which will meet all of your pregnancy nutrition needs. It is important that you are choosing healthy foods from all of the major food groups. You’ll need to make sure you eat everyday:
- Whole grains
- Dairy or other sources of calcium
- Healthy fats
- Lots of fruits and veggies
Use the Color System for Fruits and Veggies
Most pregnancy food recommendations just say “eat 5-7 servings of fruit and vegetables per day.” This advice is inadequate. You can’t just eat 7 servings of apples and meet your nutrition needs!
Each type of fruit and veggie provides different nutrients. For example, berries are rich in antioxidants but lack minerals.
To simplify nutrition, many experts recommend “eating the rainbow.” This approach works because each “color group” usually contains certain nutrients.
- Reds: Red foods such as tomatoes and red peppers are great sources of antioxidants known as phytochemicals, such as lycopene and ellagic acid. They fight cancer and boost immunity.
- Oranges and Yellows: These foods are rich in beta carotene, which is important for eyes and immunity. They are also great sources of vitamin C. Examples include squash, mango, cantaloupe, carrots, and pineapple.
- Whites and Browns: Many of these foods – such as onions and garlic – are natural antiviral and antibacterial foods. They can also be great sources of potassium, magnesium, and vitamin C.
- Greens: These foods are high in many minerals, including calcium and iron, which are crucial during pregnancy. They are also rich in vitamins K, B, and E.
- Blues and Purples: Foods such as blueberries and eggplant contain high amounts of antioxidants which boost immunity, repair damage done by stress, and fight inflammation.
Foods to Avoid during Pregnancy
There are a lot of foods that you need to avoid during pregnancy. Before you get alarmed, know that most of these just require using common sense. However, there are a few specific concerns for pregnant women – especially fish/seafood and certain foods which may cause food poisoning.
Fish and Seafood during Pregnancy
Fish is one of the best sources of essential fatty acids – a nutrient that pregnant women need to aid in the baby’s neurological development. Because of how important fatty acids are, pregnant women are told to eat 2-3 servings (8-12oz) of fish per week. (34)
However, some types of fish are high in mercury. So, as of 2004, the FDA (and other health agencies) have advised against eating more than two 6oz servings of fish per week. (35)
What does this mean for you?
You should aim to consume the recommended two servings of fish per week, but make sure that they are low-mercury fish.
|Low-Mercury Fish/Seafood||High-Mercury Fish (Avoid)|
|Shad||Pacific Bluefin tuna|
Note that not all of the low-mercury fish choices are actually high in omega 3 fatty acids. If that is your goal, then you are best eating these fish/seafood below:
What About Tuna during Pregnancy?
Tuna is generally listed as a “safe” fish to eat during pregnancy. However, some sources of tuna might contain high amounts of mercury.
Further, tuna is generally not a very good source of omega 3. Thus, it probably isn’t worth the risk of eating tuna if the goal is to get omega 3 fatty acids. (36)
Food Poisoning during Pregnancy
Pregnant women are more susceptible to food poisoning. That means they need to avoid certain foods which are risky.
Food poisoning generally doesn’t cause serious problems for pregnant women or their babies (though dehydration and exhaustion certainly aren’t fun!). However, in some cases, food poisoning can result in miscarriage or premature birth. Some types of food poisoning can travel through the placenta and infect the baby – resulting in serious complications. (37)
Risky Foods to Avoid during Pregnancy:
- Deli meat
- Raw sprouts
- Unpasteurized dairy (including blue cheeses and soft cheeses)
- Unpasteurized juices
- Raw fish
- Improperly cooked meat
Alcohol during Pregnancy
While pregnant, my mother-in-law recommended that I “drink a glass of beer” because it was full of B vitamins. She’s not the only one. It used to be common to prescribe beer to pregnant women! (39)
Nowadays, pregnant women are advised to avoid alcohol completely. If you order an alcoholic beverage while visible pregnant, you’ll probably face a lot of nasty remarks – and could even result in social services being called for child abuse! (40)
So how bad is drinking alcohol while pregnant really? It’s hard to get clear information because of all of the controversy and stigma around drinking while pregnant. There are surprisingly few studies on the topic.
One study did find that consuming 2-3 drinks per week was linked to a 10% increased risk of preterm birth. (41) Another study found that light drinking was not linked to any harm to the fetus, though having up to 4 drinks per week was linked to having a smaller baby. (42)
So, while you shouldn’t feel too guilty about having one drink at a special occasion, it is probably better to abide by “better safe than sorry.” Avoid drinking any alcohol!
Other Foods to Avoid during Pregnancy
- Processed junk food: Processed food tends to be high in unhealthy fats and sodium, but devoid in nutrients. The issue isn’t so much that junk food is bad for you (it is!), but that you won’t have room for nutrient-rich foods if you are filling yourself with junk.
- Soda and soft drinks: These are loaded with sugars and often high in caffeine. There is evidence that the sugar-free and diet sodas are even worse because they mess with your gut flora.
- White bread and other white starches: Refined starches act like sugar in the body. They will cause your blood sugar to spike then crash, leaving you feeling tired and cranky. This can set you up for gestational diabetes! They also are devoid of nutrients. (43)
- Uncooked mushrooms: Mushrooms are safe during pregnancy, but should always be cooked to kill any spores and make them easier to digest. (44, 45)
- Caffeine: The research on caffeine during pregnancy is conflicting. Some studies found adverse effects like premature birth, stillbirth, low weight, and miscarriage. Other studies have found no risk. Until more info is available, the American Pregnancy Association recommends limiting caffeine to 200mg daily (about one 12oz coffee).
- Licorice: Licorice contains an artificial sweetener called It is linked to lower IQ, ADHD, and increased levels of stress hormones. (46)
- Unripe papaya: A substance which is naturally found in unripe papaya (but not ripe papaya) could trigger uterine contractions, resulting in premature birth. (47)
|Foods to Eat during Pregnancy|
Why: Provides much-needed calcium for your baby and isn’t as difficult to digest as milk.
Why: A nutrient-rich superfood, especially for protein, iron, and B vitamins.
Why: Great source of protein and folate.
Why: Rich in iron, which is especially important since pregnant women are at risk of anemia.
Why: Chock-full of antioxidants to keep you healthy, plus rich in vitamin C to help with iron absorption.
|Orange and yellow veggies|
Why: Rich in beta carotene, which is important for developing baby’s tissues plus post-partum tissue repair for mom.
Why: Provide long-lasting energy and are rich in nutrients including fiber, which can help with pregnancy constipation.
Why: Rich in folate plus vitamin C, which helps iron absorption.
|Nuts and Seeds|
Why: A great source of protein plus healthy fatty acids that are important for your baby’s brain development.
Why: They are rich in bioavailable iron, trace elements, antioxidants, fiber, and B vitamins. They’re also one of the only plant-based sources of vitamin D.
|High Quality Fats|
Why: Fats from coconut oil, avocados, and seeds are great energy-boosters plus rich in healthy fatty acids.
Why: Potassium in bananas can help relieve muscle cramps that pregnant women frequently experience.
Why: Is rich in omega 3 but low in mercury.
|Dark Leafy Greens|
Why: These are rich in B vitamins, iron, vitamin K, and antioxidants.
Why: Not only are they packed with vitamin C and antioxidants, but can help prevent pregnancy UTIs.
|Foods to Eat in Moderation|
|Organ Meat and Fish Liver Oil|
Why: While they are a great source of nutrients, consuming too much vitamin A from animal sources could cause toxicity.
Why: It is a simple carb which acts like sugar in the body.
Why: Too much caffeine can result in low birth weight or even miscarriage.
|Foods to Avoid|
Why: May contain parasitic worms, viruses, or bacteria.
|Raw Sprouts |
Why: Could be contaminated with salmonella.
Why: Raw mushrooms are tough on digestion and may be carcinogenic.
Why: Potential source of listeria contamination.
|Sodas and Soft Drinks|
Why: Contain caffeine and huge amounts of sugar.
Why: Contains a substance which may trigger uterine contractions.
Why: Fish like marlin, swordfish shark, King Mackerel, and Tilefish are high in mercury.
Why: The ingredient glycyrrhizin is linked to ADHD and developmental problems.
|Unpasteurized Dairy and Juices|
Why: May contain bacteria which could get you sick or even cause Toxoplasmosis.
|Processed Junk Food|
Why: Contains very few nutrients but loaded with sugars, sodium, and unhealthy fats.
Why: May result in fetal alcohol syndrome and physical and mental problems for your baby.
Why: Can cause blood sugar hikes and gestational diabetes. | <urn:uuid:3f5fc55a-5cf0-4302-940f-af10610a4971> | CC-MAIN-2021-21 | https://fulltimebaby.com/pregnancy/nutrition-practical-advice-conception-birth/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990419.12/warc/CC-MAIN-20210511214444-20210512004444-00497.warc.gz | en | 0.94576 | 6,161 | 2.734375 | 3 |
From a knowledge perspective, systems usually distinguish between business concerns (what it’s about), software engineering (what is required), and operations (how to use it).
That functional view of information can be further detailed across enterprise architecture layers:
- Enterprise: information pertaining to the business value of resources, processes, projects, and operations.
- Systems: information pertaining to organization, systems functionalities, and services operations.
- Platforms: information pertaining to technical resources, quality of service, applications maintenance, and processes deployment and operations.
Yet, that taxonomy may fall short when enterprise systems are entwined with wholly digitized physical and social environments. Beyond marketing pitches, the reality of what Stanford University has labelled "Symbolic Systems" may be unfolding at the nexus between information systems and knowledge management.
In their pivotal article, Davis, Shrobe, and Szolovits firmly planted the five pillars of the bridge between knowledge representation and information systems:
- Surrogate: KR provides a symbolic counterpart of actual objects, events and relationships.
- Ontological commitments: a KR is a set of statements about the categories of things that may exist in the domain under consideration.
- Fragmentary theory of intelligent reasoning: a KR is a model of what things can do or of what can be done with them.
- Medium for efficient computation: making knowledge understandable by computers is a necessary step for any learning curve.
- Medium for human expression: one of KR's prerequisites is to improve communication between specific domain experts on one hand, and generic knowledge managers on the other.
That makes information systems a special case of knowledge systems, as they fulfill the same principles:
- Like knowledge systems, information systems manage symbolic representations of external objects, events or activities purported to be relevant.
- System models are assertions regarding legitimate business objects and operations.
- Likewise, information systems are meant to support efficient computation and user-friendly interactions.
Yet, two functional qualifications are to be considered:
- The first concerns the role of data processing: contrary to KM systems, information systems are not meant to process data into information.
- The second concerns the role of events processing: contrary to KM systems, information systems have to manage the actual coupling between context and symbolic surrogates.
The rapid melting of both distinctions points to the convergence of information and knowledge systems.
Data, Information, Knowledge
Facts are not given but have to be captured as data before being processed into useful information. Depending on systems' purpose, that can be achieved with one of two basic schemes:
- With data mining the aim is to improve decisions by making business sense of actual observations. Information is meant to be predictive, concrete and directly derived from data; it is a resource whose value and shelf-life are set by transient business circumstances.
- With systems analysis the aim is to build software applications supporting business processes. Information is meant to be descriptive or prescriptive, symbolic, and defined with regard to business objectives and users’ practice; it is an asset whose value and shelf-life depend on the persistency of technical architecture and the continuity of business operations.
Whatever the purpose and scheme, information has to be organized around identified objects or processes, with defined structures and semantics. When the process is fed by internal data from operational systems, information structures and semantics are already defined and data can be directly translated into knowledge (a).
Otherwise (b), the meanings and relevancy of external data have to be related to the enterprise business model and technical architecture. That may be done directly by mapping data semantics to known descriptions of objects and processes; alternatively, rough data may be used to consolidate or extend the symbolic representation of contexts and concerns, and consequently the associated knowledge.
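As a minimal illustration of the two schemes, the Python sketch below routes captured records against known semantic frames; `SemanticFrame` and `classify_record` are hypothetical names chosen for this sketch, not an actual API:

```python
from dataclasses import dataclass

@dataclass
class SemanticFrame:
    """Known description of a business object or process."""
    name: str
    attributes: frozenset

def classify_record(record: dict, frames: list) -> tuple:
    """Scheme (a): a record matching a known frame translates directly
    into information; scheme (b): a partial match calls for interpretation
    against the business model, and no match suggests the symbolic
    representation itself may have to be extended."""
    keys = set(record)
    for frame in frames:
        if frame.attributes <= keys:       # all expected attributes present
            return ("information", frame.name)
    partial = [f for f in frames if f.attributes & keys]
    if partial:                            # closest frame, to be interpreted
        best = max(partial, key=lambda f: len(f.attributes & keys))
        return ("to-interpret", best.name)
    return ("to-model", None)              # extend the symbolic representation

frames = [SemanticFrame("customer", frozenset({"id", "name", "segment"}))]
print(classify_record({"id": 1, "name": "Acme", "segment": "B2B"}, frames))
# -> ('information', 'customer')
```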
The next step is to compare this knowledge perspective to enterprise architectures and governance layers.
From an architecture perspective, enterprises are made of human agents, devices, and symbolic (aka information) systems. From a business perspective, processes combine three categories of tasks: decision-making, monitoring, and execution. With regard to governance the primary objective is therefore to associate those categories to enterprise architecture components:
- Decision-making can only be performed by human agents entitled to commitments in the name of the enterprise, individually or collectively. That is meant to be based on knowledge.
- Executing physical or symbolic processes can be done by human agents, individually or collectively, or by devices and software systems subject to compatibility qualifications. That can be done with information (symbolic flows) or data (non symbolic flows).
- Monitoring and controlling the actual effects of processes execution call for symbolic processing capabilities and can be achieved by human or software agents. That is supposed to be based on information (symbolic flows).
On that basis, the business value of systems will depend on two primary achievements:
- Mapping business models to changing environments by sorting through facts, capturing the relevant data, and processing the whole into meaningful and up-to-date information.
- Putting that information into effective use through their business processes and supporting systems.
As long as business circumstances are stable, external and internal data can be set along commensurate time-spans and be processed in parallel. Along that scheme information is either “mined” from external data or directly derived (aka interpreted) from operational (aka internal) data by “knowledgeable” agents, human or otherwise.
But that dual scheme may become less effective under the pressure of volatile business opportunities, and obsolete given technological advances; bringing together data mining and production systems may therefore become both a business necessity and a technical possibility. More generally that would call for the merging of knowledge management and information systems, for symbolic representations as well as for their actual coupling with changes in environments.
Business Driven Perspective
As already noted, functional categories defined with regard to use (e.g. business processes, software engineering, or operations) fall short when business processes and software applications are entwined one with the other, within the enterprise as well as without (cf. IoT). In that case governance and knowledge management are better supported by an integrated processing of information organized with regard to the scope and time-span of decision-making:
- Assets: shared decisions whose outcome bears upon multiple business domains and cycles. Those decisions may affect all architecture layers: enterprise (e.g. organization), systems (e.g. services), or platforms (e.g. purchased software packages).
- Business value: streamlined decisions governed by well identified business units driven by changing business opportunities. Those decisions provide for straight dependencies from enterprise (business domains and processes), to systems (applications) and platforms (e.g. quality of service).
- Non functional: shared decisions about scale and performances driven by changing technical circumstances. Those decisions affect locations (users, systems, or devices), deployed resources, or configurations.
Whereas these categories of governance don’t necessarily coincide with functional ones, they are to be preferred if supporting systems are to be seamlessly fed by internal and external data flows. In any case functional considerations are better dealt with by decision-makers in their specific organizational and business contexts.
Weaving together information processing and knowledge management also requires actual coupling between changes in environments and the corresponding state of symbolic representations.
That requirement is especially critical when an enterprise's success depends on its ability to track, understand, and take advantage of changes in its business environment.
In principle, that process can be defined by three basic steps (illustrated by the timing sketch after the list):
- To begin with, the business time-frame (red) is set by facts (t1) registered through the capture of events and associated data (t2).
- Then, a symbolic intermezzo (blue) is introduced during which data is analyzed, information updated (t3), knowledge extracted, and decisions taken (t4);
- Finally, symbolic and business time-frames are to be synchronized through decision enactment and corresponding change in facts (t5).
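A small timing sketch may help picture the phased scheme; the timestamps follow the t1..t5 definitions above, and the class and method names are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class DecisionCycle:
    t1: float  # fact occurs
    t2: float  # event and associated data captured
    t3: float  # data analyzed, information updated
    t4: float  # knowledge extracted, decision taken
    t5: float  # decision enacted, facts changed

    def symbolic_intermezzo(self) -> float:
        """Duration of the symbolic interlude between capture and decision."""
        return self.t4 - self.t2

    def is_synchronized(self, business_window: float) -> bool:
        """The phased approach holds only if enactment lands within the
        window during which the triggering fact remains relevant."""
        return self.t5 - self.t1 <= business_window

cycle = DecisionCycle(t1=0.0, t2=0.5, t3=2.0, t4=3.0, t5=4.0)
print(cycle.symbolic_intermezzo(), cycle.is_synchronized(business_window=5.0))
# -> 2.5 True
```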
But that phased approach falls short with digitized environments and the ensuing collapse of fences between enterprises and their environment. In that context decision-making has often to be carried out iteratively, each cycle following the same pattern:
- Observation: understanding of changes in business opportunities.
- Orientation: assessment of the reliability and shelf-life of pertaining information with regard to current positions and operations.
- Decision: weighting of options with regard to enterprise capabilities and broader objectives.
- Action: carrying out of decisions within the relevant time-frame.
Data analysis and decision-making processes must therefore be woven together, and operational loops coupled with business intelligence.
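A bare-bones rendering of such an iterative coupling, in Python; the four callables are placeholders standing in for whatever observation, analysis, decision, and enactment mechanisms are actually in place:

```python
from typing import Callable

def ooda_loop(observe: Callable[[], dict],
              orient: Callable[[dict], dict],
              decide: Callable[[dict], dict],
              act: Callable[[dict], None],
              cycles: int) -> None:
    """Each iteration folds fresh observations back into the next
    orientation, weaving data analysis into decision-making instead
    of running them as a one-off phased sequence."""
    for _ in range(cycles):
        data = observe()               # observation: changes in environment
        information = orient(data)     # orientation: relevancy and reliability
        decision = decide(information) # decision: weighting of options
        act(decision)                  # action within the relevant time-frame

# Stubbed usage:
ooda_loop(observe=lambda: {"demand": 1.2},
          orient=lambda d: {"trend": d["demand"] - 1.0},
          decide=lambda i: {"adjust": i["trend"] > 0},
          act=print,
          cycles=3)
```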
But that shift of the decision-making paradigm from discrete and periodic to continuous and iterative implies a corresponding alignment of supporting information regarding assets, business value, and operations.
Assuming that decisions are to be taken at the "last responsible moment", i.e. deferred until the point where not taking sides would affect the options at hand, governance has to distinguish between three basic categories:
- Operational decisions can be put to effect immediately. Since external changes can also be taken into account immediately, the timing is to be set by events occurring within the interval of production life-cycles.
- Business value decisions are best enacted at the start of production cycles using inputs consolidated at completion. When analysis can be done in no time (t3=t4) and decisions enacted immediately (t4=t5), commitments can be taken from one cycle to the next. Otherwise some lag will have to be introduced. The last responsible moment for committing a decision will therefore be defined by the beginning of the next production cycle minus the time needed for enactment (see the sketch below).
- With regard to assets, decisions are supposed to be enacted according to predefined plans. The timing of commitments should therefore combine planning (when a decision is meant to be taken) and events (when relevant and reliable information is at hand).
That taxonomy broadly coincides with the traditional distinction between operational, tactical, and strategic decisions.
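For business-value decisions, the timing rule quoted above reduces to a one-line computation (a sketch with hypothetical parameter names):

```python
def last_responsible_moment(next_cycle_start: float, enactment_time: float) -> float:
    """Beginning of the next production cycle minus the time
    needed to enact the decision."""
    return next_cycle_start - enactment_time

# With the next cycle starting at t=10 and enactment taking 2 time units,
# the decision must be committed by t=8 at the latest.
print(last_responsible_moment(next_cycle_start=10.0, enactment_time=2.0))  # -> 8.0
```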
Next, the integration of decision-making processes has to be supported by a consolidated description of data resources, information assets, and knowledge services; and that can be best achieved with ontologies.
As introduced long ago by philosophers, ontologies are systematic accounts of existence for whatever is considered, in other words some explicit specification of the concepts meant to make sense of a universe of discourse. From that starting point three basic observations can be made:
- Ontologies are made of categories of things, beings, or phenomena; as such they may range from simple catalogs to philosophical doctrines.
- Ontologies are driven by cognitive (i.e non empirical) purposes, namely the validity and consistency of symbolic representations.
- Ontologies are meant to be directed at specific domains of concerns, whatever they can be: politics, religion, business, astrology, etc.
With regard to models, only the second one puts ontologies apart: contrary to models, ontologies are about understanding and are not supposed to be driven by empirical purposes. As a corollary, ontologies could be used as templates (or meta-models) encompassing the whole range of information pertaining to enterprise governance.
Along that reasoning, the primary objective would be to distinguish contexts with regard to source and time-frame, e.g.:
- Social: pragmatic semantics, no authority, volatile, continuous and informal changes.
- Institutional: mandatory semantics sanctioned by regulatory authority, steady, changes subject to established procedures.
- Professional: agreed upon semantics between parties, steady, changes subject to established procedures.
- Corporate: enterprise defined semantics, changes subject to internal decision-making.
- Personal: customary semantics defined by named individuals.
That coarse-grained taxonomy, set with regard to the social basis of contexts, should be complemented by a fine-grained one to be driven by concerns. And since ontologies are meant to define existential circumstances, it would make sense to characterize ontologies according to the epistemic nature of targeted items, namely terms, documents, symbolic representations, or actual objects and phenomena. That would outline four basic concerns that may or may not be combined:
- Thesaurus: ontologies covering terms and concepts.
- Content Management Systems (CMS): ontologies covering documents with regard to topics.
- Organization and Business: ontologies pertaining to enterprise organization, objects and activities.
- Engineering: ontologies pertaining to the symbolic representation of products and services.
That taxonomy puts ontologies at the hub of enterprise architectures, in particular with regard to economic intelligence.
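The two taxonomies combine into a simple two-axis characterization of ontologies; below is a sketch with Python enums, the axis values being lifted from the lists above:

```python
from enum import Enum

class ContextSource(Enum):
    SOCIAL = "pragmatic semantics, no authority, volatile"
    INSTITUTIONAL = "mandatory semantics sanctioned by regulatory authority"
    PROFESSIONAL = "semantics agreed upon between parties"
    CORPORATE = "enterprise-defined semantics"
    PERSONAL = "customary semantics defined by named individuals"

class EpistemicConcern(Enum):
    THESAURUS = "terms and concepts"
    CONTENT_MANAGEMENT = "documents with regard to topics"
    ORGANIZATION_AND_BUSINESS = "enterprise organization, objects and activities"
    ENGINEERING = "symbolic representation of products and services"

# An ontology is then characterized by a point on each axis, e.g. a corporate
# ontology of business objects and activities:
ontology = (ContextSource.CORPORATE, EpistemicConcern.ORGANIZATION_AND_BUSINESS)
print(ontology)
```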
Insofar as enterprises are concerned, knowledge is recognized as a key asset, as valuable if not more so than financial ones: whatever their nature, assets lose value when left asleep and bear fruit when kept awake; that's doubly the case for data and information:
- Digitized business flows accelerate data obsolescence and make it continuous.
- Shifting and porous enterprise boundaries and market segments call for constant updates and adjustments of enterprise information models.
Given the growing impact of knowledge on the capability and maturity of business processes, data mining, information processing, and knowledge management should be integrated into a comprehensive and consistent framework. Sometimes labelled as economic intelligence, that approach makes a functional and operational distinction between data as resources, information as assets, and knowledge as services.
Melting the informational and behavioral schemes of knowledge management into operational systems creates a new breed of symbolic systems whose evolution can no longer be reduced to planned designs but may also include some autonomous capability. That possibility is bolstered by the integration of enterprise organization and systems with their business environment; at some point it may be argued that enterprise architectures emerge from a mix of cultural sediments, economic factors, technology constraints, and planned designs.
As enterprises grow and extend, architectures become more complex and have to be supported by symbolic representations of whatever is needed for their management: assets, roles, activities, mechanisms, etc. Hence the benefits of distinguishing between two kinds of models:
- Models of business contexts and processes describe actual or planned objects, assets, and activities.
- Models of symbolic artifacts describe the associated system representations used to store, process, or exchange information.
This apparent symmetry between models can be misleading as the former are meant to reflect a reality but the latter are used to produce one. In practice there is no guarantee that their alignment can be comprehensively and continuously maintained.
Assuming that enterprise architecture entails some kind of documentation, changes in actual contexts will induce new representations of objects and processes. At this point, the corresponding changes in models directly reflect actual changes, but the reverse isn't true. For that to happen, i.e. for business objects and processes to be drawn from models, the bonds between actual and symbolic descriptions have to be loosened, giving some latitude for the latter to be modified independently of their actual counterparts. As noted above, specialization will do that for local features; but for changes to architecture units to be carried out from models, abstractions are a prerequisite.
Interestingly, genetics can be used as a metaphor to illustrate the relationships between environments, enterprise architectures (organisms), and code (DNA).
According to classical genetics, phenotypes (the actual forms and capabilities of organisms) are inherited through the copying of genotypes (as coded by DNA), and changes between generations can only be carried out through changes in genotypes. Applied to systems, this would entail that changes can only happen after being programmed into the applications supporting enterprise organization and business processes.
The Extended Evolutionary Synthesis, by contrast, considers the impact of non-coded (aka epigenetic) factors on the transmission of the genotype between generations. Applying the same principles to systems would introduce new mechanisms:
- Enterprise organization and its usage of systems could be adjusted to changes in environments independently of changes in coded applications.
- Enterprise architects could assess those changes and practical adjustments, plan systems evolution, and use abstractions to consolidate their new designs with legacy applications.
- Models and applications would be transformed accordingly.
That epigenetic understanding of systems would put the onus of their evolutionary fitness on the plasticity and versatility of applications.
The genetics metaphor comes with a teleological perspective as it assumes that nothing happens without a reason. Cybernetics goes the other way and assumes that disorder and confusion will ensue from changing environments and tentative adjustments.
Originally defined by thermodynamics as a measure of heat dissipation, the concept of entropy has been taken over by cybernetics as a measure of the (supposedly negative) variation in the value of the information supporting organisms’ sustainability. Applied to enterprise governance, entropy will result from untimely and inaccurate information about the actual state of assets, capabilities, and challenges.
One way to assess those factors is to classify changes with regard to their source and modality; a sketch in code follows the two lists below.
With regard to source:
- Changes within the enterprise are directly meaningful (data>information), purpose-driven (information>knowledge), and supposedly manageable.
- Changes in the environment are not under control; they may need interpretation (data<?>information), and their consequences or uses have to be explored (information<?>knowledge).
With regard to modality:
- Data associated with planned changes are directly meaningful (data>information) whatever their source (internal or external); internal changes can also be directly associated with purpose (information>knowledge);
- Data associated with unplanned internal changes can be directly interpreted (data>information) but their consequences have to be analyzed (information<?>knowledge); data associated with unplanned external changes must be interpreted (data<?>information).
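The taxonomy can be restated in executable form; the sketch below simply transcribes the two lists above (the names are illustrative):

```python
from enum import Enum

class Source(Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"

class Modality(Enum):
    PLANNED = "planned"
    UNPLANNED = "unplanned"

def classify(source: Source, modality: Modality) -> dict:
    """How data is expected to move along data > information > knowledge."""
    if modality is Modality.PLANNED:
        # Planned changes: data is directly meaningful whatever the source;
        # only internal ones are also directly associated with purpose.
        return {"data>information": "direct",
                "information>knowledge":
                    "direct" if source is Source.INTERNAL else "to be explored"}
    if source is Source.INTERNAL:
        # Unplanned internal changes: directly interpretable data,
        # but consequences still have to be analyzed.
        return {"data>information": "direct",
                "information>knowledge": "to be analyzed"}
    # Unplanned external changes: even interpretation has to be worked out.
    return {"data>information": "to be interpreted",
            "information>knowledge": "to be explored"}

print(classify(Source.EXTERNAL, Modality.UNPLANNED))
```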
Assuming with Stafford Beer that viable systems must continuously adapt their capabilities to their environment, this taxonomy has direct consequences for enterprise governance:
- Changes occurring within planned configurations are meant to be dealt with directly (when stemming from within the enterprise) or through enterprise adjustments (when set in its environment).
- That assumption cannot be made for changes occurring outside planned configurations because the associated data will have to be interpreted and consequences identified prior to any decision.
Enterprise governance will therefore depend on the way those changes are taken into account, and in particular on the capability of enterprise architectures to process the flows of associated data into information, and to use it to deal with variety. On that account the key challenge is to manage the relevance and timely interpretation and use of the data, in particular when new data cannot be mapped onto a predefined semantic frame, as may happen with unplanned changes in contexts. How that can be achieved will depend on whether the processing of data and its consolidation into information is carried out at enterprise level or by business and technical units.
Within that working assumption, the focus is to be put on the capability of enterprise architectures to “read” environments (turning data into information) as well as to “update” themselves (putting information to use as knowledge).
Nowadays, and for all practical purposes, it may be assumed that enterprises have to rely on the internet to “read” their physical or symbolic environment. Yet, as suggested by the labels Internet of Things (IoT) and Semantic Web, two levels must be considered:
- Identities of physical or social entities are meant to be uniquely defined across the internet.
- Meanings are defined by users depending on contexts and concerns; by definition they overlap or even contradict one another.
Depending on purpose and context, meanings can be:
- Inclusive: can be applied across the whole of environments.
- Domain specific: can only be applied to circumscribed domains of knowledge. That’s the aim of the semantic web initiative and the Web Ontology Language (OWL).
- Institutional: can only be applied within specific organizational or enterprise contexts. Those meanings could be available to all or through services with restricted access and use.
That can be illustrated by searches about the painter Amedeo Modigliani (a sketch follows the list):
- An inclusive search for “Modigliani” will use heuristics to identify the artist (a). An organizational search for a homonym (e.g. a bank customer) would be dealt with at enterprise level, possibly through an intranet (c).
- A search for “Modigliani’s friends” may look for the artist’s Facebook friends if kept at the inclusive level (a1), or switch to a semantic context better suited to the artist (a2). The same outcome would have been obtained with a semantic search (b).
- Searches about auction prices may be redirected or initiated directly, possibly subject to authorization (c).
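A minimal sketch of such level-dependent dispatching, with hypothetical names and obviously canned results:

```python
def search(term: str, level: str, authorized: bool = False) -> str:
    """Dispatch a query across the three semantic levels."""
    if level == "inclusive":
        # (a) heuristics over the open web, e.g. disambiguating homonyms
        return f"heuristic web results for '{term}'"
    if level == "domain":
        # (b) semantic-web style query within a circumscribed domain
        return f"art-history results for '{term}'"
    if level == "institutional":
        # (c) enterprise-level lookup, possibly behind access control
        if not authorized:
            raise PermissionError("institutional meanings may be restricted")
        return f"intranet results for '{term}'"
    raise ValueError(f"unknown semantic level: {level}")

print(search("Modigliani", "inclusive"))
print(search("Modigliani auction prices", "institutional", authorized=True))
```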
Given the interconnection with their material and social environments, enterprises have to align their information systems with the internet’s semantic levels. But since knowledge is no respecter of boundaries, there are clear hazards in exposing newborn knowledgeable systems to the influence of anonymous and questionable external sources.
EA: PAGODA ARCHITECTURE BLUEPRINT
The impact of digital environments goes well beyond a shallow transformation of digitized business processes; it requires deep integration of the enterprise’s ability to refine data flows into information assets to be put to use as knowledge (a sketch follows the list):
- Acquisition of business data flows at platform level.
- Integration of business intelligence and information models.
- Integration of information assets with knowledge management and operational decision-making.
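A toy sketch of that backbone (acquisition, integration, use), under the simplifying assumption that the information model is just a set of admissible fields; all names are hypothetical:

```python
def acquire(raw_flows: list[dict]) -> list[dict]:
    """Platform level: capture business data flows (here, just drop empties)."""
    return [record for record in raw_flows if record]

def integrate(records: list[dict], model_fields: set[str]) -> list[dict]:
    """Information level: keep only data that maps onto the information model."""
    return [{k: r[k] for k in model_fields if k in r} for r in records]

def put_to_use(assets: list[dict]) -> dict:
    """Knowledge level: aggregate information assets for decision-making."""
    total = sum(asset.get("amount", 0) for asset in assets)
    return {"decision_input": total}

flows = [{"amount": 120, "noise": "x"}, {}, {"amount": 80}]
print(put_to_use(integrate(acquire(flows), {"amount"})))  # {'decision_input': 200}
```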
Such an information backbone supporting architecture layers tallies with the pagoda architecture blueprint, whose ubiquity around the world demonstrates its effectiveness in ensuring resilience and adaptability to external upsets.
Since knowledge cannot be neatly tied up in airtight packages, enterprise systems have to exchange more than data with their environment. Dedicated knowledge management systems, by filtering the meanings of incoming information, insulate core enterprise systems from stray or hazardous interpretations. But intertwining KM and production systems makes the fences more porous, and the risks are compounded by the spread of intelligent but inscrutable systems across the net. As a consequence, securing access to information is not enough: systems must also secure its meanings (inclusive, specific, or institutional) and its origin.
For that purpose a distinction has to be made between two categories of “intelligent” sources (a sketch follows the list):
- Fellow KM systems relying on symbolic representations that allow for explicit reasoning: data is “interpreted” into information which is then put to use as the knowledge governing behaviors.
- Intelligent devices relying on neuronal networks carrying out implicit information processing: data is “compiled” into neuronal connections whose weights (representing knowledge) are tuned iteratively based on behavioral feedback.
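The contrast can be sketched as follows; the neural side is reduced to a single weight tuned by feedback, which is obviously a caricature, but it shows where knowledge resides in each case (all names are hypothetical):

```python
class SymbolicSource:
    """Fellow KM system: explicit rules, hence inspectable reasoning."""
    def __init__(self, rules: dict):
        self.rules = rules  # interpretation is explicit and traceable

    def interpret(self, datum: str):
        return self.rules.get(datum)

class NeuralSource:
    """Intelligent device: knowledge compiled into weights, tuned by feedback."""
    def __init__(self, weight: float = 0.0):
        self.weight = weight  # implicit knowledge, not directly inspectable

    def interpret(self, datum: float) -> float:
        return self.weight * datum

    def feedback(self, datum: float, target: float, rate: float = 0.1) -> None:
        # iterative tuning from behavioral feedback
        self.weight += rate * (target - self.interpret(datum)) * datum

symbolic = SymbolicSource({"invoice": "route to accounts payable"})
neural = NeuralSource()
for _ in range(50):
    neural.feedback(datum=2.0, target=1.0)  # weight converges to ~0.5
print(symbolic.interpret("invoice"), round(neural.interpret(2.0), 2))
```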
On that basis, system capability and responsibility can be generalized from the enterprise to the network level:
- Embedded knowledge from identified and dedicated devices can directly feed process control whenever no responsibility is engaged (a).
- Feeding explicit knowledge management with external implicit (aka embedded) knowledge is more problematic due to the hazards of mixed and potentially inconsistent semantics (b).
- Symbolic knowledge can be used (c) to distribute information, or support decision-making and process control.
As a concluding remark, it appears that the convergence of information and knowledge management systems is better apprehended in the broader perspective of an intertwined network of symbolic systems characterized by their status (inclusive, specific, or organizational) and modus operandi (explicit or implicit).
Database design proceeds through a sequence of well-defined stages: requirements analysis, conceptual design, logical design, physical design, implementation, and ongoing operation and maintenance.

Requirements analysis is the first and most important stage. The first step in the database development life cycle is to draw up a requirements document, produced by collecting data on the available files, decision points, and transactions relevant to the organization; further analysis then gives meaning to the data items.

Conceptual design is the first design stage proper. It is a high-level, DBMS-independent phase in which a conceptual data model (typically an entity-relationship diagram) is built from the requirements; entities are the people, places, or things about which information is to be kept. Without a conceptual model, an organization could not verify that the design actually represents all of its data requirements.

Logical design translates the conceptual model into a logical representation of the database system: an enterprise-wide design based on a specific data model (e.g. relational) but independent of physical-level details. The tables sketched in the ER diagram are normalized at this stage, and integrity rules are defined, with keys as the means of enforcing them. Entity integrity requires that the primary key of an entity never be null, so that each entity is uniquely identifiable; referential integrity requires that an attribute of one relation (a foreign key) match the primary key of the relation it refers to.

Physical design specifies the physical configuration of the database on the storage media (tables, indexes, and other DBMS-specific parameters) and is the most labor-intensive stage for the designer. Implementation, testing, and maintenance complete the cycle.

Following this methodology, with data models at each stage, fulfills the ideals of database design: a database that is complete, integral, simple, understandable, flexible, and implementable (Moody & Shanks, 2002). A well-structured database saves disk space by eliminating redundant data and maintains data accuracy and integrity.
When speaking about the “Holocaust,” what time period are we referring to?
The “Holocaust” refers to the systematically planned and executed murder of 6,000,000 European Jews between 1941 and 1945. A study of the Holocaust should also include the period from 1933, when Adolf Hitler became Chancellor of Germany, until the summer of 1941, when the Einsatzgruppen massacres began. The period from the summer of 1941 until 1945 is generally defined as the actual implementation of the Final Solution.
How many Jews were murdered during the Holocaust?
Six million is the round figure accepted by most authorities.
How many non-Jewish civilians were murdered during World War II?
It is impossible to determine the exact number. Among the groups which the Nazis and their collaborators murdered and persecuted were: Gypsies, resistance fighters from all the nations, German opponents of Nazism, homosexuals, Jehovah’s Witnesses, the physically and mentally handicapped, habitual criminals, and the “anti-social,” e.g. beggars and vagrants.
Which Jewish communities suffered losses during the Holocaust?
Every Jewish community in Nazi-occupied Europe suffered losses during the Holocaust. Some Jewish communities in North Africa were persecuted, but the Jews in these countries were neither deported to the death camps, nor were they systematically murdered.
How many Jews were murdered in each country and what percentage of the pre-war Jewish population did they constitute?
(Source: Encyclopedia of the Holocaust)
- Austria 50,000 – 27.0%
- Belgium 28,900 – 44.0%
- Bohemia/Moravia 78,150 – 66.1%
- Bulgaria 0 – 0.0%
- Denmark 60 – 0.7%
- Estonia 2,000 – 44.4%
- Finland 7 – 0.3%
- France 77,320 – 22.1%
- Germany 141,500 – 25.0%
- Greece 67,000 – 86.6%
- Hungary 600,000 – 69.0%
- Italy 7,680 – 17.3%
- Latvia 71,500 – 78.1%
- Lithuania 143,000 – 85.1%
- Luxembourg 1,950 – 55.7%
- Netherlands 100,000 – 71.4%
- Norway 762 – 44.8%
- Poland 3,000,000 – 90.9%
- Romania 287,000 – 47.1%
- Slovakia 71,000 – 79.8%
- Soviet Union 1,100,000 – 36.4%
- Yugoslavia 63,300 – 81.2%
What is a death camp? How many were there? Where are they located?
A death camp is a concentration camp with special apparatus specifically designed for systematic murder. Six such camps existed: Auschwitz-Birkenau, Belzec, Chelmno, Majdanek, Sobibor, Treblinka. All were located in Poland.
What does the term “Final Solution” mean and what is its origin?
The term “Final Solution” (Endlösung) refers to Germany’s plan to murder all the Jews of Europe.
When did the “Final Solution” actually begin?
While thousands of Jews were murdered by the Nazis or died as a direct result of discriminatory measures instituted against Jews during the initial years of the Third Reich, the systematic murder of Jews did not begin until the German invasion of the Soviet Union in June 1941.
How did the Germans define who was Jewish?
On November 14, 1935, the Nazis issued the following definition of a Jew: anyone with three Jewish grandparents; or anyone with two Jewish grandparents who belonged to the Jewish community on September 15, 1935, or joined thereafter; who was married to a Jew or Jewess on September 15, 1935, or married one thereafter; or who was the offspring of a marriage or extramarital liaison with a Jew on or after September 15, 1935.
How did the Germans treat those who had some Jewish blood but were not classified as Jews?
Those who were not classified as Jews but who had some Jewish blood were categorized as Mischlinge (of “mixed ancestry”) and were divided into two groups: Mischlinge of the first degree—those with two Jewish grandparents and Mischlinge of the second degree—those with one Jewish grandparent. The Mischlinge were officially excluded from membership in the Nazi party and all Party organizations (e.g. SA, SS, etc.). Although they were drafted into the German Army, they could not attain the rank of officer. They were also barred from the civil service and from certain professions. (Individual Mischlinge were, however, granted exemptions under certain circumstances.) Nazi officials considered plans to sterilize Mischlinge, but this was never done. During World War II, first-degree Mischlinge, incarcerated in concentration camps, were deported to death camps.
Did the Nazis plan to murder the Jews from the beginning of their regime?
This question is one of the most difficult to answer. While Hitler made several references to killing Jews, both in his early writings (Mein Kampf) and in various speeches during the 1930s, Nazi documents indicated that they had no operative plan before 1941 for a systematic annihilation of the Jews living under Nazi occupation. A turning point occurred in Nazi policy towards Jews in late winter or the early spring of 1941 in conjunction with Germany’s decision to invade the Soviet Union.
When was the first concentration camp established and who were the first inmates?
The first concentration camp, Dachau, opened on March 22, 1933. The camp’s first inmates were not exclusively Jewish. The first to be interned were primarily political prisoners (e.g. Communists or Social Democrats); habitual criminals; homosexuals; Jehovah’s Witnesses; and “anti-socials” (beggars, vagrants). Jewish writers and journalists, lawyers, unpopular industrialists, and political officials also were among the first people sent to Dachau.
What was the difference between the persecution of the Jews and the persecution of other groups by the Nazis?
The anti-Jewish rhetoric of the Nazi Ministry of Propaganda painted Jews as “racial enemies” of the Third Reich who threatened to “destroy the Nazi society” and therefore needed to be “eliminated.” Jews were ultimately slated for total systematic annihilation. Other victims included people whose political or religious views were in opposition to the Nazis, people of “inferior” races who could be held in an inferior position socially, or people whose social behaviors excluded them from Nazi society. None of these groups were slated for total destruction by the Nazis.
Why were the Jews singled out for extermination?
The explanation of the Nazis’ implacable hatred of the Jews rests on their distorted world view which saw history as a racial struggle. They considered the Jews a race whose goal was world domination and who, therefore, were an obstruction to Aryan dominance. They believed that all of history was a fight between races which should culminate in the triumph of the superior Aryan race. Therefore, they considered it their duty to eliminate the Jews, whom they regarded as a threat. Moreover, in their eyes, the Jews’ racial origin made them habitual criminals who could never be rehabilitated and were, therefore, hopelessly corrupt and inferior. There is no doubt that other factors contributed toward Nazi hatred of the Jews and their distorted image of the Jewish people. One factor was the centuries-old tradition of Christian anti-Semitism which propagated a negative stereotype of the Jew as a Christ-killer, agent of the devil, and practitioner of witchcraft. Another factor was the political and racial anti-Semitism of the latter half of the nineteenth and early part of the twentieth centuries, which singled out the Jew as both a threat and a member of an inferior race. These factors combined to point to the Jew as a target for persecution and ultimate destruction by the Nazis.
What did people in Germany know about the persecution of Jews and other enemies of Nazism?
Certain initial aspects of Nazi persecution of Jews and other opponents were common knowledge in Germany. The Boycott of April 1, 1933, the Laws of April, and the Nuremberg Laws were fully publicized and offenders were often publicly punished and shamed. The same is true for other anti-Jewish measures. Kristallnacht (The Night of Broken Glass) was a public pogrom, carried out in full view of the entire population. While information on the concentration camps was not publicized, a great deal of information was available to the German public, and the treatment of the inmates was generally known. The Nazis attempted to keep the murders of Jews in the death camps and the “euthanasia” of the handicapped a secret and took precautionary measures to ensure they would not be publicized. Their efforts were only partially successful. Public protests by clergymen led to the halt of the “euthanasia” program in August 1941, so many persons were aware that the Nazis were killing the mentally ill in special institutions. As far as the murder of Jews was concerned, it was common knowledge in Germany that they had disappeared after having been sent to the East. And, there were thousands upon thousands of Germans who participated in and/or witnessed the implementation of the “Final Solution” either as members of the SS, the Einsatzgruppen, death camp or concentration camp guards, police in occupied Europe, or with the Wehrmacht.
Did all Germans support Hitler’s plan for the persecution of the Jews?
Although the entire German population was not in agreement with Hitler’s persecution of the Jews, there is no evidence of any large scale protest regarding their treatment. There were Germans who defied the April 1, 1933 boycott and purposely bought in Jewish stores, and a small number who helped Jews escape and hide. But even some of those who opposed Hitler were in agreement with his anti-Jewish policies.
Did the people of occupied Europe know about Nazi plans for the Jews? What was their attitude? Did they cooperate with the Nazis against the Jews?
The attitude of the local population vis-à-vis the persecution and destruction of the Jews varied from zealous collaboration with the Nazis to active assistance to Jews; it is therefore difficult to make generalizations. The situation also varied from country to country. In Eastern Europe, for example, especially in Poland, Russia, and the Baltic States (Estonia, Latvia, and Lithuania), there was much more knowledge of the “Final Solution” because it was implemented in those areas. In most countries they occupied (Denmark and Italy stand out as exceptions), the Nazis found many locals who were willing to cooperate fully in the murder of the Jews. This was particularly true in Eastern Europe, where there was a long-standing tradition of anti-Semitism, and where various national groups which had been under Soviet domination (Latvians, Lithuanians, and Ukrainians) fostered hopes that the Germans would restore their independence. In several countries in Europe, local fascist movements allied themselves with the Nazis and participated in anti-Jewish actions; for example, the Iron Guard in Romania and the Hlinka Guard in Slovakia. On the other hand, in every country in Europe, there were courageous individuals who risked their lives to save Jews. In several countries, there were groups which aided Jews, e.g. Joop Westerweel’s group in the Netherlands, Zegota in Poland, and the Assisi underground in Italy.
What was the response of the Allies to the persecution of the Jews? Could they have done anything to help?
The response of the Allies to the persecution and destruction of European Jewry was inadequate. Prior to 1944, little action was taken. In January 1944 the War Refugee Board was established for the express purpose of saving the victims of Nazi persecution. Even after the establishment of the War Refugee Board and the initiation of various rescue efforts, the Allies refused to bomb Auschwitz and/or the railway lines leading to the camp, despite the fact that Allied bombers were at that time engaged in bombing factories very close to Auschwitz and were well aware of its existence and function. Tens of thousands of Jews sought to enter the United States, but they were barred from doing so by the stringent American immigration policy. Even the relatively small quotas of visas which existed were often not filled, although the number of applicants was usually many times the number of available places. Practical measures which could have aided in the rescue of Jews included the following:
- Permission for temporary admission of refugees
- Relaxation of stringent entry requirements
- Frequent and unequivocal warnings to Germany and local populations throughout Europe that those participating in the annihilation of Jews would be held strictly accountable
- Bombing the death camp at Auschwitz
Were Jews in the Free World aware of the persecution and destruction of European Jewry and, if so, what was their response?
Efforts by the Jewish community during the early years of the Nazi regime concentrated on facilitating emigration from Germany and combating German anti-Semitism. Unfortunately, views on how best to achieve these goals differed, and effective action was often hampered by the lack of unity within the community. Moreover, very few Jewish leaders actually realized the scope of the danger. Following the publication of the news of the “Final Solution,” attempts were made to launch rescue efforts via neutral states and to send aid to Jews under Nazi rule. These attempts, which were far from adequate, were further hampered by the lack of governmental assistance and by obstruction from official channels. Additional attempts to achieve internal unity during this period failed.
Did the Jews in Europe realize what was going to happen to them?
Regarding the knowledge of the “Final Solution” by its potential victims, several key points must be kept in mind. The Nazis did not publicize the “Final Solution,” nor did they ever openly speak about it. Every attempt was made to fool the victims and, thereby, prevent or minimize resistance. Thus, deportees were always told that they were going to be “resettled.” They were led to believe that conditions “in the East” (where they were being sent) would be better than those in the ghettos. Following arrival in certain concentration camps, the inmates were forced to write home about the wonderful conditions in their new place of residence. The Germans made every effort to ensure secrecy. In addition, the notion that human beings, let alone the supposedly civilized Germans, could build camps with special apparatus for mass murder seemed unbelievable in those days. Since German troops had been seen by Jews as liberators from Czarist oppression during World War I, Germans were regarded by many Jews as a liberal, civilized people. Escapees who did return to the ghetto frequently encountered disbelief when they related their experiences. Even Jews who had heard of the camps had difficulty believing reports of what the Germans were doing there. Inasmuch as each of the Jewish communities in Europe was almost completely isolated, there was a limited number of places with available information. Thus, there is no doubt that many European Jews were not aware of the “Final Solution,” a fact that has been corroborated by German documents and the testimonies of survivors.
How many Jews were able to escape from Europe prior to the Holocaust?
It is difficult to arrive at an exact figure for the number of Jews who were able to escape from Europe prior to World War II, since the available statistics are incomplete. From 1933 to 1939, 355,278 German and Austrian Jews left their homes. Some immigrated to countries later overrun by the Nazis. In the same period, 80,860 Polish Jews immigrated to Palestine and 51,747 European Jews arrived in Argentina, Brazil, and Uruguay. During 1938 and 1939, approximately 35,000 emigrated from Bohemia and Moravia (Czechoslovakia). Shanghai, the only place in the world for which one did not need an entry visa, received approximately 20,000 European Jews (mostly of German origin) who fled their homelands. Immigration figures for countries of refuge during this period are not available. In addition, many countries did not provide a breakdown of immigration statistics according to ethnic groups. It is impossible, therefore, to ascertain the exact number of Jewish refugees.
Why were so few Jewish refugees able to flee Europe prior to the outbreak of World War II?
The key reason for the relatively low number of refugees leaving Europe prior to World War II was the stringent immigration policies adopted by the prospective host countries. In the United States, for example, the number of immigrants was limited to 153,744 per year, divided by country of origin. Moreover, the entry requirements were so stringent that available quotas were often not filled. Indeed, apart from Shanghai, China, and the Dominican Republic, no countries were receptive to Jewish immigrants as a group. Great Britain, while somewhat more liberal than the United States on the entry of immigrants, took measures to severely limit Jewish immigration to Palestine. In May 1939, the British issued a “White Paper” stipulating that only 75,000 Jewish immigrants would be allowed to enter Palestine over the course of the next five years (10,000 a year, plus an additional 25,000). This decision prevented hundreds of thousands of Jews from escaping Europe. The countries most able to accept large numbers of refugees consistently refused to open their gates. Although a solution to the refugee problem was on the agenda of the Evian Conference, only the Dominican Republic was willing to approve any immigration. The United States and Great Britain proposed resettlement havens in underdeveloped areas (e.g. Guyana, formerly British Guiana, and the Philippines), but these were not suitable alternatives.
What was Hitler’s ultimate goal in launching World War II?
Hitler’s ultimate goal in launching World War II was the establishment of an “Aryan” empire from Germany to the Urals. He considered this area the natural territory of the German people, an area to which they were entitled by right, the Lebensraum (living space) that Germany needed so badly for its farmers to have enough soil. Hitler maintained that these areas were needed for the “Aryan” race to preserve itself and assure its dominance. The Nazis had detailed plans for the subjugation of the Slavs, who would be reduced to serfdom and whose primary function would be to serve as a source of cheap labor for “Aryan” farmers. Those elements of the local population who were of “higher racial stock” would be taken to Germany, where they would be raised as “Aryans.” When Hitler made the decision to invade the Soviet Union, he also gave instructions to embark upon the “Final Solution,” the systematic murder of European Jewry.
Was there any opposition to the Nazis within Germany?
Throughout the course of the Third Reich, there were different groups who opposed the Nazi regime and certain Nazi policies. They engaged in resistance at different times and with various methods, aims, and scope. From the beginning, leftist political groups and a number of disillusioned conservatives were in opposition; at a later date, church groups, government officials, and businessmen also joined. After the tide of the war turned, elements within the military played an active role in opposing Hitler. At no point, however, was there a unified resistance movement within Germany.
Did the Jews try to fight against the Nazis? To what extent were such efforts successful?
Despite the difficult conditions to which Jews were subjected in Nazi-occupied Europe, many engaged in armed resistance against the Nazis. This resistance can be divided into three basic types of armed activity: ghetto revolts, resistance in concentration and death camps, and partisan warfare. The Warsaw Ghetto revolt, which lasted for about five weeks beginning on April 19, 1943, is the best-known example of armed Jewish resistance, but there were many other ghetto revolts in which Jews fought against the Nazis. Despite the terrible conditions in the death, concentration, and labor camps, Jewish inmates fought against the Nazis at the following sites: Treblinka (August 2, 1943); Babi Yar (September 29, 1943); Sobibór (October 14, 1943); Janówska (November 19, 1943); and Auschwitz (October 7, 1944). Jewish partisan units were active in many areas, including Baranovich, Minsk, the Naliboki forest, and Vilna. While the sum total of armed resistance efforts by Jews was not militarily overwhelming and did not play a significant role in the defeat of Nazi Germany, these acts of resistance did lead to the rescue of an undetermined number of Jews, to Nazi casualties, and to untold damage to German property and self-esteem.
What was the Judenrat?
The Judenrat was the council of Jews appointed by the Nazis in each Jewish community or ghetto. It was responsible for the enforcement of Nazi decrees affecting Jews and for the administration of the affairs of the Jewish community. Leaders and members of the Judenrat were guided, for the most part, by a sense of communal responsibility, but lacked the power and the means to successfully thwart Nazi plans for the annihilation of all Jews. While the intentions of the heads of the councils were rarely challenged, their tactics and methods have been questioned. Among the most controversial were Mordechai Rumkowski in Lodz and Jacob Gens in Vilna, both of whom tried to justify the sacrifice of some Jews in order to save others.
Did international organizations, such as the Red Cross, aid victims of Nazi persecution?
During the course of World War II, the International Red Cross (IRC) did very little to aid the Jewish victims of Nazi persecution. Its activities can basically be divided into three periods:
- September 1939 – June 22, 1941: The IRC confined its activities to sending food packages to those in distress in Nazi-occupied Europe. Packages were distributed in accordance with the directives of the German Red Cross. Throughout this time, the IRC complied with the German contention that those in ghettos and camps constituted a threat to the security of the Reich and, therefore, were not allowed to receive aid from the IRC.
- June 22, 1941 – Summer 1944: Despite numerous requests by Jewish organizations, the IRC refused to publicly protest the mass annihilation of Jews and non-Jews in the camps, or to intervene on their behalf. It maintained that any public action on behalf of those under Nazi rule would ultimately prove detrimental to their welfare. At the same time, the IRC attempted to send food parcels to those individuals whose addresses it possessed.
- Summer 1944 – May 1945: Following intervention by such prominent figures as President Franklin Roosevelt and the King of Sweden, the IRC appealed to Miklós Horthy, Regent of Hungary, to stop the deportation of Hungarian Jews.
The IRC visited the “model ghetto” of Terezin (Theresienstadt) at the request of the Danish government. The Germans agreed to allow the visit nine months after submission of the request. This delay provided time for the Nazis to complete a “beautification” program, designed to fool the delegation into thinking that conditions at Terezin were quite good and that inmates were allowed to live out their lives in relative tranquillity. In reality, most prisoners were subsequently deported to Auschwitz.
The visit, which took place on July 23, 1944, was followed by a favorable report on Terezin to the members of the IRC. Jewish organizations protested vigorously, demanding that another delegation visit the camp. Such a visit was not permitted until shortly before the end of the war.
How did Germany’s allies, the Japanese and Italians, treat the Jews in the lands they occupied?
Neither the Italians nor the Japanese, both of whom were Germany’s allies during World War II, cooperated in the “Final Solution.” Although the Italians did, upon German urging, institute discriminatory legislation against Italian Jews, Mussolini’s government refused to participate in the “Final Solution” and consistently declined to deport its Jewish residents. Moreover, in their occupied areas of France, Greece, and Yugoslavia, the Italians protected the Jews and did not allow them to be deported. However, when the Germans overthrew the Badoglio government in 1943, the Jews of Italy, as well as those under Italian protection in the occupied areas, became subject to the “Final Solution.”
Until December 1941, Shanghai was an open port where Jews fleeing Nazi persecution could land without visas. After the start of the Sino-Japanese war in 1937 and until 1941, the Chinese portions of Shanghai were under Japanese occupation, as were large areas of north China. The thousands of Jewish refugees who arrived between December 1938 and summer 1939 were housed in Shanghai’s International Settlement, of which Japanese-controlled Hongkou (Hongkew) was a part. Apprehensive over the great influx, the International Settlement’s Municipal Council instituted entry controls in the fall of 1939, which were reinforced with stricter measures in summer 1940. Access to Shanghai by sea nearly ceased when Italy entered the war, while Japan’s unwillingness to grant transit visas via Manchukuo prevented innumerable refugees from reaching Shanghai by land. Japanese attempts to limit the Jewish presence in predominantly Japanese and Chinese Hongkou failed; cheap housing led most arrivals to settle there anyway. In 1943, after Germany had deprived its and Austria’s Jews of their citizenship, the Japanese confined these and all other stateless Jews to a segregated area, the Hongkou ghetto. Yet, despite overcrowding, dire food shortages, poor health, and a high mortality rate, especially among the elderly, more than 20,000 Jews survived the war in Shanghai.
What was the attitude of the churches vis-à-vis the persecution of the Jews? Did the Pope ever speak out against the Nazis?
The head of the Catholic Church at the time of the Nazi rise to power was Pope Pius XI. Throughout his reign, he limited his concern to Catholic non-Aryans. Although he stated that the myths of “race” and “blood” were contrary to Christian teaching, he neither mentioned nor criticized anti-Semitism. His successor, Pius XII (Cardinal Pacelli), was a Germanophile who maintained his neutrality throughout the course of World War II. Although as early as 1942 the Vatican received detailed information on the murder of Jews in concentration camps, the Pope confined his public statements to non-specific expressions of sympathy for the victims of injustice and to calls for a more humane conduct of the war. Despite the lack of response by Pope Pius XII, several papal nuncios played an important role in rescue efforts, particularly the nuncios in Hungary, Romania, Slovakia, and Turkey. It is not clear to what extent, if any, they operated upon instructions from the Vatican.
In Germany, the Catholic Church did not oppose the Nazis’ anti-Semitic campaign. Church records were supplied to state authorities, assisting in the detection of people of Jewish origin, and efforts to aid the persecuted were confined to Catholic non-Aryans. While Catholic clergymen protested the Nazi euthanasia program, few, with the exception of Bernard Lichtenberg, spoke out against the murder of Jews. In Western Europe, by contrast, Catholic clergy spoke out publicly against the persecution of the Jews and actively helped in the rescue of Jews.
In Eastern Europe, however, the Catholic clergy was generally more reluctant to help. Dr. Jozef Tiso, the head of state of Slovakia and a Catholic priest, actively cooperated with the Germans, as did many other Catholic priests. The response of Protestant and Eastern Orthodox churches varied. In Germany, Nazi supporters within Protestant churches complied with the anti-Jewish legislation and even excluded Christians of Jewish origin from membership. Pastor Martin Niemöller’s Confessing Church defended the rights of Christians of Jewish origin within the church, but did not publicly protest their persecution, nor did it condemn the measures taken against the Jews, with the exception of a memorandum sent to Hitler in May 1936.
In occupied Europe, the position of the Protestant churches varied. In several countries (Denmark, France, the Netherlands, and Norway) local churches and/or leading clergymen issued public protests when the Nazis began deporting Jews. In other countries (Bulgaria, Greece, and Yugoslavia), Orthodox church leaders intervened on behalf of the Jews and took steps which, in certain cases, led to the rescue of many Jews.
Non-Catholic leaders in Austria, Belgium, Bohemia-Moravia, Finland, Italy, Poland, and the Soviet Union did not issue any public protests on behalf of the Jews.
How many Nazi criminals were there? How many were brought to justice?
We do not know the exact number of Nazi criminals, since the available documentation is incomplete. The Nazis themselves destroyed many incriminating documents, and there are still many criminals who are unidentified and/or unindicted. Those who committed war crimes include the individuals who initiated, planned, and directed the killing operations, as well as those with whose knowledge, agreement, and passive participation the murder of European Jewry was carried out. Those who actually implemented the “Final Solution” include the leaders of Nazi Germany, the heads of the Nazi Party, and the Reich Security Main Office. Also included are hundreds of thousands of members of the Gestapo, the SS, the Einsatzgruppen, the police, and the armed forces, as well as the bureaucrats who were involved in the persecution and destruction of European Jewry. In addition, there were thousands of individuals throughout occupied Europe who cooperated with the Nazis in killing Jews and other innocent civilians. We do not have complete statistics on the number of criminals brought to justice, but the number is certainly far less than the total of those who were involved in the “Final Solution.”
The leaders of the Third Reich who were caught by the Allies were tried by the International Military Tribunal in Nuremberg from November 20, 1945 to October 1, 1946. Afterwards, the Allied occupation authorities continued to try Nazis, with the most significant trials held in the American zone (the Subsequent Nuremberg Proceedings). In total, 5,025 Nazi criminals were convicted between 1945 and 1949 in the American, British, and French zones. In addition, the United Nations War Crimes Commission prepared lists of war criminals who were later tried by the judicial authorities of Allied countries and of those countries under Nazi rule during the war. The latter countries have conducted a large number of trials regarding crimes committed in their lands. The Polish tribunals, for example, tried approximately 40,000 persons, and large numbers of criminals were tried in other countries. In all, about 80,000 Germans have been convicted of committing crimes against humanity, while the number of convicted local collaborators is in the tens of thousands. Special mention should be made of Simon Wiesenthal, whose activities led to the capture of more than one thousand Nazi criminals.
What were the Nuremberg Trials?
The term “Nuremberg Trials” refers to two sets of trials of Nazi war criminals conducted after the war. The first trials were held from November 20, 1945 to October 1, 1946, before the International Military Tribunal (IMT), which was made up of representatives of France, Great Britain, the Soviet Union, and the United States. These were the trials of the political, military, and economic leaders of the Third Reich captured by the Allies. Among the defendants were Göring, Rosenberg, Streicher, Kaltenbrunner, Seyss-Inquart, Speer, Ribbentrop, and Hess (many of the most prominent Nazis—Hitler, Himmler, and Goebbels—committed suicide and were not brought to trial). The second set of trials, known as the Subsequent Nuremberg Proceedings, was conducted before the Nuremberg Military Tribunals (NMT), established by the Office of Military Government, United States (OMGUS). While the judges on the NMT were American citizens, the tribunal considered itself to be international. Twelve such trials were conducted; the defendants included cabinet ministers, diplomats, doctors involved in medical experiments, and SS officers involved in crimes in concentration camps or in genocide in Nazi-occupied areas.
Andy McKinnon is leading the way through grassy scrub toward a pond when a raccoon appears ahead of us. We are a few dozen metres inside Toronto’s eastern boundary, which makes this a city raccoon, but it doesn’t look or behave like one. This animal is slim, with none of the waddle that comes from gorging on lasagna leftovers. It also doesn’t have any of the habituated-to-humans boldness of its urban cousins; in fact, it doesn’t want any part of us, and scoots along a fallen tree before disappearing into shoreline reeds.
When the pond comes into view, we spot two brilliant white trumpeter swans near the far shore. McKinnon says the pair has been nesting here for a few years, but predators and floods have killed their last three clutches. If anyone knows about the mating habits and history of swans in Rouge Park, the 47-square-kilometre expanse of forests and fields that runs north from Lake Ontario into York Region, the municipality north of Toronto, it’s McKinnon. Although he works a day job for a market research company, the amateur naturalist — who today is sporting a camouflage sun hat, camera case and daypack, plus tan cargo pants, heavy boots and a T-shirt with a detailed drawing of a slug on it — visits Rouge Park about four times a week from his home in Pickering, a few kilometres to the east.
On my half-dozen visits to the park with McKinnon, we toured natural wetlands and those made by excavators. We walked along roads, railways, rivers and trails, pushing through forests full of second-growth maples and brush thick with invasive species. We skirted harvested farmers’ fields and renaturalized areas with saplings planted in abandoned furrows. At times my pen struggled to keep pace with McKinnon, but my legs never did. He moves slowly, stopping often to investigate and sometimes catalogue. “I write important finds down,” he says. “When I can’t identify something, I take a picture and look it up later. It happens every time I’m out.”
McKinnon explains that the Rouge sits in a transition zone between two forest regions — the deciduous Carolinian and the mixed, sub-boreal Great Lakes forests — which give it a diversity of life found in few other places in Canada, including 23 species at risk. In just 12 paces along the side of one farmed field we passed turkey and coyote scat and weasel and deer tracks.
So while the raccoon that greeted us at the pond may not be rare, the ecosystem it’s part of is. Also rare — at least in the fourth-largest municipality in North America — are otters. But the formerly extirpated species is exactly what has brought us here.
To an eye used to the obstructed sightlines of a city, the greenery of Rouge Park goes on forever. The park spreads north for 15 kilometres from a narrow wedge that meets Lake Ontario. In those southern reaches near the lake, the Rouge and Little Rouge rivers have cut deep ravines through wooded valleys. The park more or less stays within Toronto territory along its eastern border with Pickering. North of Steeles Avenue, in York Region, the boundaries get straighter, encompassing a corridor of mostly agricultural land that’s two to five kilometres wide (see “Parkland versus farmland” sidebar).
Despite its size and location, Rouge Park never attracted much attention, even as it grew from 24 square kilometres of provincial and municipal land after its creation in 1995 to its current 47 square kilometres. That changed in 2011, when the federal government announced plans to take it over. The 2012 budget backed that up, with a promise of $143.7 million over 10 years to set up Canada’s newest national park, which in its reincarnation will encompass 58 square kilometres, a 23 per cent increase on its current size.
Parks Canada’s sudden interest in this pocket of land — a swath of remnant green very much in the thick of the largest urban area in the country — seems to reflect an appreciation that it must adapt to stay relevant in a country with rapidly changing demographics. “Rouge Park offers an unparalleled opportunity to meet our priority to meaningfully reach an increasingly diverse urban population,” says Pam Veinotte, field superintendent of the nascent park.
Although advocates for Rouge Park were thrilled — many had been lobbying for national park status for decades — it soon became clear that what was being planned wasn’t just a straightforward national park. Instead, Rouge Park would become Canada’s first national urban park. Note the “urban.” Veinotte, though, says it will not be a sub-class of national park, but instead a new, fourth designation under the Parks Canada umbrella, joining national parks, historic sites and marine conservation areas. That designation, the legislation for which hasn’t yet been written, has some worried that increased human activity in the park could damage its fragile ecosystem.
While there may be uncertainty and questions about whether people and nature can effectively coexist in this setting — whether the right balance between human use and sound ecological management can be struck — one thing is clear: the idea of a national urban park has sparked the community’s interest. Last June, Parks Canada released a concept for the proposed park. More than 10,000 people offered input on it during four months of public consultation. “This park,” says Veinotte, “has captured the imagination of the people.”
The fact there’s still something worthy of people’s imagination here is thanks to a group of people who — against the perceived wisdom of the 1980s — said this edge of Toronto didn’t need more suburban housing.
Glenn de Baeremaeker is president of Save the Rouge Valley System and a Toronto city councillor, but he can remember a time when his voice didn’t carry the weight it does now. In 1987 he was working for the group when a motion came before Scarborough community council to change zoning so that almost all of Toronto’s lands in what is now Rouge Park would be designated for residential development. “I didn’t think it should be paved, I thought it should be public parkland,” he says. “So I took a map and spent a night hand-drawing boundaries around 25,000 acres [100 square kilometres].” It was an audacious idea at the time, and de Baeremaeker heard about it. “People thought we were crazy,” he says.
The rezoning attempt was defeated. Pauline Browes, at the time a Scarborough MP in Prime Minister Brian Mulroney’s government, then stepped in and started the land on its march toward protected status. She convinced Tom McMillan, minister of the environment at the time, to see the valley for himself. He was impressed, and in 1988 offered $10 million of federal money to establish a park around the Rouge Valley.
The land in question, however, was provincial and municipal, not federal. And so began nearly a quarter century of governments trying to do what they don’t do best: cooperate.
Six years after the offer of federal money, the province announced a park management plan and a governing body called the Rouge Park Alliance, and in 1995 the sprawling inter-municipal park was born. Much of what happened — or didn’t happen — over the next 18 years leading up to last year’s announcement can be explained by noting that the Rouge Park Alliance was made up of 14 representatives from the municipal, regional, provincial and federal levels of government, plus the Toronto and Region Conservation Authority, the Waterfront Regeneration Trust, the Toronto Zoo and the Save the Rouge Valley System.
According to McKinnon, all those voices led to decision-making gridlock. “It kept things from happening,” he says. But he adds that’s not necessarily a bad thing, pointing out that if no one could agree on just what to do with the land, it would stay more or less the same, with few facilities, modest promotion and little money for wide-scale renaturalization.
On the other hand, leaving the park alone has left it underserviced. There are just 16 kilometres of sanctioned trails. The only bathrooms are at Lake Ontario. There are no visitor centres, canoe rentals, drinking fountains or cycling trails.
There are, however, bus stops. The fact that public transit can get visitors from across the Greater Toronto Area to Rouge Park fits nicely with Parks Canada’s new priority of being more appealing and accessible to people who don’t have many national park landscapes as backdrops in family photo albums.
“Our protected places are difficult to get to,” says Veinotte. “But one in five Canadians lives within an hour’s drive of this park.” She adds that by 2018, visitors can expect more than the smattering of roadside signs that mark the park’s boundaries today. “You will feel a sense of arrival that befits a very special place,” she says. Though quick to repeat that plans are formative, Veinotte envisions a visitor’s centre, interpretive displays, north–south connections to link disjointed hiking trails and new multi-use trails.
Job number one, though, will be to let nature lovers know what they can find just a bus ride away. “We need better awareness of it,” says Veinotte, who was surprised at the number of people at the 30 community meetings held last summer who said they had only recently heard about the park. “Without it being top of mind, there won’t be support for conservation.”
There was a time when Jim Robb’s voice was among those calling for Parks Canada to set up shop in Rouge Park. Now he’s presenting evidence to the federal Standing Committee on Environment and Sustainable Development, arguing that Parks Canada’s concept of a national urban park will “undermine” Rouge Park’s ecological health by prioritizing people over nature.
I paddled 20 kilometres of the Rouge River with him in May 2011. Our trip started in Milne Dam Conservation Area, a non-contiguous parcel of Rouge Park in Markham, where we canoed across a reservoir first created for industry in the 1820s. Kingfishers and blue herons fished nearby as we paddled, and Robb, a forester by trade, told me he’d like to see the small lake renaturalized into a wetland. Robb has been advocating for the Rouge for almost 27 years, first with Save the Rouge Valley System. He’s been with Friends of the Rouge Watershed since 1992, and is now that organization’s general manager. As a former vice-chair of Ontario’s Environmental Assessment Board, he knows his way around conservation regimes.
As our canoe glanced off rocks, Robb lamented how upstream deforestation had lowered the water table. He told me that with five per cent forest cover, Markham is one of the least forested municipalities in southern Ontario, and detailed the resulting problems with water quality and erosion. “That’s why we need Parks Canada here,” he said. “They are the ecological specialists.”
A year and a half later, with Parks Canada having released their park concept, Robb and Kevin O’Connor, president of Friends of the Rouge Watershed, are giving me a tour by road. They have come to think that the sometimes-impotent Rouge Park Alliance — disbanded last year to make way for Parks Canada — looked pretty good.
Robb drives to the proposed northern boundary of the park. It ends abruptly at a subdivision where 19th Avenue meets York Durham Line road. “Parks Canada says Rouge Park is supposed to link Lake Ontario to the foot of the Oak Ridges Moraine,” says Robb, referencing the 160-kilometre-long stretch of rolling glacial till across the top of the Greater Toronto Area that’s provincially protected. “This isn’t the foot, this is the toe. These boundaries were based on politics, not hydrology.” He shows me a map that indicates the northern end of the park does overlap the moraine, but not by much. Then he points east across the road to more than 40 square kilometres of land within the provincial greenbelt that’s already owned by the federal government. This federal land, the map shows, extends four kilometres farther north than the proposed park, deeper into moraine territory.
The land was excluded from the study area that the Rouge Park Alliance provided to Parks Canada. Robb believes that the broad interests of a larger, ecologically healthy park fell victim to the narrower interests of farmers who are already leasing ground in these off-limits federal lands. He says he doesn’t blame Parks Canada, and that electoral politics have put a limit on the park, at least for now.
Most worrisome to Robb and O’Connor is Parks Canada’s silence on the issue of a 600-metre-wide corridor along the Little Rouge River, where ecology should trump all else, something that has been enshrined in every planning document since the park’s creation. He doesn’t see it explicitly mentioned in Parks Canada’s 18-page park concept. Robb cites a 2013 Environment Canada study called How Much Habitat is Enough?, which argues that the minimum width of a forested area needed to create interior forest habitat is 500 metres. “It’s only then that you get a natural system, with a closed tree canopy that is safe from edge effects like an abundance of raccoons, skunks, crows and invasive species,” he says.
Despite the lack of mention in the park concept, Veinotte says Parks Canada is “committed to an ecological connection,” and points out that the strategic management plan for the park, a significantly more detailed document than the concept statement, has yet to be written.
O’Connor, however, still wishes the starting point were a little more ecology-minded. “This is not what we were advocating for,” he says. “Nature isn’t the focus. If people are put first, we will overuse and abuse it. In the 1990s, the Rouge Park Alliance chose the best long-term vision for this park. We are asking for the same. I want to come here before I die and look at something magnificent, not a tattered quilt.”
But “tattered” is a word some might use to describe the current park, operating as it is with only six staff, relying largely on volunteer labour for conservation work.
Serena Lawrie is a board member of the Rouge Valley Foundation, the organization that operates the Rouge Valley Conservation Centre, which runs regular guided walks through the park. As she stands outside the centre’s farmhouse headquarters waiting for a school group, I ask her if the park is being adequately protected as is. “Not really,” she says, hesitantly. “There isn’t much money. The park hasn’t done a major, season-wide species inventory since the late 1990s. The trails are falling apart. People don’t know where to go, so they make their own trails. But nobody polices it.” Lawrie notes that the publicity generated by Parks Canada’s arrival has already resulted in more people visiting, ultimately a good thing for her organization, which tries to raise awareness for the park. “It used to be mainly retirees, but there are a lot more young people now,” she says.
Speaking of young people, two classes from Scarborough’s St. Kevin Catholic School have arrived. Teacher Neil Kulim herds them into groups. Kulim tells me how his father brought him to the valley in the 1980s to ride bikes, get lost, climb ravine walls and explore crumbling homesteads. In turn, Kulim has been bringing classes to the park since 2007.
“I thought we were going to lose most of this place in the 1990s,” he tells me, gesturing past the end of a meadow along the Lookout Trail toward houses on the far side of Meadowvale Road. “I remember knocking on doors in that neighbourhood in 1999 and handing out flyers about the park. People would ask me, ‘What’s the Rouge Valley?’”
A group leader asks if anyone knows what a dung beetle is. Half the hands go up. Kulim points to a student near the back in brand-new rubber boots and tells me that last week the student didn’t know what a canoe or coyote was. I follow one boy who stops in front of a puddle on the trail. “These Jordans cost $200,” he points out, before steering his sneakers well clear of the wet. We descend to a gravel flat beside the knee-deep Little Rouge River. The students investigate tentatively. One of them asks Kulim if they can touch the water.
“Hey, I found a turtle,” one kid yells.
“No, you found a frog,” corrects another.
As the kids explore, I ask Joanne Willock, a parent volunteer, how often she visits the park. She looks at her son standing on the bank. “It’s embarrassing,” she says. “I didn’t even really know this was here. We don’t live that far away. All these housing developments around here, they’re called ‘Valley This’ and ‘Valley That,’ but you lose sight of what a real valley is. He’s nine and this is the first time he’s walked in a river.” I look over and see that despite the initial apprehension, the childhood inclination toward getting one’s feet wet has taken over. Those in sneakers are up to their ankles. The tops of more than one rubber boot have been breached.
Even the kid who didn’t want to get his running shoes dirty has to be called back from the gravel bar he’s wandering down. When he returns, clay has caked his laces. It’s taken five minutes to turn him into a poster child for Parks Canada’s new stated aim of introducing kids, city-folk and new Canadians to nearby nature.
It’s later that same afternoon that I’m looking out over raccoons and swans at the pond with Andy McKinnon, the amateur naturalist. He points to a stand of birch trees beyond the north bank that he says were some of the first planted in the park. Last year, volunteers planted 100,000 saplings across the park. McKinnon has mixed feelings about such restorations, and believes that, ideally, nature needs only time to heal. But he knows that human influence is not going away in Rouge Park. “I’ve seen more interest in the park already,” he says. “There are more and more people around.”
That there are more visitors to the park no doubt pleases those who envision Rouge Park as a gateway to nature. And while McKinnon might worry there could be a flood of people, he hopes Parks Canada can strike the right balance. “Parks Canada has a good reputation,” he says. “It could be good for the park to be managed by one body… if they get the legislation right.”
He stops as a dark brown head surfaces beside the near shore. It dives forward, and a slender back arcs above the surface before leaving only ripples. “There she is,” says McKinnon, pointing at the otter. Before long, we spot two others — it’s a mother and her two pups — and McKinnon recalls watching the mother drag her young into the water for their first swim. There have been a few recent sightings like this, confirming that, after a 30-year absence, otters have returned to the Rouge Valley. McKinnon isn’t convinced there’s a viable breeding population yet, but it’s a good start.
The otters’ foothold might seem tenuous, but as Veinotte sees it, for Parks Canada’s urban mission to succeed, they will need to safeguard populations like this. She disagrees that more people enjoying the park will threaten its plants and animals. “The success of one depends on the other,” she says. In other words, if Parks Canada wants to introduce people to nature, there had better be nature to behold upon arrival.
Explore Canada’s national parks, national historic sites and national marine conservation areas with students using Canadian Geographic Education’s Parks Canada giant floor map.
Center for the Study of World Religions, Harvard Divinity School
Religions of the World and Ecology Series
Mary Evelyn Tucker and John Grim, Yale University
The Challenge of the Environmental Crisis
Ours is a period when the human community is in search of new and sustaining relationships to the earth amidst an environmental crisis that threatens the very existence of all life-forms on the planet. While the particular causes and solutions of this crisis are being debated by scientists, economists, and policymakers, the facts of widespread destruction are causing alarm in many quarters. Indeed, from some perspectives the future of human life itself appears threatened. As Daniel Maguire has succinctly observed, “If current trends continue, we will not.”1 Thomas Berry, the former director of the Riverdale Center for Religious Research, has also raised the stark question, “Is the human a viable species on an endangered planet?”
From resource depletion and species extinction to pollution overload and toxic surplus, the planet is struggling against unprecedented assaults. This is aggravated by population explosion, industrial growth, technological manipulation, and military proliferation heretofore unknown by the human community. From many accounts the basic elements which sustain life (sufficient water, clean air, and arable land) are at risk. The challenges are formidable and well documented. The solutions, however, are more elusive and complex. Clearly, this crisis has economic, political, and social dimensions which require more detailed analysis than we can provide here. Suffice it to say, however, as did the Global 2000 Report: “… once such global environmental problems are in motion they are difficult to reverse. In fact few if any of the problems addressed in the Global 2000 Report are amenable to quick technological or policy fixes; rather, they are inextricably mixed with the world’s most perplexing social and economic problems.”2
Peter Raven, the director of the Missouri Botanical Garden, wrote in a paper titled, “We Are Killing Our World,” with a similar sense of urgency regarding the magnitude of the environmental crisis: “The world that provides our evolutionary and ecological context is in serious trouble, trouble of a kind that demands our urgent attention. By formulating adequate plans for dealing with these large-scale problems, we will be laying the foundation for peace and prosperity in the future; by ignoring them, drifting passively while attending to what may seem more urgent, personal priorities, we are courting disaster.”
Rethinking Worldviews and Ethics
For many people an environmental crisis of this complexity and scope is not only the result of certain economic, political, and social factors. It is also a moral and spiritual crisis which, in order to be addressed, will require broader philosophical and religious understandings of ourselves as creatures of nature, embedded in life cycles and dependent on ecosystems. Religions, thus, need to be re-examined in light of the current environmental crisis. This is because religions help to shape our attitudes toward nature in both conscious and unconscious ways. Religions provide basic interpretive stories of who we are, what nature is, where we have come from, and where we are going. This comprises a worldview of a society. Religions also suggest how we should treat other humans and how we should relate to nature. These values make up the ethical orientation of a society. Religions thus generate worldviews and ethics which underlie fundamental attitudes and values of different cultures and societies. As the historian Lynn White observed, “What people do about their ecology depends on what they think about themselves in relation to things around them. Human ecology is deeply conditioned by beliefs about our nature and destiny—that is, by religion.”3
In trying to reorient ourselves in relation to the earth, it has become apparent that we have lost our appreciation for the intricate nature of matter and materiality. Our feeling of alienation in the modern period has extended beyond the human community and its patterns of material exchanges to our interaction with nature itself. Especially in technologically sophisticated urban societies, we have become removed from the recognition of our dependence on nature. We no longer know who we are as earthlings; we no longer see the earth as sacred.
Thomas Berry suggests that we have become autistic in our interactions with the natural world. In other words, we are unable to value the life and beauty of nature because we are locked in our own egocentric perspectives and shortsighted needs. He suggests that we need a new cosmology, cultural coding, and motivating energy to overcome this deprivation.4 He observes that the magnitude of destructive industrial processes is so great that we must initiate a radical rethinking of the myth of progress and of humanity’s role in the evolutionary process. Indeed, he speaks of evolution as a new story of the universe, namely, as a vast cosmological perspective that will resituate human meaning and direction in the context of four and a half billion years of earth history.5
For Berry and for many others an important component of the current environmental crisis is spiritual and ethical. It is here that the religions of the world may have a role to play in cooperation with other individuals, institutions, and initiatives that have been engaged with environmental issues for a considerable period of time. Despite their lateness in addressing the crisis, religions are beginning to respond in remarkably creative ways. They are not only rethinking their theologies but are also reorienting their sustainable practices and long-term environmental commitments. In so doing, the very nature of religion and of ethics is being challenged and changed. This is true because the reexamination of other worldviews created by religious beliefs and practices may be critical to our recovery of sufficiently comprehensive cosmologies, broad conceptual frameworks, and effective environmental ethics for the twenty-first century.
While in the past none of the religions of the world have had to face an environmental crisis such as we are now confronting, they remain key instruments in shaping attitudes toward nature. The unintended consequences of the modern industrial drive for unlimited economic growth and resource development have led us to an impasse regarding the survival of many life-forms and appropriate management of varied ecosystems. The religious traditions may indeed be critical in helping to reimagine the viable conditions and long-range strategies for fostering mutually enhancing human-earth relations.6 Indeed, as E. N. Anderson has documented with impressive detail, “All traditional societies that have succeeded in managing resources well, over time, have done it in part through religious or ritual representation of resource management.”7
It is in this context that a series of conferences and publications exploring the various religions of the world and their relation to ecology was initiated by the Center for the Study of World Religions at Harvard University. Coordinated by Mary Evelyn Tucker and John Grim, the conferences involved some 800 scholars, graduate students, religious leaders, and environmental activists over a period of three years. The collaborative nature of the project is intentional. Such collaboration maximizes the opportunity for dialogical reflection on this issue of enormous complexity and accentuates the diversity of local manifestations of ecologically sustainable alternatives.
This series is intended to serve as initial exploration of the emerging field of religion and ecology while pointing toward areas for further research. We are not unaware of the difficulties of engaging in such a task, yet we have been encouraged by the enthusiastic response to the conferences within the academic community, by the larger interest they have generated beyond academia, and by the probing examinations gathered in the volumes. We trust that this series and these volumes will be useful not only for scholars of religion but also for those shaping seminary education and institutional religious practices, as well as for those involved in environmental public policy.
We see such conferences and publications as expanding the growing dialogue regarding the role of the world’s religions as moral forces in stemming the environmental crisis. While, clearly, there are major methodological issues involved in utilizing traditional philosophical and religious ideas for contemporary concerns, there are also compelling reasons to support such efforts, however modest they may be. The world’s religions in all their complexity and variety remain one of the principal resources for symbolic ideas, spiritual inspiration, and ethical principles. Indeed, despite their limitations, historically they have provided comprehensive cosmologies for interpretive direction, moral foundations for social cohesion, spiritual guidance for cultural expression, and ritual celebrations for meaningful life. In our search for more comprehensive ecological worldviews and more effective environmental ethics, it is inevitable that we will draw from the symbolic and conceptual resources of the religious traditions of the world. The effort to do this is not without precedent or problems, some of which will be signaled below. With this volume and with this series we hope the field of reflection and discussion regarding religion and ecology will begin to broaden, deepen, and complexify.
Qualifications and Goals
The Problems and Promise of Religions
These volumes, then, are built on the premise that the religions of the world may be instrumental in addressing the moral dilemmas created by the environmental crisis. At the same time we recognize the limitations of such efforts on the part of religions. We also acknowledge that the complexity of the problem requires interlocking approaches from such fields as science, economics, politics, health, and public policy. As the human community struggles to formulate different attitudes toward nature and to articulate broader conceptions of ethics embracing species and ecosystems, religions may thus be a necessary, though only contributing, part of this multidisciplinary approach.
It is becoming increasingly evident that abundant scientific knowledge of the crisis is available and numerous political and economic statements have been formulated that reflect this concern. Yet we seem to lack the political, economic, and scientific leadership to make necessary changes. Moreover, what is still lacking is the religious commitment, moral imagination, and ethical engagement to transform the environmental crisis from an issue on paper to one of effective policy, from rhetoric in print to realism in action. Why, nearly fifty years after Fairfield Osborn’s warning in Our Plundered Planet and more than thirty years since Rachel Carson’s Silent Spring, are we still wondering, is it too late?8
It is important to ask where the religions have been on these issues and why they themselves have been so late in their involvement. Have issues of personal salvation superseded all others? Have divine-human relations been primary? Have anthropocentric ethics been all-consuming? Has the material world of nature been devalued by religion? Does the search for otherworldly rewards override commitment to this world? Did the religions simply surrender their natural theologies and concerns with exploring purpose in nature to positivistic scientific cosmologies? In beginning to address these questions, we still have not exhausted all the reasons for religions’ lack of attention to the environmental crisis. The reasons may not be readily apparent, but clearly they require further exploration and explanation.
In discussing the involvement of religions in this issue, it is also appropriate to acknowledge the dark side of religion in both its institutional expressions and dogmatic forms. In addition to their oversight with regard to the environment, religions have been the source of enormous manipulation of power in fostering wars, in ignoring racial and social injustice, and in promoting unequal gender relations, to name only a few abuses. One does not want to underplay this shadow side or to claim too much for religions’ potential for ethical persuasiveness. The problems are too vast and complex for unqualified optimism. Yet there is a growing consensus that religions may now have a significant role to play, just as in the past they have sustained individuals and cultures in the face of internal and external threats.
A final caveat is the inevitable gap that arises between theories and practices in religions. As has been noted, even societies with religious traditions which appear sympathetic to the environment have in the past often misused resources. While it is clear that religions may have some disjunction between the ideal and the real, this should not lessen our endeavor to identify resources from within the world’s religions for a more ecologically sound cosmology and environmentally supportive ethics. This disjunction of theory and practice is present within all philosophies and religions and is frequently the source of disillusionment, skepticism, and cynicism. A more realistic observation might be made, however, that this disjunction should not automatically invalidate the complex worldviews and rich cosmologies embedded in traditional religions. Rather, it is our task to explore these conceptual resources so as to broaden and expand our own perspectives in challenging and fruitful ways.
In summary, we recognize that religions have elements which are both prophetic and transformative as well as conservative and constraining. These elements are continually in tension, a condition which creates the great variety of thought and interpretation within religious traditions. To recognize these various tensions and limits, however, is not to lessen the urgency of the overall goals of this project. Rather, it is to circumscribe our efforts with healthy skepticism, cautious optimism, and modest ambitions. It is to suggest that this is a beginning in a new field of study which will affect both religion and ecology. On the one hand, this process of reflection will inevitably change how religions conceive of their own roles, missions, and identities, for such reflections demand a new sense of the sacred as not divorced from the earth itself. On the other hand, environmental studies can recognize that religions have helped to shape attitudes toward nature. Thus, as religions themselves evolve they may be indispensable in fostering a more expansive appreciation for the complexity and beauty of the natural world. At the same time as religions foster awe and reverence for nature, they may provide the transforming energies for ethical practices to protect endangered ecosystems, threatened species, and diminishing resources.
It is important to acknowledge that there are, inevitably, challenging methodological issues involved in such a project as we are undertaking in this emerging field of religion and ecology.9 Some of the key interpretive challenges we face in this project concern issues of time, place, space, and positionality. With regard to time, it is necessary to recognize the vast historical complexity of each religious tradition, which cannot be easily condensed in these conferences or volumes. With respect to place, we need to signal the diverse cultural contexts in which these religions have developed. With regard to space, we recognize the varied frameworks of institutions and traditions in which these religions unfold. Finally, with respect to positionality, we acknowledge our own historical situatedness at the end of the twentieth century with distinctive contemporary concerns.
Not only is each religious tradition historically complex and culturally diverse, but its beliefs, scriptures, and institutions have themselves been subject to vast commentaries and revisions over time. Thus, we recognize the radical diversity that exists within and among religious traditions which cannot be encompassed in any single volume. We acknowledge also that distortions may arise as we examine earlier historical traditions in light of contemporary issues.
Nonetheless, environmental ethics philosopher J. Baird Callicott has suggested that scholars and others “mine the conceptual resources” of the religious traditions as a means of creating a more inclusive global environmental ethics.10 As Callicott himself notes, however, the notion of “mining” is problematic, for it conjures up images of exploitation which may cause apprehension among certain religious communities, especially those of indigenous peoples. Moreover, we cannot simply expect to borrow ideas from one tradition and place them directly into another. Even efforts to formulate global environmental ethics need to be sensitive to cultural particularity and diversity. We do not aim at creating a simple bricolage or bland fusion of perspectives. Rather, these conferences and volumes are an attempt to display before us a multiperspectival cross section of the symbolic richness regarding attitudes toward nature within the world’s religious traditions. To do so will help to reveal certain commonalities among traditions, as well as limitations within traditions, as they begin to converge around this challenge presented by the environmental crisis.
We need to identify our concerns, then, as embedded in the constraints of our own perspectival limits at the same time as we seek common ground. In describing various attitudes toward nature historically, we are aiming at critical understanding of the complexity, contexts, and frameworks in which these religions articulate such views. In addition, we are striving for empathetic appreciation for the traditions without idealizing their ecological potential or ignoring their environmental oversights. Finally, we are aiming at the creative revisioning of mutually enhancing human-earth relations. This revisioning may be assisted by highlighting the multi-perspectival attitudes toward nature which these traditions disclose. The prismatic effect of examining such attitudes and relationships may provide some necessary clarification and symbolic resources for reimagining our own situation and shared concerns at the end of the twentieth century. It will also be sharpened by identifying the multilayered symbol systems in world religions which have traditionally oriented humans in establishing relational resonances between the microcosm of the self and the macrocosm of the social and natural orders. In short, religious traditions may help to supply both creative resources of symbols, rituals, and texts as well as inspiring visions for reimagining ourselves as part of, not apart from, the natural world.
The methodological issues outlined above were implied in the overall goals of the conferences, which were described as follows:
- To identify and evaluate the distinctive ecological attitudes, values, and practices of diverse religious traditions, making clear their links to intellectual, political, and other resources associated with these distinctive traditions.
- To describe and analyze the commonalities that exist within and among religious traditions with respect to ecology.
- To identify the minimum common ground on which to base constructive understanding, motivating discussion, and concerted action in diverse locations across the globe; and to highlight the specific religious resources that comprise such fertile ecological ground: within scripture, ritual, myth, symbol, cosmology, sacrament, and so on.
- To articulate in clear and moving terms a desirable mode of human presence with the earth; in short, to highlight means of respecting and valuing nature, to note what has already been actualized, and to indicate how best to achieve what is desirable beyond these examples.
- To outline the most significant areas, with regard to religion and ecology, in need of further study; to enumerate questions of highest priority within those areas and propose possible approaches to use in addressing them.
In this series, then, we do not intend to obliterate difference or ignore diversity. The aim is to celebrate plurality by raising to conscious awareness multiple perspectives regarding nature and human-earth relations as articulated in the religions of the world. The spectrum of cosmologies, myths, symbols, and rituals within the religious traditions will be instructive in resituating us within the rhythms and limits of nature.
We are not looking for a unified worldview or a single global ethic. We are, however, deeply sympathetic with the efforts toward formulating a global ethic made by individuals, such as the theologian Hans Küng or the environmental philosopher J. Baird Callicott, and groups, such as Global Education Associates and United Religions. A minimum content of environmental ethics needs to be seriously considered. We are, then, keenly interested in the contribution this series might make to discussions of environmental policy in national and international arenas. Important intersections may be made with work in the field of development ethics.11 In addition, the findings of the conferences have bearing on the ethical formulation of the Earth Charter that is to be presented to the United Nations for adoption within the next few years. Thus, we are seeking both the grounds for common concern and the constructive conceptual basis for rethinking our current situation of estrangement from the earth. In so doing we will be able to reconceive a means of creating the basis not just for sustainable development, but also for sustainable life on the planet.
As scientist Brian Swimme has suggested, we are currently making macrophase changes to the life systems of the planet with microphase wisdom. Clearly, we need to expand and deepen the wisdom base for human intervention with nature and other humans. This is particularly true as issues of genetic alteration of natural processes are already available and in use. If religions have traditionally concentrated on divine-human and human-human relations, the challenge is that they now explore more fully divine-human-earth relations. Without such further exploration, adequate environmental ethics may not emerge in a comprehensive context.
Resources: Environmental Ethics Found in the World’s Religions
For many people, when challenges such as the environmental crisis are raised in relation to religion in the contemporary world, there frequently arises a sense of loss or a nostalgia for earlier, seemingly less complicated eras when the constant questioning of religious beliefs and practices was not so apparent. This is, no doubt, something of a reified reading of history. There is, however, a decidedly anxious tone to the questioning and soul-searching that appears to haunt many contemporary religious groups as they seek to find their particular role in the midst of rapid technological change and dominant secular values.
One of the greatest remaining challenges to contemporary religions is how to respond to the environmental crisis, a crisis that many believe has been perpetuated because of the enormous inroads made by unrestrained materialism, secularization, and industrialization in contemporary societies, especially in societies arising in or influenced by the modern West. Indeed, some suggest that the very division of religion from secular life may be a major cause of the crisis.
Others, such as the medieval historian Lynn White, have cited religion’s negative role in the crisis. White has suggested that the emphasis in Judaism and Christianity on the transcendence of God above nature and the dominion of humans over nature has led to a devaluing of the natural world and a subsequent destruction of its resources for utilitarian ends.12 While the particulars of this argument have been vehemently debated, it is increasingly clear that the environmental crisis and its perpetuation due to industrialization, secularization, and ethical indifference present a serious challenge to the world’s religions. This is especially true because many of these religions have traditionally been concerned with the path of personal salvation, which frequently emphasized otherworldly goals and rejected this world as corrupting. Thus, as we have noted, how to adapt religious teachings to this task of revaluing nature so as to prevent its destruction marks a significant new phase in religious thought. Indeed, as Thomas Berry has so aptly pointed out, if the human is to continue as a viable species on an increasingly degraded planet, what is necessary is a comprehensive reevaluation of human-earth relations. This will require, in addition to major economic and political changes, examining worldviews and ethics among the world’s religions that differ from those that have captured the imagination of contemporary industrialized societies that regard nature primarily as a commodity to be utilized. It should be noted that when we are searching for effective resources for formulating environmental ethics, each of the religious traditions has both positive and negative features.
For the most part, the worldviews associated with the Western Abrahamic traditions of Judaism, Christianity, and Islam have created a dominantly human-focused morality. Because these worldviews are largely anthropocentric, nature is viewed as being of secondary importance. This is reinforced by a strong sense of the transcendence of God above nature. On the other hand, there are rich resources for rethinking views of nature in the covenantal tradition of the Hebrew Bible, in sacramental theology, in incarnational Christology, and in the vice-regency (khalifa Allah) concept of the Qur’an. The covenantal tradition draws on the legal agreements of biblical thought which are extended to all of creation. Sacramental theology in Christianity underscores the sacred dimension of material reality, especially for ritual purposes.13 Incarnational Christology proposes that because God became flesh in the person of Christ, the entire natural order can be viewed as sacred. The concept of humans as vice-regents of Allah on earth suggests that humans have particular privileges, responsibilities, and obligations to creation.14
In Hinduism, although there is a significant emphasis on performing one’s dharma, or duty, in the world, there is also a strong pull toward moksha, or liberation, from the world of suffering, or samsara. To heal this kind of suffering and alienation through spiritual discipline and meditation, one turns away from the world (prakrti) to a timeless world of spirit (purusa). Yet at the same time there are numerous traditions in Hinduism which affirm particular rivers, mountains, or forests as sacred. Moreover, in the concept of lila, the creative play of the gods, Hindu theology engages the world as a creative manifestation of the divine. This same tension between withdrawal from the world and affirmation of it is present in Buddhism. Certain Theravada schools of Buddhism emphasize withdrawing in meditation from the transient world of suffering (samsara) to seek release in nirvana. On the other hand, later Mahayana schools of Buddhism, such as Hua-yen, underscore the remarkable interconnection of reality in such images as the jeweled net of Indra, where each jewel reflects all the others in the universe. Likewise, the Zen gardens in East Asia express the fullness of the Buddha-nature (tathagatagarbha) in the natural world. In recent years, socially engaged Buddhism has been active in protecting the environment in both Asia and the United States.
The East Asian traditions of Confucianism and Taoism remain, in certain ways, some of the most life-affirming in the spectrum of world religions.15 The seamless interconnection between the divine, human, and natural worlds that characterizes these traditions has been described as an anthropocosmic worldview.16 There is no emphasis on radical transcendence as there is in the Western traditions. Rather, there is a cosmology of a continuity of creation stressing the dynamic movements of nature through seasons and agricultural cycles. This organic cosmology is grounded in the philosophy of ch’i (material force), which provides a basis for appreciating the profound interconnection of matter and spirit. The aim of personal cultivation in both Confucianism and Taoism is to be in harmony with nature and with other humans while being attentive to the movements of the Tao (Way). It should be noted, however, that this positive worldview has not prevented environmental degradation (e.g., deforestation) in parts of East Asia in both the premodern and modern periods.
In a similar vein, indigenous peoples, while having ecological cosmologies, have, in some instances, caused damage to local environments through such practices as slash-and-burn agriculture. Nonetheless, most indigenous peoples have environmental ethics embedded in their worldviews. This is evident in the complex reciprocal obligations surrounding life-taking and resource-gathering which mark a community’s relations with the local bioregion. The religious views at the basis of indigenous lifeways involve respect for the sources of food, clothing, and shelter that nature provides. Gratitude to the creator and to the spiritual forces in creation is at the heart of most indigenous traditions. The ritual calendars of many indigenous peoples are carefully coordinated with seasonal events such as the sound of returning birds, the blooming of certain plants, the movements of the sun, and the changes of the moon.
The difficulty at present is that for the most part we have developed in the world’s religions certain ethical prohibitions regarding homicide and restraints concerning genocide and suicide, but none for biocide or geocide. We are clearly in need of exploring such comprehensive cosmological perspectives and communitarian environmental ethics as the most compelling context for motivating change regarding the destruction of the natural world.
Responses of Religions to the Environmental Crisis
How to chart possible paths toward mutually enhancing human-earth relations remains one of the greatest challenges to the world’s religions. It is with some encouragement, however, that we note the growing calls for the world’s religions to participate in these efforts toward a more sustainable planetary future. There have been various appeals from environmental groups and from scientists and parliamentarians for religious leaders to respond to the environmental crisis. For example, in 1990 the Joint Appeal in Religion and Science was released highlighting the urgency of collaboration around the issue of the destruction of the environment. In 1992 the Union of Concerned Scientists issued the statement “Warning to Humanity,” signed by more than 1,000 scientists from 70 countries, including 105 Nobel laureates, regarding the gravity of the environmental crisis. They specifically cited the need for a new ethic toward the earth.
Numerous national and international conferences have also been held on this subject and collaborative efforts have been established. Environmental groups such as the World Wildlife Fund have sponsored interreligious meetings such as the one in Assisi in 1986. The Center for Respect of Life and Environment of the Humane Society of the United States has also held a series of conferences in Assisi on Spirituality and Sustainability and has helped to organize one at the World Bank. The United Nations Environmental Programme in North America has established an Environmental Sabbath, each year distributing thousands of packets of materials for use in congregations throughout North America. Similarly, the National Religious Partnership on the Environment at the Cathedral of St. John the Divine in New York City has promoted dialogue, distributed materials, and created a remarkable alliance of the various Jewish and Christian denominations in the United States around the issue of the environment. The Parliament of World Religions held in 1993 in Chicago and attended by some 8,000 people from all over the globe issued a statement of Global Ethics of Cooperation of Religions on Human and Environmental Issues. International meetings on the environment have been organized. One example of these, the Global Forum of Spiritual and Parliamentary Leaders held in Oxford in 1988, Moscow in 1990, Rio in 1992, and Kyoto in 1993, included world religious leaders, such as the Dalai Lama, and diplomats and heads of state, such as Mikhail Gorbachev. Indeed, Gorbachev hosted the Moscow conference and attended the Kyoto conference to set up a Green Cross International for environmental emergencies.
Since the United Nations Conference on Environment and Development (the Earth Summit) held in Rio in 1992, there have been concerted efforts intended to lead toward the adoption of an Earth Charter by the year 2000. This Earth Charter initiative is under way with the leadership of the Earth Council and Green Cross International, with support from the government of the Netherlands. Maurice Strong, Mikhail Gorbachev, Steven Rockefeller, and other members of the Earth Charter Project have been instrumental in this process. At the March 1997 Rio+5 Conference a benchmark draft of the Earth Charter was issued. The time is thus propitious for further investigation of the potential contributions of particular religions toward mitigating the environmental crisis, especially by developing more comprehensive environmental ethics for the earth community.
Expanding the Dialogue of Religion and Ecology
More than two decades ago Thomas Berry anticipated such an exploration when he called for “creating a new consciousness of the multi-form religious traditions of humankind” as a means toward renewal of the human spirit in addressing the urgent problems of contemporary society.17 Tu Weiming has written of the need to go “Beyond the Enlightenment Mentality” in exploring the spiritual resources of the global community to meet the challenge of the ecological crisis.18 While this exploration has also been the intention of both the conferences and these volumes, other significant efforts have preceded our current endeavor.19 Our discussion here highlights only the last decade.
In 1986 Eugene Hargrove edited a volume titled, Religion and Environmental Crisis.20 In 1991 Charlene Spretnak explored this topic in her book, States of Grace: The Recovery of Meaning in the Post-Modern Age.21 Her subtitle states her constructivist project clearly: “Reclaiming the Core Teachings and Practices of the Great Wisdom Traditions for the Well-Being of the Earth Community.” In 1992 Steven Rockefeller and John Elder edited a book based on a conference at Middlebury College titled, Spirit and Nature: Why the Environment Is a Religious Issue.22 In the same year Peter Marshall published, Nature’s Web: Rethinking Our Place on Earth,23 drawing on the resources of the world’s traditions. An edited volume titled, Worldviews and Ecology, compiled in 1993, contains articles reflecting on views of nature from the world’s religions and from contemporary philosophies, such as process thought and deep ecology.24 In this same vein, in 1994, J. Baird Callicott published Earth’s Insights which examines the intellectual resources of the world’s religions for a more comprehensive global environmental ethics.25 This expands on his 1989 volumes, Nature in Asian Traditions of Thought and In Defense of the Land Ethic.26 In 1995 David Kinsley issued a book titled, Ecology and Religion: Ecological Spirituality in a Cross-Cultural Perspective,27 which draws on traditional religions and contemporary movements, such as deep ecology and ecospirituality. Seyyed Hossein Nasr wrote his comprehensive study, Religion and the Order of Nature, in 1996.28 Several volumes of religious responses to a particular topic or theme have also been published. For example, J. Ronald Engel and Joan Gibb Engel compiled a monograph in 1990 titled, Ethics of Environment and Development: Global Challenge, International Response29 and in 1995 Harold Coward edited the volume, Population, Consumption, and the Environment: Religious and Secular Responses.30 Roger Gottlieb edited a useful source book, This Sacred Earth: Religion, Nature, Environment.31 Single volumes on the world’s religions and ecology were published by the Worldwide Fund for Nature.32
The series Religions of the World and Ecology is thus intended to expand the discussion already under way in certain circles and to invite further collaboration on a topic of common concern—the fate of the earth as a religious responsibility. To broaden and deepen the reflective basis for mutual collaboration was an underlying aim of the conferences themselves. While some might see this as a diversion from pressing scientific or policy issues, it was with a sense of humility and yet conviction that we entered into the arena of reflection and debate on this issue. In the field of the study of world religions, we have seen this as a timely challenge for scholars of religion to respond as engaged intellectuals with deepening creative reflection. We hope that these volumes will be simply a beginning of further study of conceptual and symbolic resources, methodological concerns, and practical directions for meeting this environmental crisis.
1 He goes on to say, “And that is qualitatively and epochally true. If religion does not speak to [this], it is an obsolete distraction.” Daniel Maguire, The Moral Core of Judaism and Christianity: Reclaiming the Revolution (Philadelphia: Fortress, 1993) 13.
7 E. N. Anderson, Ecologies of the Heart: Emotion, Belief, and the Environment (New York: Oxford University Press, 1996) 166. He qualifies this statement by saying, “The key point is not religion per se, but the use of emotionally powerful symbols to sell particular moral codes and management systems” (p. 166). He notes, however, in various case studies, how ecological wisdom is embedded in myths, symbols, and cosmologies of traditional societies.
10 See J. Baird Callicott, Earth’s Insights: A Survey of Ecological Ethics from the Mediterranean Basin to the Australian Outback (Berkeley, Calif.: University of California Press, 1994).
15 While this is true theoretically, it should be noted that, like all ideologies, these traditions have at times been used for purposes of political power and social control. Moreover, they have not been able to prevent certain kinds of environmental destruction, such as deforestation in China.
18 Tu Weiming, “Beyond the Enlightenment Mentality,” in Worldviews and Ecology, eds. Mary Evelyn Tucker and John Grim (Lewisburg, Pa.: Bucknell University Press, 1993; reissued, Maryknoll, N.Y.: Orbis, 1994).
19 This history has been described more fully by Roderick Nash in his chapter entitled, “The Greening of Religion,” in The Rights of Nature: A History of Environmental Ethics (Madison, Wisc.: University of Wisconsin Press, 1989).
Copyright © 1997 Center for the Study of World Religions, Harvard Divinity School.
Reprinted with permission.
Since their invention in ancient China, kites have been used for measuring distances, testing the wind, lifting people, signaling, and communication for military operations. The earliest Chinese kites were often rectangular and flat, and were decorated with mythological motifs and legendary figures like dragons. Some were fitted with strings and whistles to make music while flying. It was from China that kites were introduced to Cambodia, Japan, Thailand, India, Korea, the western world, and possibly Oceania. Though originally seen as a mere curiosity in Europe, kites would be used in the western world for scientific discovery and invention. Every American child learns about Ben Franklin and his famous kite experiment, which led to the discovery of lightning as electricity and to his invention of the lightning rod. The Wright brothers also used kites when developing the first airplane in the late 1800s. And kites were used for scientific purposes in meteorology, aeronautics, wireless communications, and even photography. But since the Wright brothers’ first flight and WWII, kites have mainly been used for recreation. All over the world you’ll find all kinds of kite festivals and competitions, especially in Asia. Nevertheless, despite living in the country, I wouldn’t be able to fly a kite in my back yard, mostly because of safety issues with power lines. I mean, the fact my back yard has a slew of power lines over it led to a bunch of trees being cut down, for God’s sake. Anyway, kites come in all types, shapes, and sizes. You’ll find sport kites, power kites, weather kites, man-lifting kites, fishing kites, underwater kites, and even fighting kites that could kill people. So for your reading pleasure, I give you a glimpse into the high-flying world of kites.
1. Sometimes a kite looks more magnificent on the inside.
Reminds me of one of those paper fortune teller contraptions. But I do love the colors.
2. On this kite, 6 hearts make a rainbow.
Then again, they’re not in rainbow order. But none of that matters to me.
3. Nothing dazzles in the sky like a rainbow 8 pointed star.
Yes, you can even have a kite star like this. Yes, I know the kites I showed so far have similar color schemes.
4. And you thought you’d never see a dragon fly.
Well, a different kind of dragon fly, anyway. But yes, they have kites of dragons. And this one is sensational.
5. Seems like this kite is made from hexagonal proportions.
There barely seems to be anything on this kite. But you have to admire the look of it in the sky.
6. At some festivals, it pays to go big and spectacular.
This kite is from a festival in Guatemala that features giant kites like this one. Though you’d almost think it’s a parachute.
7. A large star should always get a decent lift.
Yes, these kites can get quite large as you can see. But I really love the colors which I think are perfect for Easter.
8. With a kite, you can send a rainbow soaring.
Yes, rainbows are a common theme in these. But I really like how this dances in the sky.
9. A butterfly kite should always spread its wings.
Because there’s no better spring kite than that of a butterfly. You also have to love the colors and tails.
10. A kite should always fly like a bird.
And there you have a bird kite. But there’s a bird following it. Wonder if it thinks it’s another bird or a decoy.
11. A beautiful kite comes with many layers of color.
Not sure what this kite is supposed to be. But maybe it’s built for function, not aesthetic effect.
12. This rainbow plane always flies high in the sky.
Once again, you see rainbows. This time on a plane kite which is somewhat charming.
13. This large star kite has all kinds of colors and tails.
This one has a 2-color diamond pattern, as you can see. Some might find it tacky. But I find it wondrous.
14. A colorful ship looks even more magnificent in the air.
Yes, a rainbow ship always ventures across seas of sky. Love it.
15. A large kite should be light enough to fly.
I guess this is a kite from the island nations. Looks quite pretty.
16. A rainbow bird is always a colorful sight.
As you might read in my mythical creature series, you might find a rainbow chicken from the Philippines. Though this dazzles wonderfully.
17. You can put in a lot of different pictures in a kite.
You can find everything on here from mythical creatures to pop culture icons. You can even find Jimi Hendrix so excuse him while he kisses the sky.
18. You can fit a lot of triangles in a pyramid kite.
Yes, these kites do exist. Though I kind of wish this one had more color like the others.
19. Swirls always look better when up in the air.
I’ll probably feature many geometrically designed kites on this post. Though I really like the pentagon shape and tail on this.
20. Is that supposed to be a deity or a mythical creature?
Well, the art is squarely from Asia. But it’s also quite dazzling in the sky.
21. Some of these kites in the sky can have very long tails.
As you can see it’s an Asian design. Nevertheless, kite flying is very big in many Asian countries, especially in China and India.
22. Nothing says you can’t have a bunch of sharks on the line.
Well, as long as they’re flown in the air and are of different colors. Because shark hunting shouldn’t be encouraged.
23. A colorful kite should at least have wings and a tail.
Now this is a rather strange design. But I really like the tails and colors. Lovely.
24. With this kite, we can test whether a cow can actually jump over the moon.
Okay, it’s probably not possible. But fly this one in the sky and you might have people questioning their mental state.
25. There’s something strange about this kite.
This may be a traditional kite shape, but the crayon face on it is creepy.
26. A blue owl kite is always a hoot.
Of course, owls have to serve as motifs as well. Since they’re birds of prey after all.
27. A Chinese dragon in the sky is a magnificent sight.
By the way, these Chinese dragon kites can be more than 100 feet long. Definitely not something I can fly where I live.
28. Sometimes kites are flown to denote special occasions.
You can easily tell what this kite’s celebrating. Give you a hint, it was held in Rio last summer.
29. Two cranes are sometimes better than one.
One has pink wings while the other has bluish green. But together they fit on a kite banner quite nicely.
30. And I thought I had to worry about sharks on the water.
Best you don’t fly this kite during your trip to Amity Island beach. Especially when there’s a man-eating shark in the shallows.
31. I’m sure nobody could resist a high flying rainbow fish.
Doesn’t hurt if it’s flown in the snow. Though I’d proceed with caution in winter weather.
32. Sometimes a Chinese dragon has to have a rainbow tail.
Yes, these Chinese dragons can be quite elaborate as you can see. Though I really love this one.
33. A kite like this looks quite foxy in the sky.
Guess this is what you’d call a fox kite. Has a nice cute little face to it.
34. A hexagonal design always impresses.
This one has 2 pegasus unicorns with rainbow wings. Love how its rods stick out.
35. Looks like the eyes have it from above.
Sure seeing these eyes might make you feel like you’re being watched. But I’m sure they don’t see anything.
36. This kite is all string and wings.
Yet, I’m sure it’s able to fly. Though I’m not exactly sure how. Love the rainbow design.
37. This diamond kite comes in a few pieces as I recall.
Like some of the others, this is in an Asian design. But it has a nice red, white, black, and blue pattern.
38. On some kite chains, you’ll find all kinds of shapes put together.
You have 2 diamonds in the front and a few other weird shapes in the back. And they’re all in different colors just the same.
39. Sometimes the sky is home to a monster kite or two.
And I think this one was conceived during a bad acid trip. How else could I explain the eye and fangs?
40. Now that is an interesting box kite.
Box kites usually have rectangles on each side. But this one takes the box kite to an artistic dimension.
41. Is this a fancy hypodermic needle or a fishing lure?
Maybe it’s a shape from Asian art or mythology. That can explain a lot.
42. Even a small kite can sport some long tails.
Once again, you see a rainbow pattern on the kite. Guess rainbows on kites are quite popular.
43. Even the sky has its share of scary clowns.
Sure it might look funny now. But as Lon Chaney said, a clown is never funny in the moonlight.
44. 3-D hexagon patterns can always dazzle in the sky.
Each of these consists of different colors and patterns. Still, wonder how someone could fly this.
45. A white bird always makes a graceful presence in flight.
You can see the white bird in a kite like this. Not sure if it’s supposed to be a seagull or a dove.
46. Hexagons can have all kinds of patterns.
You can see this from this hexagon kite chain. Each one features a different color.
47. Almost any work of art can be shown on a kite in the air.
And this kite of a woman is no exception. Of course, on rectangular kites, you can have any image you want.
48. With this kite, you can color your own world.
Helps if the tails resemble pencils. Though I’d guess this design is quite delicate.
49. You’d almost swear this was a rainbow parachute.
Yes, this is a kite. I know people may not agree with me. But it is a kite. Love it.
50. You’d almost swear this kite was a large fancy dart board.
Yes, this is another large Guatemalan kite. I don’t think you can fly it. But it’s quite nice to look at.
51. In Malaysia, you’ll find a very special kind of kite reflecting their national pride.
This is called a wau bulan or Malaysian moon kite. And they can come in all kinds of designs.
52. String diamond kites together and you’ll have high flying spectacle.
By themselves, they wouldn’t amount to much. But together, they’re a worthy sight to see.
53. A rainbow tube can always fly swept by the wind.
You can see these on a beach. Each has their unique pattern blowing in the wind.
54. Nobody could resist an enormous flower in the sky.
Particularly if it’s a colorful one made with turned squares. Love it.
55. Even an octopus can take to the skies.
Saw a lot of these on Pinterest. And yes, they’re widely available. Though an octopus in the air is strange for me.
56. You can never miss a colorful bird in flight.
There are quite a few kites like this. Yet, I chose to post the one I liked best.
57. You can even see fish take to the skies.
These fish kites are all on a line as the wind blows through them. And all of these probably come from Asia.
58. This diamond kite is a perfect prism, indeed.
Well, at least this one has a rainbow on all sides. And it’s in a simple shape.
59. With this kite, you’d find a rainbow in a weave.
Yes, it’s another rainbow kite with an unremarkable shape. But at least its pattern is quite interesting.
60. A flamingo kite can always remain up in the air.
Yes, they have flamingo kites, too. And I’m sure they’re popular in Florida just the same.
61. Is that a kite or a spiked parachute?
It’s actually a kite. Because that’s not an appropriate parachute design. Still, it’s quite stunning.
62. A bird of many colors should always soar.
Apparently, you can’t help but look at this colorful bird. Though it’s actually a kite held by a line.
63. Bet you’d never see a kite of a black puffer fish.
Yes, it might look cute when all puffed up. But remember, puffer fish are poisonous and can kill you.
64. A rainbow kite can always show off its colors.
This one even has clouds and is tied with a string at the center. Lovely.
65. Sometimes a kite can be designed so intricately, you can’t tell what the shape is.
There’s a blue version of this, too, by the way. But I don’t have the slightest idea what it’s supposed to resemble.
66. With this kite, you can color the skies.
Like how the kite is decked with crayons and its tails are squiggles. Wouldn’t mind seeing this in my neighborhood.
67. Never thought I’d ever see a colorful tulip fly.
At first, I didn’t exactly think it was a tulip. And then I saw the stem and leaves.
68. Now this kite is quite an angler.
Well, that’s just the kite shape for a fish design. Helps if the fish has rainbow colors, too.
69. Hope you enjoy some bears from the sky.
No, they’re not the Care Bears. But they’re just as cute and cuddly.
70. Some kites take to the wind better than others.
Guess this is one of those sport kites. Still, when the wind blows, it probably moves in a wondrous way with the air.
71. This blue kite almost blends in with the sky.
That is, unless it’s being flown in a Chinese city. Nevertheless, I think it will fly quite nicely.
72. An 8-pointed star can have its own colorful ring.
Well, it’s a lovely design. Still, it probably makes an impression in the skies just the same.
73. 4th of July kites should be in stars and stripes.
However, it’s best you keep them away from fireworks. Or power lines for that matter.
74. This centipede really loves to show off its legs.
Yes, they have insect kites, too. But this centipede’s legs surely stun.
75. You’d almost think this kite is from another world.
If it glows in the dark, you can use it to prank your neighbors. Then again, maybe not.
76. A kite can never have too many propellers.
Then again, it probably can. Nevertheless, since it’s a very unique design, it goes on the post.
77. With all these planes, you’d think there was a whole squadron.
Relax, these are simply made planes all strung together. And they’re all in light blue and lavender.
78. For some reason, seems like I’ve seen a ghost.
Then again, it might be a ghost. Or it might be some other mythical Asian creature. Not sure which.
79. Wonder what this large insect is supposed to be.
Then again, it certainly has very colorful wings. And the bug has a whimsical grin.
80. Stick limbs don’t keep these triangle folks from flying.
Well, these do seem rather aerodynamic. Also, like their outfits.
81. You never know what you’d find on a kite line.
Though these people seem to have a more conceptual design. Nevertheless, each has a unique charm.
82. With this kite, you can spread the love.
Perhaps we should have a heart kite in every place. Sure it might seem mushy, but we all need some love in our lives.
83. Intricate designs can go together like birds of a feather.
Each of these is made in a ring with a square center. All in all, they’re lovely.
84. Now this is a whale of a kite you’d find in the air.
Wouldn’t want your kite swallowed in that. Still, it’s kind of a sight to see.
85. Nothing amazes you like a kite ring in the sky.
Yes, it’s certainly spectacular. Like how it’s near other kites as well. Love it.
86. This kite will surely light the way for you.
Yes, I know a lighthouse kite is strange. But so are fish, whale, and shark kites, too.
87. Say hello to a spiked ball in the sky.
Never imagined seeing a kite like this. Though I’m not sure about the rainbow spikes on it.
88. A dragon kite should fly in a fiery blaze.
No wonder people love dragons. Still, looks amazing in the sky.
89. A peacock kite always has a fine feather display.
After all, peacocks are beautiful birds. Though I’d prefer to use a fancier peacock kite for this post.
90. Nobody could resist this little bug.
This one is really cute. Love the beady little eyes and fancy body.
91. A hexagon box kite is as good as any other.
Most box kites are square. But this one is a hexagon, which lets it show off all the rainbow colors.
92. A rectangular kite can sometimes serve as an artistic canvas.
This one depicts Japanese art as you can see. Nevertheless, it looks amazing in the sky.
93. Butterflies always grace the sky with their presence.
This one has the rainbow colors melting in with it. So beautiful. Love it.
94. A dragonfly kite always delights.
It’s not as glamorous as a butterfly. But you can always do worse.
95. A ghostly Flying Dutchman always haunts the sky.
A Flying Dutchman is a ghost ship that’s doomed to sail the ocean forever and can never make port. Seeing one is an omen of doom.
96. Always helps if a rainbow kite comes with a tail.
I call this design the sting ray. Mostly because it resembles a ray. And a mere ray doesn’t capture the image for me.
97. Seems like we find ourselves a rather happy manta ray.
Now a manta ray is a larger ray which isn’t poisonous. And they don’t usually come in rainbow colors either.
98. Check out this fancy bird in the skies.
This is a traditional Chinese style kite of a bird of prey. And it’s one of the fanciest bird kites I’ve ever seen.
99. Hope you don’t fly this kite too close to the sun.
This is an Icarus kite based on Greek mythology. Of course, he probably didn’t wear a shirt and a pair of pants.
100. With this kite, you’ll always have lift off.
This kite is of the space shuttle which NASA no longer uses. However, it’s still pretty cool.
So you’ve spotted or heard a bird in your garden, and would like to work out what it is. Whether you’re a beginner birdwatcher, or an advanced one that’s refreshing your knowledge, our guide to garden birds will help you identify what you’ve seen or heard, including how to differentiate between males and females where possible.
We’ve also included a bit more information about each bird, so you can learn about what food they eat both in the wild and in your garden, or fascinating insights into their behaviour.
In this expert guide, we have covered some of the most common garden birds you’re likely to spot, but each garden is different and you may have other avian visitors, such as siskins, redpolls, blackcaps and more.
If you’d like to learn more, discover nine of the best British bird identification books in this article by BBC Countryfile Magazine.
How to attract birds to your garden
An easy way to attract more bird species to your garden is to offer food and shelter. Installing a nestbox (and providing nesting materials) can be invaluable for your garden birds, encouraging them to set up home in your plot.
Simple tips such as hanging bird feeders (such as a pine cone feeder, an orange bird feeder, or a fat cake) and providing access to clean drinking water (such as a bird bath or bird splash) can also encourage them to visit when food is scarce. Countryfile.com have a handy garden bird care guide if you’d like more advice.
Our wildlife gardening guides are also a useful source of advice.
How to identify common garden bird species
Common blackbird (Turdus merula)
In the UK and British Isles, the common blackbird is usually just referred to as the blackbird. A familiar bird to many, the blackbird can be found in a variety of habitats, including gardens, and can be easily identified by its black (male, pictured above) or dark brown (female, pictured below) plumage, and its yellow eye-ring and bill. Juveniles are a warm brown with spotted plumage.
Blackbirds are found across the UK and are resident all year round. In winter, our resident birds are joined by migrant birds from Fennoscandia (also known as Fenno-Scandinavia, consisting of the Scandinavian and Kola peninsulas, Finland, and Karelia).
The male’s song is a fluty, melodic whistle. The song is sometimes heard after dark, as birds sing under artificial light. Calls are very loud and involve various clucking and rattling sounds.
Blackbirds belong to a group of birds called the thrushes (all in the Turdus genus), which also includes song and mistle thrushes (both discussed below), redwings, fieldfares, and ring ouzels.
Song thrush (Turdus philomelos)
It is very easy to confuse a song thrush and a mistle thrush (below). Song thrushes are smaller and generally have warmer brown tones on their upperparts, whereas the mistle thrush is paler. The key, however, is in the spots. The spots on a song thrush’s underparts look more like inverted ‘arrowheads’, sparsely spread on the breast and flanks.
There are many reasons that people love song thrushes, but its song has made it a well-known part of the countryside. While the song is less rich than that of a blackbird, its bold, bell-like clarity has a penetrating quality. It consists of a series of phrases with many repeated three times in succession, making it a recognisable song.
The song thrush used to be a very common bird in the UK. In fact, in the early 20th century it was more abundant than the blackbird. However, since the 1940s, blackbirds have flourished, making them a more familiar bird these days.
This may be due to the reduced availability of snails and earthworms, particularly in gardens and farmland where pesticides are used. Other causes suggested for the overall decline include changes to farming practices, land drainage and woodland management.
Mistle thrush (Turdus viscivorus)
While song thrushes are more common in gardens, mistle thrushes use them too, especially during the autumn and winter. Their diet mostly consists of invertebrates and berries, but they will come into gardens for windfall apples.
Mistle thrushes are noticeably larger than song thrushes, with a longer tail. They have pale grey-brown upperparts, and their white underparts are heavily spotted, with the spots on the belly and flanks more rounded in appearance.
One of the earliest breeders of the year, mistle thrushes will lay eggs as early as the end of February. This doesn’t mean that they stop early though, as each pair rears up to three broods of chicks, and may continue through to the end of June.
Mistle thrushes have a rattling call, particularly when alarmed or disturbed.
Woodpigeon (Columba palumbus)
Male and female woodpigeons look alike, and adults can be told from other pigeons by the large white patches on the sides of the neck, below smudge of iridescent green. Juveniles lack the white neck patches.
Woodpigeons eat seeds, leaves and fruit (especially ivy berries), plus buds and various agricultural crops. Woodpigeons are able to take in a lot of food in a sitting because they can store it in a crop, which is a muscular pouch in the upper chest.
The weight of food in a woodpigeon’s crop is often over 70g, and weights of up to 155g have been recorded, including crops containing 725 ivy berries, 758 wheat grains, 30 cherries and 38 acorns. Once all this food has been taken on board the bird has to sit and rest while digestion takes place; this is why woodpigeons spend much time perched and inactive.
Eurasian collared dove (Streptopelia decaocto)
Collared doves are smaller and more delicate-looking than woodpigeons, with creamy grey-buff plumage. Adults have a black half-collar on the back of their necks. Their typical call is a clear and persistent three note ‘coo coo cuk’ which some people think sounds like ‘un-i-ted’.
The Eurasian collared dove bred for the first time in Britain in 1955 in Norfolk. Before 1930 it was confined to Turkey and the Balkans in Europe, although it was found as far east as China. In the next 20 years, it rapidly expanded its range northwest, quickly colonising most of Europe, and now lives north of the Arctic circle in Norway and as far south as Morocco and the Canary Islands.
So collared doves have only lived and bred in the UK for a few decades, but they weren’t introduced – they spread to new areas on their own, as their young have a tendency to disperse far and wide.
One factor behind the collared dove’s success is its ability to breed year-round if the weather is mild. They may also start a new nest before the previous young are independent, with the female using breaks from incubation to feed recently fledged offspring.
European robin (Erithacus rubecula)
The robin’s red breast is part of what endears it to us, providing a welcome flash of colour on a winter’s day. But its evolutionary purpose is for a more serious role, with male robins using it to settle territorial disputes, especially during the breeding season.
Robins are very territorial birds and will viciously attack other robins that stray onto their patch. A dispute starts with males singing at each other, trying to get a higher perch in order to show off their breast most effectively. This usually ends the challenge, with one individual deferring to the other.
Sometimes it can escalate to a fight, which can result in injury or death.
Robins eat a wide variety of food, including worms, seeds, nuts, suet, invertebrates and fruit. They’ll readily come to garden bird tables, especially in winter, and a combination of suet, mealworms and seeds will go down particularly well.
Dunnock (Prunella modularis)
Also known as ‘hedge sparrows’, dunnocks are commonly mistaken for female house sparrows. Dunnocks are streaked black and brown above and below, with a lead-grey head. They have fine, dark bills. Juvenile dunnocks have pale bills and light speckled feathers, but they soon develop their adult features.
Dunnock song consists of a series of undistinguished rushed phrases, less melodious than a robin. Their alarm call is a single, flat piping note.
Dunnock nests are often targeted by cuckoos in the countryside. Despite the dunnocks’ bright blue eggs looking quite different from the pale, speckled egg of the cuckoo, it doesn’t seem that dunnocks are capable of ejecting these ‘imposter’ eggs. This suggests that dunnocks are a relatively recent host that hasn’t evolved a response to this act of parasitism.
House sparrow (Passer domesticus)
Male house sparrows have a black bib, black face mask and a chestnut brown head with a grey crown. They also have a broad white wing bar.
Female and juvenile house sparrows are dusky brown with greyish-white undersides and dull-brown, but streaked backs. They lack the black bib of the male and have pale brown crowns with a buff line above the eye.
House sparrows mate for life, like many bird species. But they are fairly unusual in that the male and female live in each other’s pockets all year round. This model of stability is echoed in their wider social life.
Each pair lives within a loose colony of, say, 10–20 birds, and all of the pairs and unattached members of the colony know each other well and undertake many of life’s chores – foraging, preening, dust-bathing and roosting – together.
England’s house sparrow population fell by 70% between 1977 and 2016, and have declined in gardens since BTO Garden BirdWatch began. Although reasons behind this decline are not known, the availability of food and nest sites in urban areas is thought to be a significant contributing factor, and one study showed that 74% of the house sparrows tested in London carried avian malaria.
Wren (Troglodytes troglodytes)
The wren is a familiar but fleeting sight in our gardens, flitting beneath the undergrowth in search of insects, recognisable by its tiny size, brown body and the fact that it often holds its short, stubby tail erect. When seen close, it has a long bill and pale-coloured line above the eye.
While wrens generally don’t use feeding stations, when the weather is particularly severe they will be tempted by high energy foods such as fat cakes. You can also scatter small cake crumbs or cheese around the bases of bushes so that they don’t have to come out from cover to feed.
You’re more likely to hear a wren than see it. They have a surprisingly loud song which is well-structured and consists of a series of clear, but shrill notes. They also have a familiar scolding alarm call consisting of a rapid chittering.
The wren’s scientific name, Troglodytes troglodytes, is a tautonym, where the genus and specific name are the same.
Goldcrest (Regulus regulus)
The goldcrest weighs around 5g, about the same as a 5p coin. Associated with coniferous forests, goldcrests are almost entirely insectivorous and make the most of their light weight by foraging in places where larger birds can’t go, like at the very end of small branches. It is similar in appearance to the firecrest, and one of the key differences to look out for is whether there is a black stripe across the eye (present = firecrest, absent = goldcrest).
Given their size, it is hard to believe that some goldcrests migrate to the UK from northern Europe, making them one of the lightest bird species in the world to migrate over the sea.
Your chance of seeing a goldcrest increases during the spring and autumn when migrants are passing through, but goldcrests are found in gardens all year round according to the BTO Garden BirdWatch survey.
However, as they generally ignore our bird feeders, we rarely see them. During winter they need to keep their energy reserves up, so if you’re lucky you might see one feeding on a fat ball.
The goldcrest’s scientific name, Regulus regulus, is a tautonym, where the genus and specific name are the same.
Great tit (Parus major)
Great tits are easy to identify. They have a black head and chin, white cheeks and a yellow breast.
The key marking to look out is the black stripe running down their front. This is also how you can identify the sex of the bird. The stripe on females narrows and stops halfway down the belly whereas on males it extends between the legs. The bolder and wider the stripe on the male, the more dominant the bird.
Great tits thrive on seeds in the winter, and caterpillars during the breeding season. To help with the change from seed to caterpillar, the shape of great tit bills undergoes a subtle change between seasons, becoming more suitable to each food type. When feeding on caterpillars, the bills become longer and less deep, and when they start to feed on seeds, the bills become shorter and deeper.
The basic song of a great tit is easy to learn. You’ll hear them singing two notes over and over again saying ‘tea-cher’.
Blue tit (Cyanistes caeruleus)
Blue tits have small blue caps with a white head and a black eye-stripe. They also have yellow underparts and a bluish back, becoming brighter blue at the wings. Male and female look similar but juveniles are duller in appearance, with no blue cap or white cheeks.
The song of blue tits is an urgent sounding trill and they have various scolding calls.
Blue tits eat insects and spiders. They will also take fruit and seeds in winter, and they regularly visit garden bird feeders.
Coal tit (Periparus ater)
If the only tits that you can confidently identify are blue and great tits, you are not alone. Marsh, willow and coal tits all have similar colouring making them harder to tell apart. But out of the three, the coal tit is the easiest to spot.
It has a larger black bib than the others, giving it obvious white cheeks. If you wait for the bird to turn around, there is one key giveaway – if it has a distinct white stripe running down the back of its head, it is a coal tit.
Due to their small size (they weigh about the same as a fifty pence coin) they struggle to defend feeding perches against bigger, more dominant species. Therefore, they regularly take their food ‘to go’, eating in nearby bushes and shrubs.
Long-tailed tit (Aegithalos caudatus)
Long-tailed tits are very small, mainly black and white birds, which have almost spherical bodies and an oversized tail. In flight, these proportions make the birds look like lollipops undulating through the air.
Also, look out for attractive, pinkish markings on the breast and pink eye rings. Both sexes look alike.
Balls of these tumbling, see-sawing birds bounce from one garden to the next during winter, their high-pitched, rolling si-si-si-si-si calls, punctuated with more percussive, clipped notes, announcing their arrival.
These vocalisations help flock members, which tend to be close relatives, keep in touch with each other as they move restlessly through trees and bushes, gleaning invertebrates and dropping down onto garden feeders.
The small beak of the long-tailed tit is not proficient at handling large seeds. However, this species will swarm over suet-based products, which provide a quick calorific hit. Small seeds, bread crumbs, finely grated cheese and peanut fragments will also be taken.
Nuthatch (Sitta europaea)
To identify a nuthatch look out for steel grey upper parts and a rusty buff colour below. Males are dark red-brown around the flanks and vent. Nuthatches are small, about the size of great tits but have a profile similar to that of a woodpecker.
The nuthatch is a bird full of character, a brash and often noisy interloper at garden feeding stations, whose unseemly behaviour has its origins in the need to find food.
During autumn and winter, nuts, seeds and invertebrates form an important part of the diet of nuthatches. At garden feeding stations, peanuts and sunflower hearts are particularly popular.
Greenfinch (Chloris chloris)
Greenfinches are about the size of house sparrows. Males are dull-olive green with greenish-yellow on the breast and rump, and have bright yellow wing flashes. Females are duller in appearance, with less yellow in their plumage. Juveniles are paler overall, and have streaked plumage.
Greenfinches rely on seeds and their large bills allow them to be able to eat a wide variety. They show some preference for seeds held within fleshy fruits, such as rosehips, though often ignore the fruit and just eat the seed.
During the autumn, seeds from yew and hawthorn are important, and in the winter, bramble. In gardens, they like sunflower seeds, whether they are black or hearts.
The greenfinch’s scientific name, Chloris chloris, is a tautonym, where the genus and specific name are the same.
European goldfinch (Carduelis carduelis)
Adult goldfinches have red, clown-like faces with buff breast and shoulders. Their wings are mainly black, with white spots but are characterised by a broad, gold wing-bar on each wing.
Sexes look alike, although the red mask of the male, not the female, extends slightly past the eye towards the back of the head. Recently fledged birds do not have a red face, which moults through during late summer and autumn.
At garden feeding stations, goldfinches are particularly partial to small, oil-rich seeds, like nyger seed and sunflower hearts. These foods have become much more widespread in gardens over recent decades, attracting goldfinches in from the surrounding countryside.
Gardeners can also help goldfinches by fostering a number of plant species, mostly in the family Asteraceae, such as groundsels, which provide alternative ‘natural’ foods.
Teasels are another favourite – goldfinches are the only UK finch with a long enough beak to be able to extract its seeds.
The goldfinch’s scientific name, Carduelis carduelis, is a tautonym, where the genus and specific name are the same.
Common chaffinch (Fringilla coelebs)
The male chaffinch (above) is unmistakeably handsome with a blue-grey cap, pink cheeks and breast and a reddish-brown mantle. Females and juveniles are much duller, consisting of grey-brown upper-parts and dull greyish-white underparts. All chaffinches, however, have distinct wing bars.
The chaffinch is one of the most common bird species in the UK and one of the top 10 most reported birds in Garden BirdWatch gardens. In Britain, the highest breeding densities are found in southern, central and eastern England, and on upland edges in northern England and Scotland.
Bullfinch (Pyrrhula pyrrhula)
The bullfinch is a medium-sized finch that has quite a round body, with a large robust bill. Males and females have a black cap that extends forward around the bill, a grey back, black wings with a grey-white wing bar, a black tail and a white rump. However, while the female and juveniles are pinkish-grey, the male stands out with rose-red underparts.
You are much more likely to see a bullfinch than to hear one. They have a very soft and very subtle call which is a low, short whistle ‘peu’.
The male breeding song is very quiet as well, consisting of a descending series of notes, repeated at intervals. In addition, they are a skilful mimic and were popular cage birds at one point, with people determined to teach them different tunes that were played to them.
The bullfinch’s scientific name, Pyrrhula pyrrhula, is a tautonym, where the genus and specific name are the same.
Eurasian sparrowhawk (Accipiter nisus)
Sparrowhawks are small, broad-winged raptors with long tails and long, thin yellow legs. Adult males have slate-grey upperparts and fine rufous barring underneath. Females have brownish-grey upperparts and less rufous barring than the male. They have a more prominent white line above the eye.
While the sparrowhawk is now one of the most widespread birds of prey in Britain, until a few decades ago it was more or less extinct in many eastern counties. This was partly due to persecution, but also due to pesticide use in agriculture. As well as killing individuals, the accumulation of harmful compounds thinned their eggshells, which increased breakages during incubation. This led to a population decline.
Sparrowhawks rely on the element of surprise and as such will often follow a regular route to get close to potential prey, which in gardens means using the cover of a hedge or shed. By moving feeding stations around your garden and keeping them close to cover, you can reduce the chance of a sparrowhawk attack.
Eurasian starling (Sturnus vulgaris)
While starlings appear black at a distance, close up they have glossy green and purple iridescent plumage. In the breeding season, adults have yellow bills with different colour bases depending on their sex; in males this is blue and in females pink.
Quite a few garden birdwatchers complain about starlings because they seem to clean out a feeding station in minutes. Starlings do this as they evolved to feed quickly in flocks, rather than because they are greedy. It’s not their fault but it can get expensive so if this is a problem, try providing food, especially fat products, in feeders that exclude larger birds.
Great spotted woodpecker (Dendrocopos major)
Identifying a great spotted woodpecker is relatively easy. There are just two black and white woodpeckers in Britain: the great spotted woodpecker and the lesser spotted woodpecker.
The former are about the size of starlings and are fairly widespread, while the latter are only about the size of greenfinches and are worryingly – and increasingly – scarce.
Great spotted woodpeckers take a wide variety of foods in gardens, happily hammering peanuts, chiselling suet-blocks or clearing trays of mealworms. As long as they can get a good grip on a feeder and can reach the food inside, they will sample most things.
Green woodpecker (Picus viridis)
Male and female green woodpeckers look similar, but adult males will have a lot of red in the moustachial stripe, while there is none in that of an adult female. All ages and sexes have bright green plumage with yellow rumps and red caps, but in juvenile green woodpeckers, the plumage is streaked with grey.
If you are lucky enough to have green woodpeckers visiting your garden, then you will most likely have seen them on the lawn. This is because the green woodpecker diet consists mainly of ants – adults, larvae and eggs. They will eat other invertebrates, pine seeds and fruit, but usually only in the winter when ants become increasingly hard to find.
Green woodpeckers are very vocal and have a recognisable loud, laughing call known as a ‘yaffle’, which is often the only way you know a green woodpecker is nearby, as they tend to be quite wary birds. The yaffling is by far the most distinctive sound that green woodpeckers make, but you could also hear their song, which is a series of slightly accelerating ‘klü’ sounds.
Eurasian magpie (Pica pica)
With its pied (black and white) body, blue wings and long tail, the magpie is a very distinctive bird and one of the easiest corvid species to identify.
The magpie’s scientific name, Pica pica, is a tautonym, where the genus and specific name are the same.
Eurasian jay (Garrulus glandarius)
The jay is a very colourful bird with a pinkish body, black and white wings with a patch of bright blue, a white rump, black tail, a pale streaked crown, black ‘moustache’ and pale chin. Despite this unmistakable, gaudy plumage, these members of the corvid family are more often heard than seen. Their Welsh name, Ysgrech y Coed, means ‘shrieker of the woods’.
Jays are well-known for caching food, particularly acorns, to eat at a later date – usually burying them in autumn, and retrieving them in winter.
Jays are skilful mimics of other birds and animals. When threatened, they are likely to imitate the calls of tawny owls, sparrowhawks and even domestic cats.
The British Trust for Ornithology (BTO) is a UK charity that focuses on understanding birds and, in particular, how and why bird populations are changing. Our vision is of a world where people are inspired by birds and informed by science.
Main image: Blue tit perched on the branch of a spring flowering crab apple tree © Jacky Parker Photography/Moment/Getty | <urn:uuid:9597f413-1649-486c-94fd-bd7753c3553f> | CC-MAIN-2021-21 | https://www.discoverwildlife.com/animal-facts/birds/british-garden-birds-guide-how-to-identify-different-species-and-attract-them-to-your-garden/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989018.90/warc/CC-MAIN-20210509213453-20210510003453-00574.warc.gz | en | 0.957474 | 5,679 | 3.421875 | 3 |
Dialogue tags can cause headaches for many authors, but it doesn’t have to be that way. Dialogue tags are (usually) essential when writing fiction, and good use can really elevate the prose. Perhaps the most important thing to consider here is that the core function of a dialogue tag is to indicate which character is speaking. It will be helpful to keep this in mind as we explore my advice on how to use dialogue tags – and how to use them effectively.
Dialogue tags can be used before, after, or in the middle of direct speech. Here’s an example of each style:
Melissa said, ‘That’s my chocolate cake.’
‘That’s my chocolate cake,’ said Melissa.
‘That,’ Melissa said, ‘is my chocolate cake.’
Note the placement of the commas and full stops in relation to the quote marks. I’m sure you will also have detected the pacing change brought about by the third example (helped along by the removal of the contraction) – this is a good tool to have in your pocket, especially when you want to create emphasis.
I know there is writing advice out there that will tell you to avoid ‘said’. I think that’s a mistake. ‘Said’ is so common, so conventional, that it is almost invisible to most readers. That’s what you want – most of the time. We don’t want the reader to be thinking about the dialogue tags – we want them to be thinking about the content of the dialogue and what it means for the story and the characters.
[…] too many variants on ‘said’ can become noticeable; the reader ends up focusing on the author’s language, rather than on what’s being said. ‘Said’, on the other hand, is so commonly used in both speech and writing that it’s virtually invisible. On Editing: How to edit your novel the professional way (2018), Helen Corner-Bryant and Kathryn Price, p. 125
‘Said’ might be essentially invisible when used sparingly but that doesn’t mean that it should be attached to every single bit of speech. Take this as an example:
‘I’ve put the pie in the oven,’ he said.
‘Thank you,’ she said.
‘I think the blueberries will be nice,’ he said.
‘Yes. I’ll make some custard to go with it,’ she said.
I know that’s not the most riveting dialogue, but it gives you an idea of how dreary and frustrating that sort of use can be. It wouldn’t be much better if we used a variety of tags. In fact, it might be worse. Imagine:
‘I’ve put the pie in the oven,’ he stated.
‘Thank you,’ she responded.
‘I think the blueberries will be nice,’ he declared.
‘Yes. I’ll make some custard to go with it,’ she trilled.
A good way to spot if your use of dialogue tags has become distracting is to read the text aloud or use a text-to-speech program (the latest version of Word has one built in – it’s called Read Aloud and you can find it on the Review tab).
As I said earlier, the core function of a dialogue tag is to indicate which character is speaking. If there are no more than two characters in the scene, you can usually trust the reader to keep track of who is saying what, with only the occasional tag or action beat to act as a reminder. Limit your use of tags to where they are actually needed.
Beware of double-tell
If the dialogue tag is repeating what the reader already knows, that is double-tell. You can get away with this, usually, with ‘asked’ and ‘replied’ – like ‘said’, they are so common that they are generally invisible.
‘What an incredible sight!’ Joey exclaimed.
‘I will make sure that we find the culprit. You have my word,’ Emma promised.
In both of these examples, the dialogue has done the work already and the tag is redundant. Trust in the dialogue you have constructed and reduce the signposting for the reader – it will make for a much more immersive experience.
Avoid tags that steal focus
Many double-tell tags also fall into this category: dialogue tags that are obtrusive and overwhelm the dialogue they are supposed to be supporting.
‘You see! I told you he was a villain!’ Gregory trumpeted.
‘I am extremely displeased! Who do you think you are? I will have your job, Perkins,’ he vociferated.
The reader is now thinking about ‘vociferated’ and not about how poor old Perkins is going to get out of the pickle he finds himself in.
You don’t have to – and shouldn’t – always stick with the reliably unobtrusive ‘said’, but I would recommend thinking very carefully about whether you need to use those more ostentatious tags. Why use ‘vociferated’ when ‘shouted’ will do?
Make sure they are about speaking
Dialogue tags should be about the mechanics of speaking – they should reflect something about the speech, not what the speaker’s body is doing.
‘I absolutely love it,’ she smiled.
‘I see what you mean,’ Gareth nodded.
That’s not to say that you shouldn’t convey this sort of thing around the dialogue – in fact, you definitely should. It will help ground the dialogue and bring life to your characters.
‘I absolutely love it.’ She smiled.
‘I see what you mean,’ Gareth replied, nodding.
Here are some other words it’s best to avoid using as dialogue tags: laughed, snorted, sneered, giggled, frowned, grinned, gesticulated, wept, glowered, smirked, gulped, shrugged, swaggered.
There are some that I consider occasionally acceptable in very limited circumstances (although other editors would disagree), such as laugh and sigh. You certainly can’t laugh or sigh a whole sentence, but you might be able to do it for a single word.
‘Yes,’ Penelope sighed. ‘We got the news.’
Why does it matter?
Dialogue tags are mechanical – they exist to serve the story by indicating who is speaking. They are essential for good writing, but if they overwhelm or distract from the dialogue, they can damage the storytelling. The aim to keep in mind is that the dialogue tags should support the dialogue and allow the reader to remain immersed in the experience you have created for them.
Punctuating dialogue can feel a lot more difficult than it is. It is probably one of the things I spend the most time on when I’m editing. But once you understand the basic principles, you should be able to wield punctuation confidently and effectively in your dialogue. Here’s my advice on the following:
Quote marks (including nested quote marks)
Styling pauses or trailing off
Styling interrupted speech
Punctuating tagged speech
Punctuating broken-up speech
Punctuating vocative expressions
Indicating faltering speech
Authors usually indicate speech by using quote marks (also called quotation marks or speech marks) and that is the method that will be most familiar to the reader. It’s usual in UK fiction to use single quote marks, while US fiction tends to use double quote marks. (UK children’s fiction does often use double quote marks, though.) Of course, this is a style choice and not a matter of right and wrong. However, it is worth considering what the reader will expect to see. Whichever you choose, the key is to be consistent.
Here are the two styles in action in published novels:
Single quote marks – The Watchmaker of Filigree Street (2016) by Natasha Pulley, p. 111:
‘I … my God, you were serious?’
‘Quite.’
‘Thank you.’
Double quote marks – Children of Blood and Bone (2018) by Tomi Adeyemi, p. 369:
“What would Baba say?”
“Leave Baba out of this—”
“Or Mama?”
“Shut up!”
It’s worth pointing out the placement of the closing punctuation here – it sits within the quote marks.
Nested quote marks
If you need to place speech within speech, use the opposite style for the internal (nested) quote marks. It should look like this:
Single quote marks with nested doubles – Good Omens (2014 edition) by Terry Pratchett & Neil Gaiman, p. 320:
‘One of those blue ones,’ said Brian, eventually, ‘saying “Adam Young Lived Here”, or somethin’?’
Double quote marks with nested singles – The Starless Sea (2019) by Erin Morgenstern, p. 333:
“‘It’s dangerous to go alone,’” Zachary quotes in response […]
It is typical in publishing to use smart (curly) quote marks, not unidirectional (straight) ones. The same thing applies to apostrophes.
You can make sure that Word always produces curly quotes when you are typing. This is how you do it in the latest version of Word:
Go to File and scroll down to Options
Click on Proofing and find the AutoCorrect Options… button (it should be at the top)
Make sure there’s a tick in the box next to “Straight quotes” with “smart quotes”
If it’s too late and you already have a manuscript full of straight quotes, you can change them quickly by doing a global Find and Replace. There are two ways to access this. On the Home tab, select Replace, or simply hit Ctrl and H on your keyboard. (If your manuscript lives outside Word, there’s a scripted alternative sketched after these steps.)
Type a quote mark into the Find what box
Type the same quote mark into the Replace with box
Hit the Replace All button
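If you’d rather batch-process files outside Word, the same clean-up can be approximated with a short script. What follows is a minimal, illustrative Python sketch of my own – the function name and the opening/closing heuristic are assumptions, not a Word feature – and the heuristic will stumble over edge cases such as ’Tis, ’90s or quotes butted directly against other quotes, so always proofread the result.

```python
import re

# Smart-quote characters
LSQUO, RSQUO = "\u2018", "\u2019"  # opening/closing single quotes
LDQUO, RDQUO = "\u201C", "\u201D"  # opening/closing double quotes

def smarten_quotes(text: str) -> str:
    """Heuristically convert straight quotes to curly (smart) quotes."""
    # 1. Apostrophes inside words: don't -> don't with a curly apostrophe.
    text = re.sub(r"(\w)'(\w)", rf"\1{RSQUO}\2", text)
    # 2. Quotes at the start of the text, or after whitespace or an
    #    opening bracket, are treated as opening quotes.
    text = re.sub(r'(^|[\s(\[])"', rf"\1{LDQUO}", text)
    text = re.sub(r"(^|[\s(\[])'", rf"\1{LSQUO}", text)
    # 3. Any straight quote still left is assumed to be closing.
    return text.replace('"', RDQUO).replace("'", RSQUO)

print(smarten_quotes('"That\'s my chocolate cake," said Melissa.'))
# -> “That’s my chocolate cake,” said Melissa.
```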
New paragraphs within speech
If a character’s speech moves on to a new paragraph, use an opening quote mark at the start of the new line but don’t use a closing quote mark at the end of the previous paragraph.
The Bone Season (2017 edition) by Samantha Shannon, p. 414:
‘[…] One of our clairvoyants has displayed such disobedience that she cannot be allowed to live. Like the Bloody King, she must be banished beyond the reach of the amaurotic population, where she can do no more harm.
‘XX-59-40 has a history of treachery. She hails from the dairy county of Tipperary, deep in the south of Ireland – a region long since associated with sedition.’
The em dash
There is another way to display speech. It’s rarely done and it can be difficult to use effectively, especially if there are more than two speakers. But it is an option. Roddy Doyle is well known for using this style. Here’s an example from page 1 of The Guts, 2013:
—How’s it goin’?
—Da?
—Yeah, me.
—How are yeh?
—Not too bad. I’m after gettin’ one o’ the mobiles.
Styling pauses or trailing off
An ellipsis is the best way to indicate a pause or that speech is trailing off.
The Lost Future of Pepperharrow (2020) by Natasha Pulley, p. 134: ‘I think I’m normally mannered, but Mr Vaulker seems like a …’ You couldn’t really say ‘unreserved cock’ in Japanese. ‘Difficult person. May I sit down?’
When you use an ellipsis in this way, you don’t need to tell the reader that the character’s speech trailed off. The ellipsis is telling the reader that, and the reader can be trusted to interpret it.
You can use the fixed ellipsis symbol or three full stops (or periods) separated by non-breaking spaces. It’s a matter of style. But I prefer the fixed symbol – it’s simple to insert.
You can insert the fixed-symbol ellipsis by holding down the Alt key and typing 0133 on the numeric keypad. You can also find it by going to Word’s Insert tab and selecting Symbol and then More Symbols…
If you opt to use three separate full stops, put non-breaking spaces between them. Non-breaking spaces keep together the elements they are placed between – you won’t end up with one full stop on the line below the other two. You can create a non-breaking space by pressing Ctrl+Shift+Space.
Spacing around ellipses is a matter of style. As with most things, it is just important to be consistent and clear. However, this is how it is typically done:
Mid-sentence ellipsis (space both sides): ‘No … I don’t think that was your gateau.’
Ellipsis at the beginning of a sentence (space after): ‘… I thought I put it there.’
Ellipsis at the end of a sentence (space before): ‘They might have another one at the bakery …’
Styling interrupted speech
If you want to show that a character has been interrupted while speaking, the way to do that is to use an em dash.
Six of Crows (2018 edition) by Leigh Bardugo, p. 481:
“I didn’t tell Pekka Rollins anything. I never—”
“You told one of the Dime Lions you were leaving Kerch, but that you’d be coming into big money, didn’t you?”
Jesper swallowed. “I had to. They were leaning on me hard. My father’s farm—”
“I told you not to tell anyone you were leaving the country. I warned you to keep your mouth shut.”
The use of the em dash means you don’t have to tell the reader that the character was interrupted. The punctuation does the job for you. You can also use the em dash to signify a sort of self-interruption, where the speaker breaks off suddenly for some reason.
Punctuating tagged speech
The comma is your friend here. Unless you use an exclamation mark or a question mark, a comma will do the job before a dialogue tag. Here are a few examples:
Dialogue tag after a complete sentence
‘I put the milk in the fridge,’ she said.
Dialogue tag after a question
‘Did you put the milk in the fridge?’ she asked.
Dialogue tag after an exclamation
‘There’s no milk in the fridge!’ she yelled.
Dialogue tag before a complete sentence
She whispered, ‘I don’t even like milk.’
Dialogue tags always take lower case (not an initial capital letter) – it doesn’t matter if the closing punctuation is a comma or a question mark or an exclamation mark. Also note the positioning of the comma before a dialogue tag – it is placed within the quote marks.
Punctuating broken-up speech
If a character hasn’t finished speaking, but you’ve broken up their speech with a dialogue tag, action beat, or stage direction, you should indicate this using commas or dashes.
Spellslinger (2017) by Sebastien de Castell, p. 387:
‘Perhaps,’ An’atria said, her dark eyes peering out from a thick halo of grey hair as she stared at me, ‘but do we still pretend this one comes to pass his mage’s trial?’
It’s typical to add a comma before the first closing quote mark and after the speech tag or additional material.
If you want to break up the speech with description of some kind, rather than a dialogue tag, it is often effective to use dashes. Dashes are a useful way to indicate that an action is taking place at the same time as the speech. US publishing tends to use closed up em dashes; UK publishing tends to use spaced en dashes.
The Long Way to a Small, Angry Planet (2015) by Becky Chambers, p. 31:
‘[…] On a long haul, this’ – she tapped the top of Rosemary’s head – ‘needs to be the most important thing you take care of.’
Punctuating vocative expressions
Vocative expressions identify who is being addressed. It’s not always a character’s proper name – it could be a title or a term of endearment, or something less pleasant. Commas are used to make it clear that it is a vocative expression in action, and this is how to do it:
If the vocative expression is at the beginning of the sentence, it needs a comma after it
If the vocative expression is at the end of the sentence, it needs a comma before it
If the vocative expression interrupts the sentence, it needs a comma before and after it
‘Barry, is the pizza here yet?’
‘That’s my slice of pizza, Barry.’
‘If you wanted pepperoni, Barry, you should have ordered it.’
Vocative expressions need to be punctuated correctly to prevent ambiguity. Missing commas lead to sentences like the classic ‘Let’s eat Grandma!’ mistake.
Indicating faltering speech
Sometimes you’ll need to indicate faltering speech – your character is out of breath, scared, surprised… Well, there are lots of ways to style this, and what you choose will depend on the effect you wish to achieve. Options include ellipses, hyphens, repeated letters, en dashes and em dashes.
Ellipses – an effective way to show distress and uncertainty
Crooked Kingdom (2019 edition) by Leigh Bardugo, p. 268:
“I come for job, yes?” Nina said. “To make sugar.”
“We don’t make it here, just store it. You’ll want to go to one of the processing plants.”
“But I need job. I … I …”
“Oh, hey now, don’t cry. There, there.”
Repeated letters – good for conveying fear
Rotherweird (2017) by Andrew Caldecott, p. 230:
Salt took Oblong by both shoulders and shook him twice, firmly.
‘There’s what?’
‘L – l – legs by the lantern …’ stuttered Oblong.
‘Show me.’
Em dashes – great for extreme shock/awe/terror
The Priory of the Orange Tree (2019) by Samantha Shannon, p. 95:
Melaugo was clinging to the ratlines, one eye to a spyglass. ‘Mother of—’ She lowered it, then lifted it again. ‘Plume, it’s— I can’t believe what I’m seeing—’
‘What is it?’ the quartermaster called. ‘Estina?’
‘It’s a— a High Western.’ Her shout was hoarse. ‘A High Western!’
Why does it matter?
Getting the punctuation right allows the reader to concentrate on the content of the dialogue, on what it means for the characters, on how it feels. Fiction can accommodate flexibility with punctuation use, but sticking to the general conventions is often the best way to serve the story and the reader. Our aim is for the reader to stay caught up in the story, not for them to become distracted by the punctuation. | <urn:uuid:0f3f2557-94aa-44e7-b596-509ed546b9b8> | CC-MAIN-2021-21 | https://blackcatedit.com/category/writing-advice/fiction-essentials/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00177.warc.gz | en | 0.933055 | 4,335 | 2.578125 | 3 |
Ghana and Côte d’Ivoire Receive a Strict-Equidistance Boundary
In a unanimous ruling delivered on September 23, 2017, a Special Chamber of the International Tribunal for the Law of the Sea (ITLOS) fixed the course of the single maritime boundary between Ghana and Côte d'Ivoire both within and beyond 200 nautical miles (NM) from their coastlines. Situated in an area in the Gulf of Guinea that is rich in hydrocarbons, the boundary represents a relatively rare "strict," or un-adjusted, equidistance line favoring Ghana. The non-appealable ruling will require adjustments to practically all existing oil and gas blocks along the new line.
In September 2014, following significant hydrocarbon discoveries, particularly in the Deepwater Tano Block containing the TEN (Tweneboa, Enyenra, Ntomme) fields, Ghana, relying on Annex VII of the United Nations Convention on the Law of the Sea (UNCLOS), commenced proceedings against Côte d'Ivoire before an ad hoc arbitral tribunal seeking delimitation of their common maritime boundary in the Atlantic Ocean. The two countries subsequently entered into a Special Agreement transferring the case to a Special Chamber of ITLOS in Hamburg.
The parties not only advanced different methods to fix their maritime boundary, they also held differing views regarding the points along their coasts from which the boundary was to be measured (so-called base points) and the marine charts to be used in the delimitation. While the coasts of Côte d'Ivoire and Ghana are concave and convex, respectively, they otherwise are straight (i.e., not indented) and near-equal in overall length, as Figure 1 below shows.
Fig. 1: Overview of parties' coastal geography (Sketch-map No. 1 from Judgment)
The Parties' Positions
Ghana submitted that both countries in issuing licenses for offshore mineral resource activities had mutually recognized and applied an equidistance-based boundary in the 12-NM territorial sea and their Exclusive Economic Zone (EEZ) and continental shelf within 200 NM from their coasts for more than a half century. Ghana also claimed that the countries' 2009 submissions to the Commission on the Limits of the Continental Shelf (CLCS) pointed to their tacit agreement regarding the boundary beyond 200 NM.
In Ghana's view, the starting-point for the demarcation of the allegedly agreed maritime boundary was the common land boundary terminus at border post 55 (BP55). The geographic coordinates of BP55 had been agreed upon by the two countries nine months before Ghana filed the case against Côte d'Ivoire.
Ghana disagreed with Côte d'Ivoire that the disputed area features geographical circumstances capable of influencing the equidistance-based boundary advocated by it. Ghana submitted that, if the Special Chamber rejected its argument that there was an agreed boundary, the provisional equidistance line to be drawn by the judges should be based on a 10-kilometer segment on Ghana's coast and was to be adjusted to the west on account of the oil activities of both countries. In Ghana's view, those activities, taking the form of oil concession agreements and limits, legislative instruments, maps, and statements by public officials over a fifty-year period, reflect a modus vivendi regarding an equidistance boundary between the parties. The resulting boundary coincides with the western limit of Ghana's offshore oil blocks. Ghana also asked the Special Chamber to rule that the parties' boundary beyond 200 NM follows an extended equidistance boundary along the same azimuth as the boundary within 200 NM, to the limit of national jurisdiction.
Côte d'Ivoire's Position
Côte d'Ivoire denied that there was an expressly agreed or "customary" boundary in place between the two countries. It rejected Ghana's reliance on oil activities in the disputed area and asked the judges to rule that the maritime boundary between Ghana and Côte d'Ivoire follows the 168.7˚ azimuth line, which both parties agreed starts at BP55 and extends to the outer limit of the continental shelf. Côte d'Ivoire relied on the angle bisector method to construct the boundary. Like equidistance, the bisector method is a geometry-based method for delimiting boundaries.
Côte d'Ivoire submitted that application of the bisector method was appropriate in this case in light of the limited number of base points and their location on what it presented as an unstable coastline that is not representative of the overall coastal geography. It also pointed to the orientation of the countries' coastlines and the circumstance that the base points (typically, low-water line points) employed for the construction of the provisional equidistance line are all located on Jomoro on a small portion of the countries' coastlines. Jomoro is a narrow strip of Ghanaian land blocking the seaward projection of part of the Ivorian land mass.
Côte d'Ivoire also complained that the configuration of the narrow coastal segment employed for the construction of the equidistance line advocated by Ghana would cause a cut-off for its maritime area and that the resulting line was contrary to the objective of achieving an "equitable solution," the result dictated by UNCLOS for boundaries in maritime areas lying beyond the territorial sea.
Côte d'Ivoire maintained that the same line could be construed based on either a bisector method or modified equidistance using the equidistance/relevant circumstances method favored by Ghana.
The Course of the Maritime Boundary
The Special Chamber noted that, over time, Côte d'Ivoire had invoked various methods for delimiting the maritime boundary, including the method favored by Ghana, whereas Ghana advocated an equidistance line modified to account for the parties' oil activities. Finding no compelling evidence of a tacit agreement or estoppel regarding the parties' maritime boundary or of a modus vivendi between them affecting the boundary, the Special Chamber observed that both parties agreed, in principle, on the internationally established three-stage approach in applying the equidistance/relevant circumstances methodology invoked by both parties. This approach and methodology can be summarized as follows:
After identifying the relevant coasts with a view to determining the parties' overlapping claims and the relevant area within which the delimitation is to be effected and in which the projections of the parties' coasts overlap, a provisional delimitation line is established, which in the majority of cases has been an equidistance line, by reference to appropriate base points
The provisionally constructed line is examined in the light of equitable factors, called "relevant circumstances," so as to determine whether it is necessary to adjust or shift that line in order to achieve an "equitable solution"
A final proportionality check is applied to verify the equitableness of the tentative delimitation and to ensure that the ultimate result is not tainted by some form of gross disproportion
The Special Chamber concluded that only a portion of the mainland coast of Côte d'Ivoire, measuring 352 kilometers, constituted the relevant coast of which the seaward projection overlaps with Ghana's coastal projection.
The Special Chamber was satisfied that delimitation of the disputed area could be achieved by constructing a provisional equidistance line. Significantly, it identified a starting point and base points for the line that differ from those advanced by the parties. Given that the agreed border post 55 is located some 150 meters from the low-water line from which an equidistance line is to be measured, the Special Chamber, guided by the parties' coastline, fixed the starting point (BP55+) by extending the direction of the land border from border post 54 to BP55 until it reaches the low-water line.
The Special Chamber identified base points by re-digitizing the relevant coastline reflected on British Admiralty Chart 1383, which dates back to the nineteenth century. It then reduced the resulting high number of base points by using, for each party, only those points furthest from and nearest to the land boundary terminus and the points in the middle, yielding five base points for each country. The resulting boundary, which starts from BP55+, features six turning points at which the direction of the line changes and which are connected by geodetic lines. From the southern-most turning point, the equidistance boundary continues as a geodetic line until it reaches the outer limits of the continental shelf beyond 200 NM. Details of the line are presented in Figure 4 below.
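The equidistance construction at the heart of the judgment is conceptually simple: every point on the line is as far from the nearest Ghanaian base point as from the nearest Ivorian one. The sketch below is illustrative only – it uses a spherical-Earth approximation and entirely made-up coordinates, whereas actual delimitation work (for example, in the CARIS LOTS software discussed below) computes geodetic distances on the WGS84 ellipsoid from the parties' true base points.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles (spherical approximation;
    real delimitation work uses geodesics on the WGS84 ellipsoid)."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * r_nm * asin(sqrt(a))

def is_equidistant(point, bps_a, bps_b, tol_nm=0.01):
    """True if `point` is (within tolerance) as far from the nearest base
    point of state A as from the nearest base point of state B."""
    d_a = min(haversine_nm(*point, *bp) for bp in bps_a)
    d_b = min(haversine_nm(*point, *bp) for bp in bps_b)
    return abs(d_a - d_b) <= tol_nm

# Entirely hypothetical (lat, lon) coordinates -- NOT the judgment's base points.
ghana_bps = [(5.09, -3.09), (5.04, -2.95)]
ivoirian_bps = [(5.10, -3.20), (5.06, -3.35)]
print(is_equidistant((4.0, -3.5), ghana_bps, ivoirian_bps))
```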
Having confirmed that it had the power to fix the course of the line delimiting the continental shelf beyond 200 NM, the Special Chamber emphasized that "there is in law only a single continental shelf." While it did not fix a termination point, the Special Chamber found that this segment of the boundary runs in the same direction as the line within 200 NM.
Given that the boundary line thus fixed by the Special Chamber begins south and east of border post 55 (i.e., commences at a new starting point) and has its own turning points, plus its own azimuth, for the remainder of the single line, the final boundary line does not coincide with the equidistance line claimed by Ghana. The final boundary starts out east of Ghana's claimed line, crosses to the west (offshore) and then crosses again east nearing the 200-NM EEZ limit, before continuing to the extent of the continental shelf. Details of this can be seen in Figure 3 below.
The Special Chamber dismissed all of the parties' arguments that an adjustment or shifting of the provisional equidistance line was called for in the present case based on relevant circumstances such as concavity/convexity of the parties' coasts, the location of natural resources and the conduct of the parties. It found that strict application of the equidistance method resulted in an equitable solution in this case. The Special Chamber also determined, in Stage 3, that the line constructed by it "does not lead to an inequitable result owing to a marked disproportion between the ratio of the respective coastal lengths and the ratio of the relevant maritime area allocated to each Party."
The final delimitation line fixed by the Special Chamber is shown in Figure 2 below.
Fig. 2: Delimitation line (Sketch-map No. 7 from Judgment)
As this ruling underscores, maritime boundary delimitation involves a mixture of law and science, with geography prevailing over resource-related criteria such as the presence and location of hydrocarbons. The Special Chamber's choice of nautical chart BA 1383, which reflects very old near-shore mapping that only shows the coastal low-water line in some places and otherwise falls short of the UN's scale guidelines, apparently stems from the fact that both parties had relied on that chart until 2014 and had not jointly surveyed their coasts. The boundary's starting-point (BP55+) resulting from the use of this chart sits 159.87 meters (geodetic) south and 18.43 meters east of the parties' agreed land terminus point (BP55), thereby favoring Côte d'Ivoire near-shore, as Figure 3 below shows. The fact that this new position, which originates from the use of the low-water line associated with the equidistance method applied by the Special Chamber, differs from the bilaterally agreed upon starting point means that the boundary line fixed by the Special Chamber nowhere coincides with the lines claimed by Ghana and Côte d'Ivoire.
Fig. 3: Near-shore impact of Special Chamber's choice of boundary starting point
While the Special Chamber's chosen starting point of the maritime boundary favors Côte d'Ivoire near-shore along an initial segment measuring approximately 30 NM, Ghana received a boundary that is even more favorable than the boundary claimed by it along a segment of approximately 164 NM. The boundary fixed by the Special Chamber is actually seen to be sub-divided into three segments, with the final line initially lying east, then west and again east of Ghana's claimed equidistance line in the deepest waters as it approaches its approximate termination point at the outer portion of Ghana's extended continental shelf (ECS) claim, as recently accepted by the CLCS. See Figure 4 below for details.
Fig. 4: Segment-by-segment difference between final boundary and parties' claimed lines
The table below shows the difference in distance between the parties' claimed lines and the Special Chamber's maritime boundary with reference to the key points constituting the boundary, using CARIS LOTS boundary software. The two far-right columns underscore the extent of Ghana's victory.
Table 1: Difference (geodetic) between final boundary and parties' claimed lines
The maritime boundary resulting from the Special Chamber's ruling will require adjustments to practically all existing blocks for mineral resource activities licensed by the parties along the adjudicated boundary. This is a direct result of the Special Chamber's line not coinciding at any point with the parties' claimed lines. Therefore, the ruling, while binding only for Ghana and Côte d'Ivoire, directly impacts oil companies holding offshore oil and gas blocks in the hitherto disputed area. Figure 5 below shows the issued and open blocks licensed by Côte d'Ivoire and Ghana, respectively, that are affected by the ruling, highlighting block gains and losses resulting from the ruling.
Fig. 5: Affected offshore oil & gas blocks on either side of the final boundary
Following this latest ruling, which was immediately welcomed in a joint statement by the agents representing Ghana and Côte d'Ivoire, some 70 percent of Africa's lake and ocean boundaries remain to be delimited. This affects practically all of Africa's maritime waters, which hold offshore hydrocarbon reserves (EEZ and ECS waters) of approximately 95 billion Barrels Oil Equivalent (BBOE) (discovered) and, most importantly, yet-to-be-found reserves estimated at 70-80 BBOE, subject to basin limits. Many of these areas, which are believed to harbor more than half of Africa's total reserves, are in near proximity to the present-day unresolved boundaries.
About the Authors:
Pieter Bekker, LL.M., Ph.D. (Int'l Law), an ASIL member, holds the Chair in International Law at the University of Dundee's Centre for Energy, Petroleum and Mineral Law and Policy (CEPMLP), where he is Founding Director of the Dundee Ocean and Lake Frontiers Institute and Neutrals (DOLFIN). He is also a Partner at CMS Cameron McKenna Nabarro Olswang LLP.
Robert van de Poll, B.Sc. (Earth Sciences), M.Sc.Eng. (Geodesy & Geomatics), is Global Manager Law of the Sea for Fugro Group. He is also the creator of CARIS LOTS, the leading Law of the Sea and maritime boundary software used by the United Nations and by international courts and tribunals, and serves as DOLFIN's Geology Director.
The views expressed herein are solely those of the authors. Errors are the authors' responsibility.
Case Concerning Delimitation of the Maritime Boundary Between Ghana and Côte d'Ivoire in the Atlantic Ocean (Ghana/Côte d'Ivoire), Case No. 23, Judgment of 23 September 2017, ¶ 660, https://www.itlos.org/fileadmin/itlos/documents/cases/case_no.23_merits/C23_Judgment_23.09.2017_corr.pdf [hereinafter Judgment].
An equidistance line is a line every point of which is equidistant from the nearest points on the baselines from which the breadth of the territorial sea and other maritime zones of each coastal state is measured.
This transfer apparently was motivated by cost savings: In contrast to Annex VII proceedings, in which the parties must pay the fees and expenses of the arbitrators, the cost of an ITLOS Special Chamber is not borne by the parties. ITLOS Vice-President Boualem Bouguetaia of Algeria presided over the proceedings. The other members were former ITLOS President Thomas Mensah of Ghana (appointed by Ghana), International Court of Justice (ICJ) President Ronny Abraham of France (appointed by Côte d'Ivoire) and ITLOS Judges Rüdiger Wolfrum of Germany and Jin-Hyun Paik of the Republic of Korea.
The CLCS is an UNCLOS body charged with making recommendations to coastal states on matters relating to the establishment of the outer limits of the continental shelf beyond 200 NM, without prejudice to the delimitation (i.e., fixing the course of the lateral limits) of maritime boundaries. See Convention on the Law of the Sea Art. 76 & Annex II, Dec. 10, 1982, 1833 U.N.T.S. 397. This was the first maritime boundary case in which one of the parties (Ghana) had already received and accepted recommendations on the outer limits of its continental shelf from the CLCS.
According to the ICJ, "'demarcation' . . . presupposes the prior delimitation – in other words definition – of the frontier." Territorial Dispute (Libya/Chad), Judgment, 1994 I.C.J. Rep. 6, ¶ 56 (Feb. 3), available at http://www.icj-cij.org/files/case-related/83/083-19940203-JUD-01-00-EN.pdf.
See Case Concerning Delimitation of the Maritime Boundary Between Ghana and Côte d'Ivoire in the Atlantic Ocean (Ghana/Côte d'Ivoire), Case No. 23, Memorial of Ghana, ¶ 2.2, https://www.itlos.org/fileadmin/itlos/documents/cases/case_no.23_merits/pleadings/Memorial_of_Ghana_Vol._I.pdf.
As used in this analysis, "azimuth" means "the bearing of a geographical position, measured clockwise from north through 360 degrees." George K. Walker, Defining Terms in the 1982 Law of the Sea Convention III: Analysis of Selected IHO ECDIS Glossary and Other Terms, in Proceedings of the American Branch of the International Law Association 2003-2004, 214 (2004).
As the ICJ has observed, "[t]he bisector method ... seeks to approximate the relevant coastal relationships, but does so on the basis of the macro-geography of a coastline as represented by a line drawn between two points on the coast." Territorial and Maritime Dispute between Nicaragua and Honduras in the Caribbean Sea (Nicaragua v. Honduras), 2007 I.C.J. 659, ¶ 289 (Oct. 8), available at http://www.icj-cij.org/files/case-related/120/120-20071008-JUD-01-00-EN.pdf.
See UNCLOS, Arts. 74, 83 ("the delimitation . . . between States with opposite or adjacent coasts shall be effected by agreement on the basis of international law, . . . in order to achieve an equitable solution.").
Judgment, supra note 1, ¶ 586.
See id. ¶ 360.
See id. ¶ 379. The relevant Ghanaian coast was found to measure some 139 kilometers, resulting in a ratio of approximately 1:2.53 in favor of Côte d'Ivoire. The ratio of the allocated areas is approximately 1:2.02 in favor of Côte d'Ivoire. See id. ¶¶ 536–37.
The Special Chamber observed that the angle bisector method favored by Côte d'Ivoire has been applied by international courts and tribunals only occasionally when it was not feasible to construct an equidistance line due to the special geographical circumstances of the case. See id. ¶ 285.
British Admiralty, Chart 1383, scale 1:350,000, published July 1, 2004 (Latest Edition dated June 15, 2017). This chart contains no low-water line information in general proximity to the land terminus point to be used by both coastal states in constructing their maritime boundary. The relevant coastline as depicted on this chart, which was last surveyed between 1837 and 1846, lacks any precision when compared to modern-day satellite imagery.
The geodetic line starts at an azimuth of 191˚ 38' 06.7"
Judgment, supra note 1, ¶¶ 399–401.
Id. ¶ 490.
Id. ¶¶ 402–80.
Id. ¶ 533.
According to the Special Chamber, "a de facto line or modus vivendi related to oil practice [of the disputing coastal states] cannot per se be a relevant circumstance" in maritime delimitation. Id. ¶ 477.
See Baselines: An Examination of the Relevant Provisions of the United Nations Convention on the Law of the Sea, Ch. I.A., Art. 5, sub (8), ("It is recommended that in general the scale should be within the range 1:50,000 to 1:200,000"), Art. 5, sub (10) ("States . . . will usually select the low-water line shown on existing charts" (emphasis added)), UN Doc. E.88.V*.
See Judgment, supra note 1, ¶ 342.
See Ghana, Cote d'Ivoire Make Joint Pledge on ITLOS Ruling, Modern Ghana (Sept. 25, 2017), https://www.modernghana.com/print/805019/1/ghana-cote-divoire-make-joint-pledge-on-itlos-ruling.html.
See Robert van de Poll & David Bishopp, "Unlocking Trapped Subsurface Resource Value in Disputed African Maritime Boundaries," Presentation Delivered at the 21st Africa Oil Week/Africa Upstream Conference, Cape Town (Nov. 2014), available at www.dundee.ac.uk/cepmlp/research/researchinstitute-dolfin/. | <urn:uuid:1e07a0a7-9fa0-4dc8-ad32-ff29e781ebdb> | CC-MAIN-2021-21 | https://www.asil.org/insights/volume/21/issue/11/ghana-and-cote-divoire-receive-strict-equidistance-boundary | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00336.warc.gz | en | 0.925264 | 4,637 | 2.671875 | 3 |
Legislative framework of gambling regulation
Status: Law stated as at 01-Oct-2016 | Jurisdiction: Spain
Spain has a central government, 17 autonomous regions and two autonomous cities. Gambling was only decriminalised in Spain in 1977. The Spanish Gaming Act (Ley 13/2011, de 27 de mayo, de regulación del Juego), which came into force on 29 May 2011, regulates gaming. The main aim of the federal Gaming Act is to harmonise the regulation of online gambling in Spain, an activity that crosses regional boundaries (see below, Background to Gaming Act). At the same time, regional gambling continues to be regulated by each autonomous region.
The Act provides that:
- Online games that are played at federal level require a federal licence or authorisation. The federal licence is sufficient for operating in each region where the game operates.
- Games (including online games) played within a specific region require the relevant licence of the autonomous region.
Background to Gaming Act
The Gaming Act was a result of the fundamental changes that had occurred to gambling following its decriminalisation in Spain in 1977, as a result of the arrival of new electronic communication services, including interactive internet gaming. Technology’s rapid development in the 21st century had rendered the traditional regulatory instruments in Spain inadequate for operators, participants and regulatory bodies.
Prior to the Gaming Act, for example, online games that were developed in more than one autonomous region (usually at federal level) were unregulated. In contrast, the regulatory authority for land-based gambling had been delegated to the respective autonomous regions during the 1980s and 1990s. In addition, cross-border gaming was not expressly prohibited: although unauthorised or unlicensed gaming businesses were prohibited from operating from Spain, the legislation did not address such businesses operating from abroad.
Change started in late 2007 with the passing of the Law on Measures to Develop the Information Society (Ley 56/2007, de 28 de diciembre, de Medidas de Impulso de la Sociedad de la Información) by the Spanish Parliament. Its 20th additional provision obliged the Spanish Government to regulate online betting and gambling in Spain, and included a list of principles, which would govern the then future Gaming Act.
Although these principles were expected to be enacted in 2008 or 2009, the process did not start until September 2010, when the state lottery entity, State Lotteries (Loterías y Apuestas del Estado) (LAE), began to lead it. In the absence of a formal regulator, LAE continued to lead the regulatory process until the General Directorate for Gambling Regulation (Dirección General de Ordenación del Juego) was duly incorporated.
The sixth version of the draft Gaming Act became a Bill when the Spanish Congress published it in its Gazette in February 2011. The Bill was approved by the Congress on 12 April 2011 and was passed to the Senate, the Spanish upper house. Some amendments were introduced on 5 May 2011, and were definitively approved by the Congress on 12 May 2011.
The Gaming Act was published in the Spanish State Gazette (Boletín Oficial del Estado) on 28 May 2011 and came into force the day after (see above, Gaming Act).
Definitions of gambling
The Gaming Act defines gambling as an activity where amounts of money or economically measurable objects are put at risk on uncertain or future events, dependent at least to some extent on chance. Chance is present even where some degree of skill is involved, so the definition covers not only games that are exclusively or primarily games of luck; however, games of pure skill fall outside it (see Question 12). The prizes can be in cash or in kind, depending on the type of game. However, prizes cannot be in virtual currency with no monetary value (for example, currency used to purchase additional plays of the game).
There are therefore three essential elements of gambling:
- The staking of money or economically measurable objects.
- The dependence of the outcome, at least to some extent, on chance.
- The existence of a prize.
If one or more of these elements is absent, the game will be outside the scope of gambling and the Gaming Act and/or any secondary gaming regulation will therefore not apply.
Under the Gaming Act, online gaming refers to games performed through electronic channels and IT and interactive systems, where any device, equipment or system is employed to produce, store or transfer documents, data or information, including through any public or private communication network. The communication network can include television, the internet, land lines, mobile phones or any other interactive communication system, either in real time or recorded. The definition therefore extends to games in which on-site means play only a secondary role.
There is no separate definition for land-based gambling (see above, General definition).
For federal licences pursuant to the Gaming Act, the competent body for issuing licences, supervising games and exercising enforcement powers is the General Directorate for Gambling Regulation (Dirección General de Ordenación del Juego) (previously the National Gaming Commission).
For regional authorisations, the respective regional competent body is in charge of issuing licences, supervision and enforcement. In most cases, regional bodies will be subject to the supervision of the regional departments of finance or interior.
The Gaming Act and the regional regulations do not prescribe a set limit on the number of available licences. However, this is a condition that can be included on the basis of the public tender for gaming licences, once a call for it has been launched by the corresponding gaming authority.
See box, The regulatory authorities.
All operators that are interested in developing federally or regionally regulated games (that is, games exclusively or essentially of chance, or mixing chance with a player's skill) must obtain the corresponding licence or authorisation. Games that are exclusively games of skill (not officially approved) do not need a gaming licence.
Regulation is divided between online gaming that is offered at federal level and land-based gaming or online games that are offered at regional level (see Question 1, Gaming Act). Therefore, the gambling products that are identified depend on whether the regulation is at federal or regional level. If a game is not identified/regulated, it is forbidden.
Federal regulation. The Gaming Act does not establish a catalogue of games and each type of game must have its own regulation if it is subject to a licence. The gambling regulations at federal level have approved the following games, providing specific regulation for each of them:
- Pools on sports betting.
- Fixed-odds sports betting.
- Pools on horse racing.
- Fixed-odds horse racing.
- Other fixed-odds betting.
- Exchange betting games.
- Complementary games.
LAE maintains the monopoly on lotteries offered at federal level together with the National Organisation of Spanish Blind People (ONCE). Only charity organisations are allowed to organise lotteries.
See also Question 7, Available licences.
Regional regulation. Each of the 17 autonomous regions in Spain has its own licensing regime and its own catalogue of games, setting out the gambling products that operators can develop and offer, subject to authorisation.
With slight differences between the regions, the gaming products within the regional catalogues of games are generally the following:
- French roulette.
- American roulette.
- Trente et quarante (thirty and forty).
- Poker (different forms).
- Slots and other machine gaming.
- Tombola or charity raffles.
- Sports betting.
- Horse racing.
- Wheel of fortune.
Poker is subject to a licence and is regulated both at federal and regional level (see above, Overview).
Betting is subject to a licence and approved both at federal and regional level. Betting games are those placed on the result of one or more events included in programs previously established by the operator (such as events related to society, mass media, the economy, shows, culture and so on). A betting event can never be related to sport or horse racing, since there are special licences for those types of bets.
Sports betting is subject to a licence and is regulated both at federal and regional level (see above, Overview).
Horse racing is subject to a licence and is regulated both at federal and regional level.
Casino games are subject to a separate licence for each type (roulette, blackjack, baccarat or complementary games at federal level, or the different casino games approved at regional level). Casino games are regulated both at federal and regional level (see above, Overview).
Slot and other machine gaming
Slots are subject to a licence and are regulated both at federal and regional level. Other machine gaming (that is, other gaming devices) is regulated only at regional level and is likewise subject to a licence (see above, Overview).
Terminal-based gaming requires a licence for the underlying game, both from the federal regulator and in the autonomous region where the land-based terminal will be placed. There is no specific licence for the terminal itself; only official approval of the technical systems used is required, provided that the operator is duly licensed for the specific game intended to be offered by this means.
Bingo is subject to a licence and is regulated both at federal and regional levels (see above, Overview).
LAE and ONCE maintain the monopoly on lotteries offered at federal level. Only charity organisations are allowed to organise lotteries (see above, Overview).
The licensing regime for land-based games depends on each regional regulation (see Question 4, Overview: Regional regulation). None of the regional regulations include a limit on the available number of licences.
Companies that are interested in organising and commercialising games in any of the regions must first apply for the necessary administrative authorisation or licence (depending on the regulatory regime), which must sometimes be done through a public tender process. Commercialising covers activities such as determining prizes or tournaments, managing the gaming platform, registering users, setting players' rules, and handling transactions and payment settlements. One common requirement in all regional regulations is that the company must be registered in the general registry of gaming companies in the respective region.
For a company to obtain regional authorisation, it must submit to the regional competent authority a long list of documents. As with the documents required in the federal regulation, these are required mainly to prove legal, economic and technical solvency.
Among the list of documents, the most significant are:
- Economic guarantees. The requested guarantees in some regions are substantial: for example, in the Madrid region an operator must submit an economic guarantee of EUR12 million for an authorisation to organise and commercialise betting games.
- Certificate that the operator has no outstanding social security or tax payments.
- Documents and certification reports proving that the operator complies with all technical requirements.
- Documents proving the applicant’s economic and legal solvency (for example annual accounts, bank certificates, articles of association, deeds of incorporation of the company, and so on).
Together with the application for authorisation to commercialise and exploit games, companies can also apply for authorisation for gaming establishments and businesses (which involves different and additional requirements). Some regions take a more restrictive approach and set requirements such as the number of slots/betting terminals, establishments or installations, the minimum size (in square metres) of the gaming premises, and so on.
Only the applicants who comply with all requirements of regional regulation can successfully apply for the respective licence.
The duration of the application process depends on each regional regulation and on the conditions of the public tender process (if applicable). It generally takes between three and six months from the date of submission by the operator.
Duration of licence and cost
The duration of licences depends on each region and type of licence. A common practice is to grant licences for ten years, renewable for another ten years. The cost varies depending on the:
- Type of licence.
- Region where the application is made.
Generally, land-based gambling operators are only subject to the prohibitions on gambling that apply to:
- Minors under the legal age.
- People who have been declared disabled by law or court decision.
- People who have voluntarily requested to limit their access to gaming.
Online gambling at a regional level is restricted (for example, only tax residents of the Madrid region are allowed to play there). In theory, a tax resident of Madrid is entitled to play on the website of a Madrid operator from any other location within Spain, only by virtue of their tax residency. However, autonomous regions, such as Madrid, have authority only in their own territory and have no authority in other regions where their tax residents might go. Therefore, there is a clear conflict between the authorities of the different regions.
Anti-money laundering legislation
Land-based casinos must identify and check the identity of persons entering their establishments (Article 7.5, Spanish AML Act (Law No. 10/2010)).
Casinos must also identify all persons requesting any of the following:
- Delivery of cheques to customers as a result of the exchange of chips.
- Transfer of funds.
- Certificates of the gains obtained.
- Purchase or sale of gambling chips for EUR2,000 or more in total.
Online gambling in Spain can be offered in one specific autonomous region, in more than one region, or at federal level. The latter is the most common case for online gaming. The federal licence covers every region, so an operator that holds a federal licence does not need to apply for a regional one.
The aim of the Gaming Act is to regulate all types of games at federal level to:
- Protect public order.
- Combat fraud.
- Prevent addictive behaviours.
- Protect minors.
- Safeguard players’ rights.
This should be done without affecting the requirements of the regional statutes.
There are no issues as to the legality of the local law at EU level.
According to the Gaming Act, entities interested in operating must obtain both:
- A general licence per category of game.
- A single licence for each specific type of game within the category.
Royal Decree 1614/2011 of 14 November 2011, which develops the Gaming Act 13/2011 as regards licences, authorisations and gaming registries, together with the ministerial orders approving the specific regulation of each kind of game, establishes the categories for the single licences in accordance with each type of general licence.
The following general licences are available:
- Betting. The single licences available in this category are for:
- pools on sports betting;
- fixed odds sports betting;
- pools on horse racing;
- fixed odds horse racing;
- other fixed odds betting; and
- exchange betting.
- Raffles. No single licences are available for raffles.
- Contests. A single licence for contests is also available.
- Other games. The single licences available in this category are for:
- complementary games;
- baccarat;
- bingo;
- blackjack;
- poker;
- roulette; and
- slots.
For example, if an operator intends to offer fixed-odds sports betting and poker games, it must obtain:
- Two general licences for betting and other games.
- A single licence for each game that it is going to commercialise, in this case two single licences for fixed odds sports betting and poker.
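The two-tier structure lends itself to a small lookup. The sketch below is purely illustrative: the game-to-category mapping is an assumption assembled from the games discussed in this note (for example, poker and slots sit under "other games"), not an official schedule, and the helper function is hypothetical.

```python
# Hypothetical mapping of single licences to their general-licence category,
# assembled from the categories described above (assumption, not official).
CATEGORY_BY_GAME = {
    "pools on sports betting": "betting",
    "fixed-odds sports betting": "betting",
    "pools on horse racing": "betting",
    "fixed-odds horse racing": "betting",
    "other fixed-odds betting": "betting",
    "exchange betting": "betting",
    "contests": "contests",
    "complementary games": "other games",
    "baccarat": "other games",
    "bingo": "other games",
    "blackjack": "other games",
    "poker": "other games",
    "roulette": "other games",
    "slots": "other games",
}

def licences_required(games):
    """One general licence per category, plus one single licence per game."""
    general = sorted({CATEGORY_BY_GAME[game] for game in games})
    return {"general": general, "single": sorted(games)}

# The example from the text: fixed-odds sports betting plus poker.
print(licences_required(["fixed-odds sports betting", "poker"]))
# -> {'general': ['betting', 'other games'],
#     'single': ['fixed-odds sports betting', 'poker']}
```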
Operators can only obtain a general licence during the public tender announced by the Minister of Finance. So far, there have been public calls in November 2011 and in December 2014. Single licences can be applied for together with the general licence or at any other time, if the applicant already holds the relevant general licence.
The number of licences or operators was not limited by the tenders, and all operators meeting the tenders' requirements have obtained a licence. The unsuccessful ones are banned from operating in Spain, at least until a new public tender is held.
There is a long list of requirements and documents that the applicants must file.
The main requirements that an online gaming operator must comply with are:
- Incorporation in Spain or elsewhere in the EEA as a Sociedad Anónima (SA), or the EEA equivalent.
- If the operator is not based in Spain, it needs, as a minimum, a permanent representative in Spain with capacity to receive notifications and e-notifications (that is, a physical address and an email account are necessary).
- The company’s objects must be restricted to the organisation, commercialisation and exploitation of gaming/betting activities.
The documents filed with the applications fall into three groups:
- Legal solvency. This includes:
- corporate documents;
- certificates showing that the applicant has no outstanding tax or social security payments; and
- proof of payment of the relevant administrative fees.
- Economic solvency. This includes:
- the annual accounts of the last three years;
- a description and the origin of the operator's own and external resources planned to be used for the development of the relevant activity; and
- economic guarantees of significant amounts (for example EUR2 million for betting, EUR2 million for other games and EUR500,000 for contests).
These amounts are required for the initial period, which commences on the date of the licence application and ends on 31 December of the year after the licence is granted. After this, the total amount of the guarantees is revised (in most cases reduced) and the gaming operator instead provides one amount covering all its general and single licences. This amount is calculated from the previous year's figures using a percentage established by law on the gross or net income of each single licence that the operator has been granted; the percentage varies depending on the licence. Once the calculation is made, the guarantee is the higher of EUR1 million and the sum of the calculated amounts for each game.
As an example, consider an operator that offered roulette, blackjack and slots during the previous year. The legal percentage established by the regulation for the calculation of the guarantee for these games is 8% on gross gaming revenue (GGR). Assume the GGR for these games for the last year was EUR1 million, EUR500,000 and EUR1 million respectively. To update the guarantee, the operator must calculate 8% of the GGR of these games:
- if the sum of the results for these three games is more than EUR1 million, that sum is the amount of the guarantee covering all the licences of the operator;
- if the result of the calculation for these three games is less than EUR1 million, the guarantee will be EUR1 million, since this is the minimum amount for the guarantee established by the regulation.
Following this example, the result of the calculation is EUR200,000 (8% of EUR2.5 million) and, therefore, the new guarantee will be EUR1 million. The minimum guaranteed amount must be at least EUR1 million in total for all of the operator's single and general licences, except for the contest games licence (where the minimum amount must be at least EUR250,000).
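To make the arithmetic concrete, here is a minimal sketch of the guarantee update described in the example above. The 8% rate, the per-game GGR figures and the EUR1 million floor come from the text; the function name and structure are illustrative, not taken from the regulation.

```python
# Illustrative sketch of the annual guarantee update described above.
# The 8% rate and the EUR1 million floor follow the worked example;
# actual percentages vary by licence type under the Spanish regulation.

def updated_guarantee(ggr_by_game: dict[str, float],
                      rate: float = 0.08,
                      floor: float = 1_000_000.0) -> float:
    """Return the new guarantee: the legal percentage applied to the
    previous year's GGR of all single licences, subject to the minimum."""
    calculated = rate * sum(ggr_by_game.values())
    return max(calculated, floor)

# The worked example: roulette, blackjack and slots.
ggr = {"roulette": 1_000_000, "blackjack": 500_000, "slots": 1_000_000}
print(updated_guarantee(ggr))  # 8% of EUR2.5m = EUR200,000 -> EUR1,000,000 floor applies
```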
- Technical solvency. This must be proved through a number of documents and certifications from independent bodies. For example, the operator's technical project must be submitted together with a preliminary certification report from an independent laboratory, certifying that the technical project meets the Spanish licence requirements for software, security and connections. Operators must also submit certification by an independent laboratory that the operator's internal control system complies with the technical specifications and that the gaming operations data is stored in Spain. Additionally, other documents regarding the basis of the tender are necessary, such as:
- the anti-money laundering policy;
- operating plan; and
- definitive agreements executed with third parties.
The window to apply for licences is open for 30 working days from the day after the call for public tender is published. Once the applications are submitted, the General Directorate for Gambling Regulation has up to six months either to issue the licences or to refuse the application.
Duration of licence and cost
General licences are granted for a term of ten years, which can be extended for a further ten years. For a licence to be renewed, the operator must submit to the General Directorate for Gambling Regulation, between 12 and four months before the licence expires, documents certifying its compliance with the renewal conditions. The main condition is the continued exploitation, for six years, of any of the single licences held under the corresponding general licence. Single licences are granted for a minimum term of one year and a maximum of five (depending on each single licence), and can be extended for identical periods of time. Similarly, to apply for a renewal, the operator must submit certain documents to the General Directorate for Gambling Regulation. For both general and single licences, the operator will be granted the renewal if it meets the necessary conditions; licences can therefore be extended indefinitely.
The administrative fees payable when a gaming licence application is made are:
- EUR10,000 for each general and single licence.
- EUR2,500 for each general and single licence that must be entered into the Gaming Registry.
- EUR38,000 for the certification of all licences applied for, which is payable once.
Operators must also pay by 31 January every year the annual levy for the General Directorate for Gambling Regulation’s regulatory activities. This amounts to 0.075% of the previous year’s annual turnover.
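A minimal sketch of that levy calculation (the turnover figure below is illustrative only):

```python
# Annual regulatory levy described above: 0.075% of the previous
# year's annual turnover (the figure used below is illustrative).
def annual_levy(previous_year_turnover_eur: float) -> float:
    return 0.00075 * previous_year_turnover_eur

print(annual_levy(10_000_000))  # EUR7,500 on EUR10m turnover
```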
Online gambling operators are prohibited from (Gaming Act):
- Organising, commercialising and exploiting games that, due to their nature or purpose:
- violate people’s dignity, right to honour, personal/family privacy, reputation, rights of young people and children, or any constitutionally-recognised right or freedom;
- are based on committing offences, misconduct or administrative infringements; or
- are concerned with events forbidden by the legislation in force.
- Allow any of the following to participate in gambling:
- minors and people who have been declared legally incapacitated by law or judicial resolution;
- people who have voluntarily requested their access to gaming be restricted;
- shareholders, owners, management staff and employees directly involved in the development of the games, as well as their relatives;
- athletes, trainers or any other person who directly participates in the event or activity on which bets are placed; or
- certain members and staff of the General Directorate for Gambling Regulation, and their relatives.
The main restrictions are related to whether the type of game is approved, and whether the operator is authorised to offer it (see Question 7). If a game is not approved, it cannot be offered.
Anti-money laundering legislation
To prevent money laundering, the online operator must identify and verify the identity of the winners of prizes of EUR2,500 or more through reliable documents and must keep copies of the records.
B2B and B2C
Both B2C and B2B operators are covered by the definition of "gaming operator" (Royal Decree 1614/2011 of 14 November 2011, developing the regulation of the Gaming Act 13/2011 on Licences, Authorisations and Gaming Registries). Whether a company that organises, exploits or commercialises gaming activities is a gaming operator depends on the specific services that each operator renders and the conditions under which it does so. Gaming operators are defined as individuals or legal persons that both (Article 3.2, Royal Decree 1614/2011):
- Run a gaming activity so that their income from it is linked to the gross or net revenue, commissions and any other amounts.
- Perform any activity commercialising gaming, such as determining prizes or tournaments, managing the gaming platform, registering users, setting players' rules, and handling transactions and payment settlements.
If a gaming operator manages a gaming platform of which it is a member, or which other gaming operators join to pool together stakes from their respective users, it is also a gaming co-organiser. In accordance with the terms of the public tender call or the applicable regulation, the General Directorate for Gambling Regulation may adapt or establish exceptions to certain requirements for gaming operators if those rules are not directly applicable to the activity performed as gaming co-organisers, and may also limit their liability regime (Article 3.3). The General Directorate for Gambling Regulation has not yet officially implemented the liability regime of B2B operators. This provision will allow it to limit the responsibility of B2B operators for a breach of a regulatory provision, where appropriate given the characteristics of a B2B's activity.
In practice, therefore, there is no distinction between the law applicable to B2B operations and to B2C operations in online gambling. If an operator meets the requirements established by the regulation, it will be considered a gaming operator requiring a licence.
There are no specific technical measures set out in the regulation to protect consumers from unlicensed operators. However, the General Directorate for Gambling Regulation can request ISPs and financial entities to adopt blocking measures within sanctioning procedures initiated against illegal operators. Blocking access from Spanish IP addresses and payment blocking are the most common measures.
Mobile gambling and interactive gambling
There are no differences between the regulation of mobile gambling and interactive gambling on television since the Gaming Act is equally applicable to these activities.
Games of pure skill/for no monetary value
The concept of gambling requires the existence of three essential components:
- The payment of a stake or consideration.
- The presence of chance.
- The existence of a prize.
See Question 2, General definition.
If one of these factors is not in place, the game falls outside the definition of gambling and the Gaming Act does not apply. This means that no gaming licence is necessary if a game is free to play or if there is no prize. The same rule applies to games using virtual currency of no monetary value, even if credits can be purchased for real money but the player cannot win anything of value other than additional game plays, chips, coins or other virtual items. Games of pure skill also fall outside the scope of gambling as they do not involve a degree of chance and therefore do not require a licence or a regional authorisation.
Regulation of complementary games
With regard to social games, there is regulation at federal level for complementary games (incorrectly called "social games" in earlier draft versions of the regulation). Complementary games combine chance with skill, culture and knowledge in varying ways; what they have in common is that their main purpose is entertainment and they are not ultimately based on economic profit.
The maximum amount that a player can risk on such a game is EUR1 per hand. The maximum prize that the player can win is 40 times the amount risked (that is, EUR40).
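A minimal validity check for these stake and prize caps (the function name is illustrative):

```python
# Stake/prize caps for complementary games as described above:
# at most EUR1 risked per hand, prize at most 40x the stake.
def within_complementary_limits(stake_eur: float, prize_eur: float) -> bool:
    return stake_eur <= 1.0 and prize_eur <= 40 * stake_eur

print(within_complementary_limits(1.0, 40.0))  # True
print(within_complementary_limits(1.0, 50.0))  # False
```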
For complementary games, there is no general catalogue list of games: each operator must produce a catalogue containing all complementary games it intends to commercialise within the platform. This catalogue must be notified to the General Directorate for Gambling Regulation at least 15 days before the start date of commercialising the games, together with the specific rules of each proposed complementary game. Once the General Directorate for Gambling Regulation analyses the catalogue, it may suspend it or demand certain changes to guarantee the participants’ protection.
Gambling debts are enforceable in Spain. At the time of applying for a licence, operators must execute an economic guarantee to cover the payments of prizes to players, fines and gaming taxes in case the operator cannot meet such payments on time.
Taxes differ according to the applicable legislation: whether the game is regulated by the relevant autonomous region or by the federal government (see Question 4).
The applicable taxes to land-based games are approved by each regional regulation. For example, in the Madrid region, they are:
- 20% on gross gaming revenue (GGR).
- 15% on GGR for bingo, and 30% on GGR for electronic bingo.
- 10% on GGR for online games developed in the Madrid region.
- 13% on GGR for betting games and 10% on GGR for sports betting, horse racing and other betting games.
The tax rate applicable to casino games depends on the tax base. The tax base is the gross income, that is, the total amount that the players dedicate to participation in the games (turnover). (By contrast, GGR is that amount minus the winnings/prizes paid out by the operator.) The rates are:
- 22% if the tax base is less than EUR2 million.
- 30% if the tax base is from EUR2 million to EUR3 million.
- 40% if the tax base is from EUR3 million to EUR5 million.
- 45% if the tax base is more than EUR5 million.
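Read literally, the scale above applies a single rate to the whole tax base according to the bracket it falls in. The sketch below follows that literal reading; whether a given region instead applies the rates by tranche, and how exact boundary amounts are treated, are assumptions to verify against the regional rules.

```python
# Sketch of the casino-game tax scale as literally described above:
# one rate applied to the entire tax base, chosen by bracket.
# Assumptions: the rate is flat per bracket (not progressive by
# tranche), and boundary amounts fall into the higher-rate bracket.

def casino_tax(tax_base_eur: float) -> float:
    if tax_base_eur < 2_000_000:
        rate = 0.22
    elif tax_base_eur <= 3_000_000:
        rate = 0.30
    elif tax_base_eur <= 5_000_000:
        rate = 0.40
    else:
        rate = 0.45
    return rate * tax_base_eur

print(casino_tax(2_500_000))  # 30% of EUR2.5m = EUR750,000
```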
There are fixed rates that apply to operating betting terminals or automatic appliances that are suitable for the development of games. The amount is dependent on the applicable regulation.
The tax rates for online games subject to the Gaming Act at federal level are as follows:
- Pool betting on sports: 22% on turnover. Turnover is the gross income, that is, the total amount dedicated by the players to participation in the games. GGR is that amount minus the winnings/prizes paid by the operator.
- Fixed-odds sport betting: 25% on GGR.
- Betting exchanges for sports: 25% on GGR.
- Pool betting on horse racing: 15% on turnover.
- Fixed-odds horse betting: 25% on GGR.
- Other forms of pool betting: 15% on turnover.
- Other forms of fixed-odds betting: 25% on GGR.
- Other forms of exchange betting: 25% on GGR.
- Raffles: 20% on turnover or 7% on turnover for non-profit associations.
- Contests: 20% on turnover.
- Other games (poker, casino, bingo, slots, and so on): 25% on GGR.
- Random combination games: 10% of the prize market value.
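These federal rates reduce to a simple lookup of taxable base and rate per game category. The sketch below restates the list above in code; the category keys are illustrative labels, not official terms.

```python
# Illustrative lookup of the federal online gaming tax rates listed above.
# "ggr" = stakes minus winnings paid out; "turnover" = total stakes.
FEDERAL_RATES = {
    "pool_betting_sports":     ("turnover", 0.22),
    "fixed_odds_sports":       ("ggr",      0.25),
    "exchange_betting_sports": ("ggr",      0.25),
    "pool_betting_horses":     ("turnover", 0.15),
    "fixed_odds_horses":       ("ggr",      0.25),
    "other_pool_betting":      ("turnover", 0.15),
    "other_fixed_odds":        ("ggr",      0.25),
    "other_exchange_betting":  ("ggr",      0.25),
    "raffles":                 ("turnover", 0.20),  # 7% for non-profit associations
    "contests":                ("turnover", 0.20),
    "other_games":             ("ggr",      0.25),  # poker, casino, bingo, slots
    "random_combinations":     ("prize_market_value", 0.10),
}

def federal_tax(game: str, amount_eur: float) -> float:
    base, rate = FEDERAL_RATES[game]
    # amount_eur must be measured on the stated base for that game
    return rate * amount_eur

print(federal_tax("fixed_odds_sports", 1_000_000))  # EUR250,000 on GGR
```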
There are advertising rules for land-based games in each autonomous region. Most regions require a prior specific authorisation for advertising activities. Other regions seek to relax the regulatory burden and are removing this separate authorisation. Among the most recent regional regulations are those of the Canary Islands and Valencia. Instead of the prior separate authorisation, these regulations establish only general principles and prohibitions to ensure that advertising respects the rights of citizens, and in particular protects young persons, children and consumer health.
Carrying out advertising activities, by any means, without authorisation or outside the limits established in the certificate or regulation, is a criminal offence (see below, Online gambling).
Since licences were granted on 1 June 2012, the general rules are as follows:
- Advertising and sponsoring activities are only allowed for operators with a Spanish licence and an authorisation to advertise.
- The General Directorate for Gambling Regulation has drafted a self-regulatory code of conduct, which the majority of gambling operators, advertising agencies, media operators and communication services providers have subscribed to. This code of conduct and the general Spanish law regarding advertising and ecommerce also apply to online gambling.
- Unauthorised gaming advertising is prohibited. Carrying out advertising activities, by any means, without authorisation or outside the limits established in the certificate or regulation, is a criminal offence (Article 40(d), Gambling Act). This is subject to fines from EUR100,000 to EUR1 million and cessation of the activity in Spain for six months.
Developments and reform
Probably the most significant regulatory change within land-based gambling in recent years is the growth of sports-betting regulation: out of the 17 autonomous regions, 15 have developed regulations for land-based sports betting.
Overall, the legal status of land-based gaming has not changed significantly in recent years. However, different regional regulations include a number of minor changes that make them less restrictive and more accessible to operators. A convergence between the territorial regulations and the federal ones is therefore likely in the medium term.
The legal status of online gambling has changed significantly in recent years since the sector was unregulated until May 2011 and online gaming operators had almost no obligations (see Question 1). In contrast, they are now prohibited from operating without a licence and the ones that are legally operating have duties and obligations imposed by the gaming legislation.
Most of the regional regulations are being amended to establish a procedure authorising federally licensed operators to commercialise online gaming activities through physical terminals placed in the territory of each autonomous region.
The General Directorate for Gambling Regulation is reviewing the federal gaming tax regime. The tax rate currently applicable to online casino games and fixed odds betting games at federal level is 25% on GGR. Compared with the 10% rate that applies to gaming regionally, 25% is considered excessive by the industry and results in a lack of competitiveness. The industry argues that this rate should be aligned with the regional rates applicable to online games, or at least reduced to the much more economically viable rate of 15%. However, even though the General Directorate for Gambling Regulation is reviewing the tax rate, a different body within the Ministry of Finance will then need to approve it. The changes under consideration are unlikely to be adopted in the short term.
The regulatory authorities
General Directorate for Gambling Regulation (Dirección General de Ordenación del Juego)
Description. The General Directorate for Gambling Regulation is responsible for:
- Developing basic gaming regulations.
- Proposing to the Minister of Economy and Finance new calls for public tender and granting licences.
- Establishing the technical and functional requirements of the games, approving the technical systems, and issuing general instructions to operators.
- Supervising, controlling, inspecting and penalising gaming-related activities, and prosecuting illegal gambling where necessary.
- Ensuring that the interests of the participants and of vulnerable groups are protected.
- Resolving claims that participants file against operators.
General Directorate for Gambling Regulation (Dirección General de Ordenación del Juego)
Description. This is the official site of the General Directorate for Gambling Regulation, with all relevant information, such as the applicable regulation, latest news, the list of legal operators and various guidance and assistance resources for players. The English version (www.ordenacionjuego.es/en) is for guidance and information purposes only.
Source: Practical Law | <urn:uuid:e4d120fb-8dbe-481f-9dec-1734ac80448f> | CC-MAIN-2021-21 | https://www.responsiblegambling.eu/regulation/gaming-in-spain-overview/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989030.87/warc/CC-MAIN-20210510033850-20210510063850-00296.warc.gz | en | 0.928762 | 7,197 | 2.890625 | 3 |
Silk is a natural protein fiber, some forms of which can be woven into textiles. The protein fiber of silk is composed mainly of fibroin and is produced by certain insect larvae to form cocoons. The best-known type of silk is obtained from the cocoons of the larvae of the mulberry silkworm Bombyx mori reared in captivity (sericulture). The shimmering appearance of silk is due to the triangular prism-like structure of the silk fiber, which allows silk cloth to refract incoming light at different angles, thus producing different colors. Research is also under way into using spider silks to produce novel, protein-based, eco-friendly materials for medical, cosmetic, electronic, textile, industrial and other applications.
- Quotes are arranged alphabetically by author
A - F
- Silk and Lace
Stronger than an evening fire
My heart yearns with desire
To build with you a burning pyre
That rivals the beauty of angelic choir
In you I see both beauty and grace
Each time I gaze upon your brilliant face
For you are Beauty draped in silk and lace
And in your eyes my heart has found its place
To speak with you is like Autumn wind
The world is kind when you and I see the day begin
Together embraced in silk and lace a passion that knows no end
Though we are not lovers loving under covers I am glad to know you as a friend.
- John Allen in: Silk and Lace, poetrysoup.com
- She maketh herself coverings of tapestry; her clothing is silk and purple. It looks as if she did not neglect herself. The coverings mentioned are for her bed. She took time to decorate and adorn her bedroom with a beautiful bedspread, pillows, etc. Her clothing is attractive and made of silk. Silk was one of the finest linens from Egypt.
- HANDKERCHIEF, n. A small square of silk or linen, used in various ignoble offices about the face and especially serviceable at funerals to conceal the lack of tears. The handkerchief is of recent invention; our ancestors knew nothing of it and entrusted its duties to the sleeve. Shakespeare's introducing it into the play of "Othello" is an anachronism: Desdemona dried her nose with her skirt, as Dr. Mary Walker and other reformers have done with their coattails in our own day --an evidence that revolutions sometimes go backward.
- It's funny how worms can turn leaves into silk.
But funnier far is the cow:
She changes a field of green grass into milk
And not a professor knows how.
- Dorothy Caruso in: Enrico Caruso his life and death, 1945, p.42
- The Spirit of the Gift
It is not the weight of jewel or plate,
Or the fondle of silk or fur;
'Tis the spirit in which the gift is rich,
As the gifts of the Wise Ones were,
And we are not told whose gift was gold,
Or whose was the gift of myrrh.
- Edmund Vance Cooke in:The Kindergarten for Teachers and Parents, Volumes 17-18, Alice B. Stockham & Company, 1904, p. 211
- BLACK SILK DRESS
Her black silk dress
Fitted her like a sheath
The taut lines showed
Her nakedness beneath
Save for black-stockings
Gartered at the thigh
Stimulating to the loins
And pleasing to the eye
She turned every head
With her glamorous allure
Filled each one with thoughts
None of which were pure
- According to an ancient Chinese legend, one day in the year 240 BC, Princess Si Ling-chi was sitting under a mulberry tree when a silkworm cocoon fell into her teacup. When she tried to remove it, she noticed that the cocoon had begun to unravel in the hot liquid. She handed the loose end to her maidservant and told her to walk. The servant went out of the princess's chamber, and into the palace courtyard, and through the palace gates, and out of the Forbidden City, and into the countryside a half mile away before the cocoon ran out. (In the West, this legend would slowly mutate over three millennia, until it became the story of a physicist and an apple. Either way, the meanings are the same: great discoveries, whether of silk or of gravity, are always windfalls. They happen to people loafing under trees.)
- They don't farm silk in America. They wear clothes, don't they? Or do they go around naked? If they wear clothes, they need silk. And they can buy it from me. “Okay, whatever you want. Just hurry.”
- Jeffrey Eugenides in: "Middlesex: Reissued", p. 43
- My family might never have become silk farmers if it hadn't been for the Emperor Justinian, who, according to Procopius, persuaded two missionaries to risk it. In 550 AD, the missionaries snuck silkworm eggs out of China in the swallowed condom of the time: a hollow staff. They also brought the seeds of the mulberry tree. As a result, Byzantium became a center of sericulture. Mulberry trees flourished on Turkish hillsides. Silkworms ate the leaves. Fourteen hundred years later, the descendants of those stolen eggs filled my grandmother's silkworm box in Guilin.
- Jeffrey Eugenides in: "Middlesex: Reissued", p. 71
- Orthodox monks smuggled silk out of China in the sixth century. They brought it to Asia Minor. From there it spread to Europe, and finally traveled across the sea to North America. Benjamin Franklin fostered the silk industry in Pennsylvania before the American Revolution. Mulberry trees were planted all over the United States.
- Jeffrey Eugenides in: "Middlesex: Reissued", p. 396
- The most widely raised type of silkworm, the larva of Bombyx mori, no longer exists anywhere in a natural state. The legs of the larvae have degenerated, and the adults do not fly.
- Jeffrey Eugenides in: "Middlesex: Reissued", p. 397
- A Silk Rose
I run my hands down your silk shoulders
till our hands are in each others.
As our bodies lightly touch,
these lips can only kiss.
I whisper softly
my only words,
I love you
- Michael J. Falotico in: A Silk Rose, poetrysoup.com
G - L
- Your thought advocates fame and show. Mine counsels me and implores me to cast aside notoriety and treat it like a grain of sand cast upon the shore of eternity. Your thought instills in your heart arrogance and superiority. Mine plants within me love for peace and the desire for independence. Your thought begets dreams of palaces with furniture of sandalwood studded with jewels, and beds made of twisted silk threads. My thought speaks softly in my ears, "Be clean in body and spirit even if you have nowhere to lay your head." Your thought makes you aspire to titles and offices. Mine exhorts me to humble service.
- Narrated Hudhaifa: The Prophet said, "Do not drink in gold or silver utensils, and do not wear clothes of silk or Dibaj, for these things are for them (unbelievers) in this world and for you in the Hereafter."
- HADITH Sahih Bukhari 7:69:537
- Narrated 'Abdullah bin Umar: Umar bought a silk cloak from the market, took it to Allah's Apostle and said, "O Allah's Apostle! Take it and adorn yourself with it during the 'Id and when the delegations visit you." Allah's Apostle (p.b.u.h) replied, "This dress is for those who have no share (in the Hereafter)." After a long period Allah's Apostle (p.b.u.h) sent to Umar a cloak of silk brocade. Umar came to Allah's Apostle (p.b.u.h) with the cloak and said, "O Allah's Apostle! You said that this dress was for those who had no share (in the Hereafter); yet you have sent me this cloak." Allah's Apostle said to him, "Sell it and fulfill your needs by it."
- Sahih Bukhari 2:15:69, See also: Sahih Bukhari 3:47:782, and Sahih Bukhari 8:73:104
- Narrated 'Ali: The Prophet gave me a silken dress as a gift and I wore it. When I saw the signs of anger on his face, I cut it into pieces and distributed it among my wives."
- Sahih Bukhari 3:47:784, See also: Sahih Bukhari 7:64:279
- Narrated Anas: A Jubba (i.e. cloak) made of thick silken cloth was presented to the Prophet. The Prophet used to forbid people to wear silk. So, the people were pleased to see it. The Prophet said, "By Him in Whose Hands Muhammad's soul is, the handkerchiefs of Sad bin Mu'adh in Paradise are better than this." Anas added, "The present was sent to the Prophet by Ukaidir (a Christian) from Dauma."
- Sahih Bukhari 3:47:785, see also: Sahih Bukhari 4:54:471
- Narrated Abu 'Amir or Abu Malik Al-Ash'ari: that he heard the Prophet saying, "From among my followers there will be some people who will consider illegal sexual intercourse, the wearing of silk, the drinking of alcoholic drinks and the use of musical instruments, as lawful. And there will be some people who will stay near the side of a mountain and in the evening their shepherd will come to them with their sheep and ask them for something, but they will say to him, 'Return to us tomorrow.' Allah will destroy them during the night and will let the mountain fall on them, and He will transform the rest of them into monkeys and pigs and they will remain so till the Day of Resurrection."
- Sahih Bukhari 7:69:494
- For me, each day begins and ends with wanting to learn a little more about the secrets of spider silk. Spiders have been around for over 300 million years and are found in nearly every terrestrial environment. There are more than 40,000 species living today and each spins at least one type of silk. However, most spiders spin more than one type of silk. For example, the orb-web weaving spiders that are commonly seen in gardens during the day or near porch lights at night, typically make seven kinds of silk. Each silk is chemically and functionally distinctive.
- An individual spider can produce multiple varieties of silk because it has numerous silk glands inside its body. Some silk glands make one type of silk, another set of silk glands makes a second type of silk, and so forth. One of the unforgettable moments in my life was the first time I dissected a spider and saw its stunningly beautiful, translucent silk glands.
- Cheryl Hayashi in: "The secrets of spider silk"
- What's the difference between spider silk and silkworm silk, the kind of silk in a typical silk scarf or blouse? Silk used in textiles is spun from the mouths of caterpillars to form cocoons that protect them while they transform into moths. A silkworm has only one pair of silk glands and can make one type of fiber.
- Cheryl Hayashi in: "The secrets of spider silk"
- Spiders, in contrast, have many silk glands, and the silk emerges from spinnerets located towards the rear of their bodies. Spiders are also able to spin silk from when they are very young and continue to do so throughout their lives.
- Cheryl Hayashi in: "The secrets of spider silk"
- Researchers are drawing inspiration from spider silks to produce novel, protein-based, eco-friendly materials for use in medical, cosmetic, electronic, textile, industrial, and other applications. The potential is enormous, especially considering the mind-boggling diversity of spiders and their silks.
- Cheryl Hayashi in: "The secrets of spider silk"
- Golden silk with deep red tie
Folded in golden silk
with deep red tie of the same
tucked inside a bedside drawer
this is for what she came...
holding to her heart these now opened letters
with tear stains she has no shame
the i love yous wrapped in golden silk
with deep red tie of the same.
- Melisa Karpinske (angel4eva1976) in: Poetry, gotpoetry.com
- So now our dear old Santa, he's a picture of health.
Now that he's so much thinner, he moves with greater stealth.
Please don't leave him cookies, forget about the milk.
Perhaps some sexy boxers, Mrs. Claus prefers them silk.
- Richard Lamoureux in: Silk For Santa Claus, poetrysoup.com
- The eighteenth century was "The Age of Silk". It was the fabric of power and class command. Gainsborough painted not people so much as displays of silken extravagance.
- Peter Linebaugh in: "The London Hanged: Crime and Civil Society in the Eighteenth Century", p. 256
- The lace man might then sell or put out the purl to the silver-thread-spinner, who, by intertwining purl and silk, made an embroiderer's thread called 'sleysy'. The lace man's shop had equipment consisting of wheels and spindles much like those at a rope-walk.
- Peter Linebaugh in: "The London Hanged: Crime and Civil Society in the Eighteenth Century", P. 229
M - R
- I have been much amused at ye singular φενόμενα [phenomena] resulting from bringing of a needle into contact with a piece of amber or resin fricated on silke clothe. Ye flame putteth me in mind of sheet lightning on a small—how very small—scale.
- I looked at the inchworm dangling from the silk in my hand and said: "Think how nature makes things compared to how we humans make things." We talked about how animals don't just preserve the next generation; they typically preserve the environment for the ten-thousandth generation. While human industrial processes can produce Kevlar, it takes a temperature of thousands of degrees to do it, and the fiber is pulled through sulfuric acid. In contrast, a spider makes its silk - which per gram is several times stronger than steel - at room temperature in water.
- Bill Powers in: Twelve by Twelve: A One-Room Cabin off the Grid & Beyond The American Dream, New World Library, 6 October 2010, P.73
- ...touched the silk thread which the caterpillar makes benignly from the protein fibroin...think of its metamorphosis in its cocoon, a churning of natural juices, enzymes – and out comes a butterfly. Where are the toxics in that?
- Bill Powers in: "Twelve by Twelve: A One-Room Cabin off the Grid & Beyond The American Dream", p. 74
- We are all Adam's children, but silk makes the difference.
- Proverb in: Peter Linebaugh The London Hanged: Crime and Civil Society in the Eighteenth Century, Verso, 2003, p.257
S - Z
- “Do not wear silk, for one who wears it in the world will not wear it in the Hereafter” (5150).
- Queen Elizabeth owned silk stockings. The capitalist achievement does not typically consist in providing more silk stockings for queens but in bringing them within the reach of factory girls in return for steadily decreasing amounts of effort.
- Yuichi Shionoya in: Schumpeter and the Idea of Social Science: A Metatheoretical Study, Cambridge University Press, 19 July 2007
- Well, we realised that we have to move with the times, adapt to change. Also, this is a way of capturing a larger segment of the market. The new designs will mean more takers among the younger age groups, who look for trendy designs and new looks. The older age group will now have something different-looking to add to their existing classic-design collection. ... All these innovations are being done without in any way tampering with the purity and uncompromising quality that has characterised Mysore silk fabrics — including saris — for decades. ... Although we are giving the body of the sari an element of interest with these innovations, we are seeing to it that it doesn't kill the inherent beauty of the fabric.
- This is one element [kasuti-embroidery fusion] I always missed in a Mysore silk saree. So, I had to go for Kancheepuram, Peddapuram [saris] or Banaras when I needed to wear a very heavy-looking sari. Now, I have bought one and even gifted another to my sister-in-law as part of her wedding trousseau.
- Nandana Roy, on Kasuti-embroidery fusion of silk, quoted in: "Modern MYSURU".
Chinese Art & Culture
Clare Hibbert in: Chinese Art & Culture, Heinemann-Raintree Library, 2005
- China has been famous for its silk for thousands of years. The main trade route linking China to the West was even known as the Silk Road. The ancient Romans prized Chinese silk and imported both thread and cloth. The Chinese kept their methods of silk production a closely held secret, and so Westerners were unable to make their own. Knowledge of silk making gradually spread west after two Persian monks smuggled some silk worm eggs out of China in the 6th century C.E. However, China remained the world’s key producer.
- In: p. 30
- Silk production. The ancient Chinese method of silk making, or sericulture, involved hatching many silk moth eggs at the same time. The caterpillars were then kept on bamboo trays and fed hand-picked mulberry leaves. Some cocoons were allowed to develop into adult moths so that they could produce more eggs. The rest were dropped into boiling water, which made each cocoon unwind to produce a single fiber that could be over half a mile (nearly a kilometer) long.
- In: p. 31
- Silk tapestry, or kesi, appeared in China during the Tang dynasty around the 7th century C.E. In silk tapestry, both the background fabric and the foreground threads are made of silk. Tapestry artists favored big, bold designs without repeats.
- In: p.32
- Plain silk fabric can be tie-dyed or printed. Artisans use simple printing blocks to create colourful, repeated designs. For more luxurious results, silks can be hand-painted or embroidered.
- In: p .32
- Summer robes were made from light, cool silk. Those for winter wear were quilted — two layers of silk were stitched together, with a thick layer of warm padding in between. Quilting is still a popular technique in modern China for creating cozy dresses and jackets.
- In: p. 33
- The tradition of painting on silk emerged in the 3rd century BC, with painters producing banners and scrolls. ... Between the 4th and the 10th centuries, silk painters concentrated on human figures. They depicted their clothes and movements with graceful brush strokes.
- In: P. 34
- It [qin] has seven silk or metal strings and a long soundbox, with marks showing the positions of thirteen particular pitches. The qin was a favorite instrument of scholar-poets because its plucked strings create delicate, magical notes.
- In: P. 43
- China's earliest contact with the rest of the world was via the Silk Road, along which Chinese silks were transported through the Middle East and into Europe. In return, traders brought foreign goods, such as wool, glass beads, silver, and gold into China.
- In: p.48
Global Silk Industry: A Complete Source Book
R.K.Datta in: Global Silk Industry: A Complete Source Book, APH Publishing, 1 January 2007
- The history of silk development spans centuries and can be traced along the world's very ancient trade route called the 'Silk Road'. A UNESCO-inspired team trekked this obscure yet historical caravan tract called the 'Silk Road', which began in China, passed through Tashkent, Baghdad, Damascus and Istanbul, and reached European shores. Since the beginning of the Christian era (by 126 BC) silk has been the most colourful of world caravans. Fabulous silks from China and India were carried to Europe through this 6,400 km long road.
- In: p. 13
- It seems sericulture entered Europe during 140-86 BC. India also has long 3000-year history of silk development and at present ranks second to China in multi-varieties of silk production.
- In: p. 13
- China zealously guarded the secret of silk for about 3,000 years and plied a prosperous silk trade with the rest of the world. The merchant navies and the Chaldees carried fabulous silks from China to the courts of Babylon and Nineveh.
- In: p. 15
- In India the silk culture dates to antiquity. According to historians, mulberry culture spread to India by about 140 BC from China through Khotan.
- In: p. 15
- Even though mulberry culture may have come to India overland from China, the references in old scriptures definitely point out that India cultivated some kind of wild silks independently of China from time immemorial. The ancient religious scripture, the Rigveda, mentioned 'urna', generally translated as some sort of silk.
- In: p. 16
- Francis I did all he could to encourage and support sericulture and was the first French king to wear pure silk stockings. After Francis I, both Henry II and Henry III patronized the silk weaving industry, but it was Henry IV who introduced silkworm rearing into France.
- The saga of silk success and shortcomings during the twentieth century is just astonishing. For the first 40 years, until nylon hit the market, the silk trade reached an all-time high in demand, mainly for women's stockings.
- In: p. 23
- ... the important factor affecting the growth of global silk trade during the 1990s was the imposition of quotas by the European Union and the United States on imports of 100 percent silk products. This was done to contain the explosive rise in imports of silk garments, mostly sand-washed, from China and Hongkong, which raised eyebrows among the manufacturers in Europe and the USA.
- In: p. 26
- Though silk production is less than 0.2 per cent of the world textile output, its production base is spread over 60 countries in the world, with Asian nations bagging the lion's share of over 90 per cent of mulberry production and almost 100 per cent of non-mulberry silk. ... India is the world's second largest producer, with a unique output of four varieties of silk – mulberry, tasar, eri and muga.
- In: p. 31
- Sericulture, silk reeling and weaving have been practiced in the ancient trading capital of Shanghai. Shanghai has several modern silk processing plants, and some also serve as captive units of large American buying houses.
- In: p. 35
- Karnataka, earlier known as Mysore, abounds with silk, sandalwood and gold, the three most sought-after natural commodities. Karnataka is now the main mulberry silk producing state in India, contributing about two-thirds of the output. Over 1 million families earn their living by cultivating bush mulberry, rearing silkworms and harvesting cocoons five to six times a year. Bangalore is the silk capital of India, with the headquarters of the CSB and its affiliated research located there.
- In: p. 41
- Silk processing in the EU is concentrated in Milan and Como in Italy, Lyon in France and Zurich in Switzerland; some high quality silk weaving, jacquard and printing are undertaken in the United Kingdom. Italy and France specialize in designer fabrics and scarves from famous fashion houses.
- In: p. 63
- Silkworm rearing, cocoon harvesting. The silk caterpillar belongs to the order Lepidoptera (winged insects), genus Bombyx. The species Bombyx mori, which can be cultivated indoors, produces over 90 per cent of the world output of raw silk used commercially. There are other types of wild silkworms under the genus Saturnidae.
- In: p. 73
- Silk is crystalline, homogeneous in structure, hygroscopic in nature, light in weight, and is the longest and the strongest of all natural fibers. It is soft, lustrous and hygienic, and also has an excellent affinity to dyes. Silk does not catch fire as easily as nylon and wool.
- In: p. 103
- Success of silk processing depends on the quality of silk cocoons, which form an integral part of raw silk production and sericulture... Silk is produced and secreted from the external secreting gland (exocrine gland). It is derived from the ectoderm and appears at about 36 hours prior to the rotation (blastokinesis)... The silk gland can be distinctly divided into three divisions: anterior, middle and posterior... Its composition includes the spinneret and the anterior, middle and posterior divisions.
- In: p. 103
- Spinneret: It is a delicate tube inside the posterior part of the primary stage of the silk gland. It comprises three parts, viz. the spinning area, thread press and common tube. On the dorsal, lateral and ventral sides of the thread press, six sets of muscle fibers develop, which use their expansion, contraction and flexibility to regulate the flow of silk substance, the coarseness of silk and the pressure in silk formation.
- In: p. 104
- The weight of the silk filament is decided by the weight of the cocoon shell and silk percentage of cocoon shell. In the reeling factory, raw silk percentage of cocoons and reeling discount of dried cocoons are used. The weight of the cocoon shell and the uniformity of cocoon are considered important commercial factors in raw silk reeling that are closely related to the raw silk yield to be obtained.
- In: p. 121
- The neatness and some cleanness of raw silk are directly influenced whereas size deviation, tenacity and cohesion of raw silk are indirectly influenced by the breed characteristics of the cocoons. The degree of neatness of raw silk is determined on the basis of incidence of the occurrence of defects which are smaller than those classified as “minor cleanliness defects”.
- In: p. 123
- The silk reeled on small reels is soaked in a permeation chamber kept at low pressure (vacuum up to 400 mm of mercury (Hg)/torr) before re-reeling. The emulsion medium is water with a non-ionic wetting agent and lubricating oil. The permeation of liquor is effected three times to facilitate easy unwinding of silk from the small reels.
- In: p. 137
- Before spinning a cocoon, the silkworm builds a hammock, an anchorage to hold its cocoon. This material is known as floss or blaze (Italian: Shelia; Japanese: Kebab). The quantity is small and the quality is poor, but it can be used for noil spinning.
- In: p. 142
- From Filament to Fabric. Silk seems to have played an important role in the development of loom and weaving technology. Traces of primitive looms and woven fabrics are found in excavations in Egypt, China, India and Peru (2500 to 400 BC)... The silk weavers of China innovated the use of the heddle and the draw loom, a revolutionary development over the primitive tribal loom. India invented a foot treadle for silk weaving, a technical innovation over the ancient loom.
- Raw silk: The silk thread produced by the reeling together of the baves of several cocoons. Raw silk has no twist.
Poil: A silk yarn formed by twisting raw silk. The twist may be very slight or may exceed 3,000 turns per meter (TPM).
Tram: A silk yarn formed by doubling two or more raw silk threads and then twisting them slightly, generally 80 to 150 TPM.
Crepe: Silk yarn made by doubling several raw silk threads and twisting them to very high levels, in the range of 2,000 to 4,000 TPM.
- In: p. 149
- Silk can be woven into all three basic weaves: plain (or tabby), twills and satins. The tabby silk weave produces, among others, taffeta and poplin. Tapestry weaves in silk have been used to weave ceremonial and decorative dresses and tapestries.
- In: p. 154
- Fabrics are degummed and bleached if they are woven with yellow raw silk. The finishing of silk fabrics, like that of other fabrics (cotton, man-made, wool and others), can broadly be divided into mechanical and chemical finishing. The objective of mechanical finishing is to impart or improve certain desirable qualities like drape, fall or handle, feel, stiffness, weight, etc.; but most of the mechanical finishes are only temporary.
- In: p. 159
- Silk sateen: The sateen is usually woven with degummed silk yarn but sometimes there are also fabrics woven with raw silk, which are degummed after weaving. The former is called glossed silk sateen and the latter is silk sateen.
- In: p. 176
- Although the bulk of the world raw silk supply is spun by the domesticated silk moth, Bombyx mori, there are other sericigenous insects which spin cocoons, and their yarn is equally pure silk. These silks are known as wild silk, as differentiated from cultivated 'mulberry silk'. The wild silk moths are abundantly found in remote regions, hilltops and forest interiors in Burma, China, India, Korea, equatorial Africa and Southeast Asia. In fact, according to one authority, there are between four and five hundred different types of wild moths spinning silk cocoons; but only a few of these moths have commercial value.
- In: p. 179
- World raw silk trading: Raw silk is an important international trade commodity, traded at the main commodity markets of New York, Lyon and London. Japan used to dominate the world silk market with her leading position as the top exporter, followed by China and Korea. Since the 1970s, China has taken over the market and Japan has reversed her role, emerging as a leading importer of raw silk.
- In: p. 243
- The USA, which used to be the biggest single importer of raw silk until the 1960s, has now grown into a leading importer of finished silk cloth, reducing its imports of raw silk and waste silk. With rising weaving costs, the USA prefers to import finished fabric rather than raw silk.
- In: p. 246 | <urn:uuid:ba659515-10fe-4225-839b-1b4c36479cf6> | CC-MAIN-2021-21 | https://en.m.wikiquote.org/wiki/Silk | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00177.warc.gz | en | 0.929476 | 6,687 | 2.921875 | 3 |
— H. Sündermann
I. Idea and Civilization
Every great culture, every civilization — every human order of any significance, in fact — has a polar ideology, or mythos, which furnishes the emotional, suprarational foundation for that particular order. The life and destiny of a culture are inseparable from such a nuclear idea. It serves as a formative pole, which during a culture's vital period provides for a unity of political, religious and cultural expression.
There are numerous examples. In ancient Egypt, the singular concept of the ka found its cultural elaboration in the construction of the pyramids. In a similar manner, Taoism combined with Confucianism and Buddhism to form the spiritual core of traditional Chinese culture, just as the cult life of the Japanese revolved around Shinto, and just as Islam furnished the spiritual matrix for a cultural flowering in the Near East during the Middle Ages. Among Indo-Europeans, it was the Vedic tradition which formed the basis for an exquisite Hindu civilization, while a pantheon of Classical gods and heroes presided over the destinies of ancient Hellas and Rome.
If one now turns to the West,* one cannot avoid the conclusion that it is the Christian worldview which stands at the heart of this particular culture.* Indeed, its very symbol is the towering Gothic cathedral. In its art, its architecture, its music, literature and philosophy, the West is pervaded by the omnipresence of Christianity. In the magnificent frescoes of Michelangelo, in the polyphonic rhythms of Vivaldi and Bach, the literary masterpieces of Dante, Chaucer and Milton, the philosophy of Thomas Aquinas, Kant and Hegel — in all of this, the heavy backdrop of Christianity looms unmistakably against the cultural horizon.
Even figures such as Shakespeare, Rembrandt, Mozart, Beethoven, Wagner and Schopenhauer — even Voltaire and Nietzsche! — whose creative daemon transcended Church dogma in noticeable fashion — even they are witness to the ineluctable presence of the Christian idea as a cultural fact. And even if one contends that the works of these personalities had nothing to do with Christian doctrine as such, but derived their ultimate inspiration from other sources, the very fact that such an argument is put forth at all constitutes the most conclusive proof that Christianity is, indeed, the mythos of Western culture, the core idea around which all cultural expression revolves. For even when its fundamental tenets have been challenged and disbelieved, it has continued to qualify the cultural milieu and furnish the central reference point for thought and action.
It is not without significance that those two major languages of Western thought — German and English — should have received their modern form from a translation of the Christian Bible; that the main function of the first Western universities was to teach Christian theology; and that natural science — that domain so uniquely fascinating to the Aryan intellect, which has come to challenge the very foundations of traditional faith itself — began very humbly as the quiet, conscientious study of the world of the Christian creator. All of this is but eloquent testimony that the Christian worldview does, indeed, form the spiritual matrix — the nuclear center — of Western culture.
II. Christianity and the West
When Christianity in its Nicene form first made its appearance amongst the Germanic peoples of Northern Europe, the future progenitors of the West greeted the new doctrine with considerable suspicion and less than full enthusiasm. For their part, they felt more comfortable with their own indigenous gods and beliefs than with the strange new import from out of the East. Even with the accretion of Hellenistic and Roman elements during its migration from Judea, Christianity — with its underlying Oriental/Semitic character — remained essentially alien to the personality and disposition of the proud Teuton. Within the soul of our ancient forebears, the very concept of original sin was perceived as unreasonable and perverse, just as calls for pacifism and self-abnegation were regarded as demeaning to their inherent dignity.
The inborn religiosity — Frömmigkeit — of these men of the North involved values of personal honor and loyalty, upright manliness, courage and heroism, honesty, truthfulness, reason, proportion, balance and self-restraint, coupled with pride of race, a questing spirit and a profound respect for the natural world and its laws — ideas representative of a worldview which the early Christian missionaries found incompatible with their own doctrine and which they proceeded to condemn as heathen.
If they displayed but little inclination to embrace the new faith, these early Teutons were by the same token not unaccommodating in their attitude. With characteristic Nordic tolerance in such matters, they were perfectly willing to permit the peaceful coexistence of a foreign god alongside the natural deities of their own folk.
For its part, however, the intruding new doctrine — impelled by a hitherto-unknown Semitic spirit of hatred and intolerance — commenced to demand the elimination of all competitors, insisting that homage be rendered to but one jealous god, the former Jewish tribal god Yahweh, or Jehovah, and to his son. Alien in its doctrine, the Creed of Love now felt obliged to employ equally alien methods to achieve its purposes. Under the auspices of the sword and accompanied by mass extermination, Christian conversion now made great strides where formerly peaceful persuasion had failed. In this manner, for example, were the tender mercies of the Christian savior disclosed to Widukind's Saxons and Olaf Tryggvason's Norsemen. If it was hypocritical and inherently contradictory, it was nevertheless effective, and all of Europe was thereby saved for Christianity.
* * *
It would be a mistake, however, to assume that only through force and violence did Christianity prevail. In the propagation of its doctrine and the fulfillment of what it considered to be its holy mission, the Church displayed amazing flexibility and suppleness. It was not loath, for instance, to adopt and adapt for its own purposes as it deemed appropriate certain aspects of ancient heathendom, particularly those which were most firmly footed in the folk experience of our early forebears. Not only did this serve as an aid in the conversion process, making the Christian notion more palatable to the Nordic prospect, but it was also useful in inducing greater conformity and submission on the part of those already converted.
Especially during the reign of Pope Gregory did this policy receive definitive sanction. Former heathen holy places were appropriated as sites for the new chapels, churches and shrines. The Northern winter solstice celebration, Yule, was arbitrarily selected as the official birthday of the Christian savior. The spring celebration of reawakening Nature, Easter, was designated as the time of the Christian resurrection following the Jewish Passover. The summer solstice celebration, Midsummer, was transmogrified into the Feast of St. John, accompanied by the traditional rites of fire and water. In similar manner were other ancient festivals taken over and transformed: Whitsuntide, or High May, became the Day of Pentecost; the Celtic festival of Samhain became All Hallows' Eve; and Lent, acquiring Christian coloration, recalled a former season of the same name.
Christian adaptation was not confined to sacred days alone, however; it extended to heathen deities, customs and symbols as well. A multiplicity of saints and angels, for example, came to replace the various gods and heroes of pre-Christian times. Ritual infant-sprinkling became Christian baptism, or "christening," just as the salubrious effect of holy water generally was quickly discovered by the new faith. Similarly, the lighted tree and evergreen decoration at Christmas time were taken over virtually intact from previous heathen custom.
Even the Cross itself was adapted from pre-Christian sources, replacing the Fish, Dove and Star as the emblem of the faith — a fact which led to considerable distress and controversy when it was first introduced in the early Church!
And so, in addition to those Hellenistic, Roman and Babylonian elements which already overlaid an original Jewish nucleus, a Northern component was now added to the spiritual mélange which was to become medieval Christianity. With all of these accretions, however, it was essentially the outer form of the faith which was affected and modified; the inner substance of the doctrine retained its basically Oriental/Semitic character. If the new creed was not particularist like its Judaic parent, this had to do with its conceived leveling function among non-Jews. For what had originally been an exclusively Jewish sect had become — at the instance of the erstwhile Pharisee Saul/Paul — a universal creed directed at the Aryan world, denying the validity of all racial, ethnic and personal distinctions.
Thus it was, that out of this alien germ, there emerged the faith which was to form the spiritual mold of Western culture.
* In referring to the West, we mean that manifestation of European culture which emerged following the collapse of the Classical civilizations of Greece and Rome and which assumed definitive form in the time of Charlemagne around AD 800.
III. The Decline of Christianity
The imposition of Christianity on the Aryan peoples of Northern Europe had one lasting effect. It resulted in an inner tension, a disquiet—an angst—which has been a protruding feature of Western culture from its inception. Throughout the history of the West, there has always existed a soul struggle keenly felt by the more perceptive spirits of the race, occasioned by the contradiction between the inverted values and tenets of an Oriental/Semitic belief system on the one hand and the natural religious feeling of Nordic/Aryan man on the other. If the former furnished the ideological matrix of the culture, it was the latter which provided the creative inspiration, the divine spark. Indeed, the greatest moments of Western culture as a manifestation of Aryan genius—whether expressed in a specifically Christian or extra-Christian form—occurred despite the stricture of Church dogma, rather than because of it. Dante, Chaucer, Spenser, Shakespeare, Milton, Goethe, Schiller, Shelley, Wordsworth, Keats, Byron, Leonardo, Michelangelo, Raphael, Botticelli, Dürer and Rembrandt all testify to this, no less than do Vivaldi, Bach, Handel, Haydn, Mozart, Beethoven, Wagner and Bruckner.
As we have seen, the external character of Christianity was greatly modified in its metamorphosis from a small Jewish cult into the mighty religion of the West. The medieval institution known as chivalry, in fact, with its refined honor code—which save for its Christian trappings more properly reflected the outlook and mores of a pre-Christian time—resulted from this very process, and provided a modus vivendi for opposing spiritual interests during the Middle Ages. Thus, through a mutual accommodation of sorts was the underlying contradiction largely contained. And yet despite any institutional adjustment, the unease deriving from an alien idea remained latent within the fabric of the culture.
The social and intellectual response to this inner tension varied. For their part, the kings, emperors and other secular rulers tended to treat the matter with cynical detachment, accommodating and offering resistance as political requirements dictated.
Among scholars and thinkers, on the other hand, there were those who, like Giordano Bruno, rose in open revolt against Church dogma. More often, however, the stirrings of disquiet were manifested in subtle attempts to orient Christian doctrine toward innate Aryan religiosity. This was particularly true of the mystics of the Middle Ages, like Scotus Erigena, Amalric of Bena and Meister Eckhart, who—going beyond the theology of the Church—looked inward into their own souls and to Nature itself to discover the kingdom of God.
It was with the Renaissance, however, that there appeared the most significant movement to challenge Church doctrine—a movement which would, in fact, set in motion an irreversible chain of events leading ultimately to the discrediting of that very doctrine as the core idea of a culture. Now, for the first time, was the Promethean impulse able to break out of the clerical mold. Art came to express, not merely a sterile Semitic outlook, but the feelings of a Northern racial soul—a most notable development, which announced that creative vitality had stepped beyond the mythic prescriptions of the culture. The entire Judeo-Christian cosmology was called into question by new discoveries in the natural and physical sciences. Exploration across unknown seas commenced.
Perhaps the most revolutionary single development of this time, however, was the discovery of movable type by Johann Gutenberg, which enabled a much wider circulation of knowledge—knowledge other than that bearing an ecclesiastical imprimitur, knowledge transcending the basic ideology of the culture.
The most important consequence of the Gutenberg invention is to be seen in the Protestant Reformation, to which it was a contributing factor and whose development it greatly influenced. Up until the time of Martin Luther, the focus of Christian authority was the Papacy, whose word was unquestioned in matters of faith and dogma. Now, with the great schism in Christendom, a direct challenge was presented to ecclesiastical authority. It certainly was not, of course, the intent of Luther and the other dissenters to undermine or eliminate the Christian faith; rather the opposite. They merely wished to reform it. And yet, by challenging the one unifying institution of Christendom and causing a split in Christian ranks, they inadvertently opened the door to disbelief in the Christian mythos itself.
To replace papal authority in matters religious, Luther proposed to substitute the authority of the Book; and so, with the prospect of employing the Gutenberg invention, he undertook the prodigious task of translating obscure Hebrew scriptures into the German language—to the everlasting misfortune of Christianity. It is ironic that in his quest for spiritual freedom, the Great Reformer should have rejected the despotism of the Papacy only to embrace the tyranny of the Torah and the ancient Jewish prophets. The arcane texts which had remained on musty shelves behind cloistered walls arid accessible only to priests and theologians now became universal property. And now, instead of one single authority in matters of Christian exegesis, everyone—and no one—became an authority. Out of this there could be but one result: contradiction and confusion.
The effect on intelligent minds, of course, was devastating. For here it was now possible—in the best Talmudic fashion—to prove mutually exclusive points of view by reference to the same Semitic texts. Not only that, but critical examination of biblical literature gave rise to serious doubt concerning the veracity and validity of the subject matter itself, not to mention the peculiar mentality of its various authors. For the first time, perceptive minds could observe the obvious contradiction between empirical reality and what was claimed as holy writ.
Gradually there grew the inner realization that the faith itself was flawed, and creative genius began to look beyond the ideology of the Church for inspiration and direction. Even in those instances where Christian motifs continued to provide the external form for artistic expression—such as in the works of Bach, Corelli and Rubens, for example—the vital daemon which spoke was clearly extra-Christian and of a religious order transcending Church dogma.
And so even the Counter Reformation, and the stylistic mode it inspired, succumbed to widening skepsis. A lessening of traditional belief had set in, and Aryan creativity now began to look increasingly in other directions for the divine. At the intellectual level, philosophy—which had long separated itself from theology—pursued its own independent quest for truth, while at the artistic level a succession of stylistic periods— impelled by irrepressible inner tension—sought ever newer forms of expression. Thus, the Baroque, having exploited all of its possibilities, gave way to the Rococo and the Classical, which in turn yielded to the Romantic of the 19th century and to the Impressionist, which has now been succeeded by the Modern era—which concludes the historical experience of the West.
For the modern Church, this poses an impossible dilemma. The more it adheres to its fundamental doctrines, the more preposterous they must appear and the quicker will be its demise. On the other hand, once it attempts to reconcile itself with the findings of science by reinterpreting and redefining its basic tenets, it automatically concedes its moral position and its very reason for existence as an arbiter of truth.
The fact is that Christianity, as the dominant ideology of the West, has failed. It has exhausted all of its historical possibilities. No longer does it carry the emotional, mythic, polarizing force necessary to direct the spiritual life of a culture. Indeed, it is a spent cultural force no longer capable of adapting successfully to new organic realities.
All of this can be readily seen in the emptiness and sterility of modern cultural expression—reflecting the absence of any real spiritual values—as well as in the secularization of the Christian idea itself into liberal democracy and Marxism. Especially is this to be noted in the self-devaluation process of ecumenism and interfaith/inter-ideological dialogue, which constitutes the clearest concession by Christianity that it has failed and no longer has anything vital to offer. For once the Church admits that its doctrines are coequal with those of the nonbeliever, then what reason is there to be a believer?
It is not without significance that while the influence of Christianity is waning in the West, it is—through the sheer force of demographic pressure—gaining souls and expanding among nonwhites. Not only is this particularly true in Latin America, but also in Africa and—to a lesser extent—in Asia as well. This development has, of course, not escaped the notice of the Church, which—with obsequious interracial posturing and attempts to divorce itself from its historical Western setting—has chosen to redirect the Christian appeal toward the colored world as the primary area of its interest and concern. In abandoning its Western role, however, Christianity has announced its conclusion as a cultural force. And so, whatever it may have traditionally represented for past generations of Europeans and North Americans no longer obtains.
Accordingly, it would be a mistake to assume that the Judeo-Christian idea has anything to offer the white peoples in their contemporary struggle for survival—that it might in any way be capable of addressing the vital needs and concerns of endangered Aryan life on this planet. What now exists in the name of Christianity—apart from certain nostalgic, retrograde attempts to revive a historical corpse in a world of uncertainty and personal insecurity—is nothing more than fossil formalism and sterile nominalism without genuine vitality or substance, reflecting the marginal relevance of this particular ideology in today’s society. For in the face of modern realities, the Christian worldview simply has nothing more to say. It has fulfilled its historic role; it is now moribund. At best, it is irrelevant. At worst, it is an avowed enemy, a deadly menace to the Aryan race and its survival.
It may well be argued that the worst consequences of such ideological and spiritual error were far less conspicuous before the Second World War. Does the same hold true today, however, when the final effects of that error can be plainly seen? For well over a millenium now, Christianity has held a monopoly as the self-proclaimed custodian of the spiritual and moral well-being of an entire cultural order—for which one must reasonably assume that it has accepted concomitant responsibility. What, then, are the fruits of its spiritual regime? We see them all around us. They are the symptoms of a diseased civilization: decadence, degeneracy, depravity, corruption, pollution, egoism, hedonism, materialism, Marxism and ultimately—atheism. Yes, atheism. By destroying whatever natural religious feeling once existed in the hearts of our people and substituting alien myths and superstitions, it must now bear full responsibility for the diminished capacity for spiritual belief among our folk.
It will perhaps be objected that the Church itself is opposed to all of the above indesiderata. I am sorry; the responsibility for what has been claimed as a divine charge cannot be so easily evaded. Words aside, these happen to be the actual results of its earthly reign.
The Promethean spirit of Aryan man, for its part, must now look in other directions.
THE FOREGOING are the first three sections of this unique treatise. Four additional sections include: "Twilight of the West"; "The Tragedy of 1945"; "Worldview of a New Age"; and "The Faith of Adolf Hitler."
Faith of the Future was originally published in the Spring 1982 issue of The NATIONAL SOCIALIST, a publication of the World Union of National Socialists, under the title "Hitlerism: Faith of the Future" and subsequently issued in booklet form under its current title.
The historian, Dr. Peter H. Peel, spoke glowingly of the essay as "pure beauty and truth and an insight made comprehensible to all who are receptive.
Now I can see that there is no incongruity in a vision of Hitler as both man and divine archetype — the instrument of Aryan destiny."
One Michigan college student declared: "Never before in such a short article have I found such a pristine and powerful message ... The many pieces of life's jigsaw-puzzle have come together."
Aryan ideologue and mystic Savitri Devi wrote: "Nothing ... could have given me as much joy as your outstanding, objective and brilliant article ... I got from you, through your prophetic vision of tomorrow (and your dispassionate description of today) an immense, more-than-personal surely, but also personal feeling of victory ... For a time I lost sight and consciousness of [my surroundings] and felt all around me, spreading over the old and new continents and the smoking ruins of the Old Order, those I called for with all my heart in 1945.
And from their midst I felt the strong, unfettered youth of tomorrow rushing forth ...
Your beautiful article haunts me. I have read and re-read it several times, always with renewed elation and feeling of victory."
For a copy of Faith of the Future in its entirety in booklet form contact:
NS Publications, PO Box 188, Wyandotte MI 48192 / email@example.com.
http://www.theneworder.org/national-soc ... he-future/ | <urn:uuid:7b91f958-d63b-41ab-aa7c-48ac8eedeb73> | CC-MAIN-2021-21 | https://whitebiocentrism.com/viewtopic.php?p=1831 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00209.warc.gz | en | 0.957419 | 4,656 | 3.046875 | 3 |
In 1961, artist and activist Gustav Metzger earned his fame before a London audience as he applied to his canvas not paint but hydrochloric acid. By then, he’d written several manifestos on “auto-destructive art.” Later, Metzger created “several large-scale self-destructive works, such as a sculpture of five walls, each consisting of 10,000 geometrical forms, that would disappear as a computer randomly ejected the forms, one by one, over a period of ten years,” according to the New York Times. In his largest project, “The Years without Art—1977–1980,” Metzger asked artists throughout the world to stop making art, galleries to stop exhibiting, and magazines to cease their art coverage. Nobody joined him. He died in 2017.
An art of self destruction comes naturally on the heels of the mid-twentieth-century avant-garde. Metzger, Yoko Ono, John Latham, Chris Burden, and others: their early works are a consequence of Rauschenberg’s Erased de Kooning, of Cage’s 4'33", of Duchamp’s Fountain. This chain of influence is what critic Boris Groys calls “the speed of art.” Retrospectively, Duchamp’s urinal is the Trinity Test of conceptual art, at which point the time between what is thought and what is said begins rapidly to diminish toward total collapse. “Today,” Groys observes, “it is enough for an artist to look at and name any chosen fragment of reality in order to transform it into a work of art. In this case, art production has virtually achieved the speed of light.” To have arrived here as a young artist in the 1960s would have meant despair. As Jed Perl writes in New Art City, a study of downtown Manhattan’s avant-garde, “This fast-forward attitude was part of a deeper malaise,” a kind of “history sickness, for there no longer seemed any possibility of considering art except in terms of historical development or evolution.” It’s understandable, then, to look at what you’ve been taught is art and wish to destroy it. The speed of light is not the speed of life.
In 1970, Bas Jan Ader created Light Vulnerable Objects Threatened by Eight Cement Blocks—threats Ader himself carried out by cutting the ropes that suspended these blocks, crushing a vase of yellow flowers, a carton of eggs, a birthday cake, and so on. Largely, art is about time. The majority of artworks are a distortion, stretch, diminution, or reversal of time, if not simply a meditation or reflection on time. Traditionally, Western artworks are supposed to outlive their creators: they are our secular souls, lingering after the body dies. Conceptual art thrives on this expectation, shortening art’s lifespan until it dies right there in the room. Destruction is a refutation of art’s immortality, its sacredness. Silence is a refutation of art’s presumed ability to instruct. To self-destruct or fall silent is to re-up Nietzsche’s ante: art too is dead.
Silence has always hidden a pearl of destruction. It carries a wish, as Susan Sontag writes, “for a perceptual and cultural clean slate.” Like art and its “history sickness,” language “is experienced not only as something shared but as something corrupted, weighed down by historical accumulation.” Virginia Woolf called words “full of echoes, memories, associations.” As if caught in spider silk, one word sends itself trembling to others caught alongside it, ensnared by centuries of speaking.
Language, Sontag wrote in “The Aesthetics of Silence,” is “the most impure, the most contaminated, the most exhausted of all the materials out of which art is made.” If an artist can’t dispense with language, silence can present itself as an absence of meaning, a frame without content. Having hit the “light speed” of Duchamp’s Fountain and the critical mass, say, of Joyce’s Ulysses or Proust’s Search, artistic methods of “impoverishment and reduction indicate the most exalted ambition art could adopt.” Here is hidden “an energetic secular blasphemy: the wish to attain the unfettered, unselective, total consciousness of ‘God.’” If speech is kinetic, silence is potential: “Unmoored from the body, speech deteriorates. It becomes false, inane, ignoble, weightless. Silence can inhibit or counteract this tendency.” Speech seems mortal; silence, with its mystery, offers renewed faith in eternity. Artworks of silence and destruction create absence. They tear open negative spaces where breath can hang between our words.
While it doesn’t help me feel like an artist, using social media is an exercise in authorial control. With embellishments and exclusions, careful modulations, and precisely applied filters, these ongoing projects are selves not quite ourselves. My Twitter persona is not an accurate representation of my personhood. My Instagram account is a curated offering of the rare moments I find myself or my life to be glamorous. Mistaking one’s persona for their personhood is to mistake the narrator of a roman à clef for its author.
As artists are accountable for what their art says, users of social media are accountable for their content—for their statements. But content is largely irrelevant to the form of social media. As a style, these platforms offer a near total divorce between form and content. Apart from deciding to use Twitter, the form is beyond our control: it’s a tweet, or a series of tweets, posted to a timeline of other tweets. So, while Twitter can seem like a creative outlet, what its users really provide is content. To tweet, then, is to participate in someone else’s—something else’s—project. What social media platforms have done is create an ongoing work of rigid form in which content continuously creates itself; and rather than do so via randomized algorithms, like Metzger’s sculptures, these platforms use human beings. Twitter, Inc. owns “our” tweets. To use Twitter is to be its self-creating content. It is to be restricted and driven by the form of this project: inexhaustible engagement with advertisements. As its content, our individual meanings are irrelevant and discordant. Together, we resemble not so much a symphony as unbearable noise. We are the babble before which one stands transfixed: an ideal condition for creating anxiety, which every American knows can be relieved by spending money.
Amid noise, silence is a form of resistance. Sontag describes an eerily familiar 1960s:
In an overpopulated world being connected by global electronic communication and jet travel at a pace too rapid and violent for an organically sound person to assimilate without shock, people are also suffering from a revulsion at any further proliferation of speech and images. Such different factors as the unlimited “technological reproduction” and near universal diffusion of printed language and speech as well as images (from “news” to “art objects”), and the degeneration of public language within the realms of politics and advertising and entertainment, have produced, especially among the better-educated inhabitants of modern mass society, a devaluation of language . . . And as the prestige of language falls, that of silence rises.
A cliché is that silence speaks louder than words, which is true only when the “prestige” of silence outgrows words—of which social media create a surfeit. Like avant-garde art-making in the 1960s, posting a tweet or a photograph in the twenty-first century can seem a humiliating act of superfluity.
Like anyone, I don’t want to be superfluous. Physically, psychically, socially: I don’t want to see myself as insignificant or small or forgotten. That would be the rust at the metal’s edge—and what then?
To call oneself tired amid this politics is to echo an old meme, but I am so tired. In all the communities I’ve found via social media, what used to stimulate me—what used to make me feel like part of these communities—is harder and harder to pick out from the punditry. As news events beyond one’s control provide opportunities for personas to define and refine themselves—to situate themselves along axes of compassion and intelligence—the more these personas conform to what a project like Twitter or Facebook encourages a persona to be, and the less of the original person one hears through the static. Being oneself on these platforms seems increasingly impossible, so rigid is the form that restricts its living content. While social media are not the art market, it’s easy to feel its echo: existing in this perpetual now, trying simultaneously to understand it, predict it, and shape it, one loses autonomy. In a neoliberal society, social media transform what should be one’s own, private capital—an authored persona or “brand” used to further one’s career or social standing—into capital of its own. It grows its capital by controlling how it behaves and by monitoring which investments generate the most profit.
Art itself has ever been vulnerable to this. Writing of kitsch and the avant-garde in the 1930s, Clement Greenberg observed that, despite its subversiveness, “the avant-garde remained attached to bourgeois society precisely because it needed its money . . . The true and most important function of the avant-garde was not to ‘experiment,’ but to find a path along which it would be possible to keep culture moving in the midst of ideological confusion and violence.” Like the expensive arms race that followed Oppenheimer’s Trinity Test, the avant-garde embarked upon a path of increasingly shocking and ideologically powerful detonations at enormous profit to gallery owners, art collectors, museums, and even, on occasion, artists. Hence, later, a trend toward resistance via silence, withdrawal, negation, and asceticism.
Lately, I’ve found it satisfying to delete what I call myself. Over the last several months, I’ve deleted the majority of my tweets, the most vulnerable being those that express joy, despair, frustration, or excitement. In the noisy depersonalization of social media, anything that sounds too much like me feels humiliating not only to post but to have posted, to have expressed at some point in the past. It feels humiliating to have wanted to express positive feelings about my face or my body by sharing images. It feels humiliating to have tweeted “I’m proud of my novel” on a platform where one is supposed to tweet something scathing about the president. These platforms—tailored toward distilling an ideal, fictional self from the messiness of the “real” self—make it easy to delete images, ideas, opinions, and desires. Pictorial representations of significant moments in our lives—as well as any comments or reactions associated with those moments, which remind us how the images of our friends felt about our image-lives at that time—can be thrown away, without the option for recovery. As with silence, there’s an ineffable sacredness in deletion, a kind of euphoria that’s unlike any euphoria I’ve received from followers or friends paying attention to what I’ve decided to express.
I’ve since realized it’s the same euphoria I felt as a teenager, when I cut myself.
As I said: the rust at the metal’s edge. The pleasure I’ve found in deletion is to delight in hurting some part of myself, even if only a persona. These new scars, too, are as visible as those on my arms: the severed @’s and broken threads, dead links, blank profiles, comments that reply to a void, and other digital detritus. But this means I wasn’t alone. There were other people who commented, who interacted, which means they carry memories of how they experienced me or my personas in time, even if they’re no longer able to verify that history. Enacting as we do so much of our lives online—living as we do at least in part through a curio of personas—one’s self blends so seamlessly with a consciousness of branding that these images, interactions, and information-events now constitute a substantial part of who we are, online and off. Let me show you this tweet, you say to your friend over dinner, or Earlier today, on Instagram. . ., and so on. Deleting these parts of my life, I am enacting a conscious pattern of self-harm; and as with self-harm, my goal is simple. I want someone to notice. I want to be missed. I want you to say sorrowfully, Where did you go? because it’s your sorrow I’m after. It’s your grief I’m trying to create.
No, I don’t want to be superfluous, which is terrible because everyone, ultimately, is superfluous. It’s one of the human intellect’s many cruelties to have deduced this—to see so clearly, and long prior to death, that one day none of this will matter, none of us will matter. But then we have memory: “a substitute,” in Joseph Brodsky’s words, “for the tail that we lost for good in the happy process of evolution. It directs our movements.” Memory can be a defense against suicide: one has already made it this far—just look at all this proof behind us! Cantilevered as we are over the mirrored waters of death, the past’s weight provides our balance; we can drink right from despair, and memory holds us back. At least until we forget, and in we go with scarcely a splash.
Targeting memories can be a precursor to suicide, a giving of permission. Common to suicidal ideation is the imagination of grief, of people missing you. Common to borderline personality disorder is an indignation that people around you seem happy. Where is your pain? the sufferer wants to know, and will often create that pain in those closest via cruelty or self-harm. These are attempts to communicate from someone who cannot articulate, but only replicate, their pain. These are reactions of traumatized persons.
Indignation and resentment are modes of social media, both of which amplify users’ anxiety. “If you’re not upset about X, then you’re not paying attention” is a common rebuke. Choosing to express joy or excitement while a tragic meme is unfolding—a shooting, say, or anything the president says—can leave you ignored, muted, or shamed. In moments like these, the sheer noise of contemporary life is unbearable, and a traumatized individual concludes, If I can’t contribute by being or speaking, I will contribute by erasing, by deleting.
To be fair, the indignant person is right. One should be upset, and not only about X but variables A through Z. One should be upset that a terrorist group of white male supremacists calling itself a political party has seized the largest military and economic power on the planet, and that they are using that power to commit climate genocide against those doomed to be incinerated or drowned once global temperatures rise beyond repair, and that, in advance of this, they are slamming the gates shut to the refugees these disasters will create. The most powerful person on the planet is a vicious, cruel idiot with a heavily armed cult of supporters who’d happily take to the streets and murder people, should he drop a hint. Language itself is demeaned daily, and not only by political terrorists but by their opponents and by journalists, who parrot the vocabulary coined by these terrorists. Every day is traumatically noisier, and individual voices grow indecipherable. Social media timelines begin to resemble a scene in Sarah Kane’s Crave:
C: (Emits a short one syllable scream.)
C: (Emits a short one syllable scream.)
B: (Emits a short one syllable scream.)
M: (Emits a short one syllable scream.)
B: (Emits a short one syllable scream.)
A: (Emits a short one syllable scream.)
M: (Emits a short one syllable scream.)
I wish it were funny, but the pain and trauma in her work undoes that. As in the first half of the twentieth century—among the increasing “ideological confusion and violence” Greenberg describes—a new iteration of the avant-garde is seeking a way forward, a continuity. As before, acts of withdrawal, refusal, silence, destruction, and deletion offer themselves as meaningful forms of expression; and as before, it’s difficult to draw the line between what these forms say in our art and what they say in our lives. As before, this is a spiritual crisis. It’s easy to forget, or to lose faith, that art can still mean anything. That one’s life can still mean anything.
Sarah Kane committed suicide on February 20, 1999.
In The Myth of Sisyphus, Albert Camus writes of the human mind’s craving for meaning: “To understand is, above all, to unify. The mind’s deepest desire, even in its most elaborate operations, parallels man’s unconscious feeling in the face of his universe: it is an insistence upon familiarity, an appetite for clarity.” Suicide, in a chaotically violent world, can be a confession—“that life is too much for you or that you do not understand it.” Trauma is the scratch in life’s record that keeps skipping, looping its victims back to that moment; the temptation to lift the needle and go quiet altogether often seems more reconciliatory, more complete, than that little shove one needs to keep playing. When we don’t sound how we want—when we can’t say what we want—it’s easy to believe that silencing ourselves forever will be more meaningful than trying to speak. Speech itself becomes vulgar, even polluting, and to say I feel lonely—to say, I’m afraid of what’s going to happen to us—can sound laughable, pedestrian, or even a violence in and of itself. Of course you’re lonely. Of course you’re afraid. Who isn’t?
Camus—like Sartre, like Adorno, like Arendt and even Sontag—wrote from a place of extraordinary societal trauma. But he did write, and so did countless others who saw the unseeable, who witnessed the unbearable. After fascism and the Holocaust, another spectacular horror, as Sontag observed, pushed human life and language to a new absurdity: the known threat of “collective incineration and extinction which could come at any time, virtually without warning.” After such trauma, yes, the latter half of the twentieth century has its suicides, silences, abstentions, and self-mutilations, but so too is there speech: its art and literature, its sciences and humanities, and the laughter, the love and language, of its families, its individual lives.
Trauma—that scratch that halts the music—is a wound of the spirit. It violates meaning. It humiliates it. The pollution of language is particularly hurtful, as language is the winged breath between one person and another that proves connection is possible. The ancient Greeks imagined that words traveled, on air, from my soul into yours, and to demean language is to mutilate and hobble it. It is to punish the god in each word, the Zeus in daylight and journal, the Persephone in every person still searching for flowers.
Human spirituality is a kinetic adventure. The OED adds approximately a thousand new words every year. These are the creations of individual human beings, who have always sought God and re-alchemized the love of being alive. This is and has ever been the province of art, of worship, of philosophy. Each, Camus writes, “always lay[s] claim to the eternal, and it is solely in this that they take the leap.” What I feel, being alive in a time of such shattering trauma, is a grief so immense I can’t articulate it. I feel alone in this, but I know I am not. Foolishly, I overlooked the grief in others, and sought to create harm by amplifying a grief they already felt. To do this, I used my personhood as a weapon of communication, rather than my art as a style of communication. Like language, art is made of gods, not mortals; and it is there, in art, that silence and deletion can truly speak, that the imagination of grief can offer a glimpse of meaning. I, like everyone I know and love, am mortal, and to use my body and its images—to use my personhood and its variations—in the replication of grief is a violence done to me and felt by others. It mistakes life, which is linear and precious, for art, which is endlessly repeatable. To do this is to internalize and perpetuate trauma, to let the needle scratch, and scratch, and scratch, and scratch. I’d rather hear the music, or what it meant to hear it before it fell silent. My grief is real. Your grief is real. We owe it to one another to say, This hurts, in a way each of us can understand, to offer our condolence—or, as the gods hidden in the word inform us, com+ dolore: an invitation to grieve together. | <urn:uuid:0fe7eee8-6a87-4625-a476-5646699960ed> | CC-MAIN-2021-21 | https://triquarterly.org/issues/issue-156/deletism-and-imagination-grief | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00457.warc.gz | en | 0.948521 | 4,686 | 2.53125 | 3 |
1934 Hallock Old Arena
The availability of indoor ice facilities has played an important role in the development of ice hockey in Minnesota. Those communities that have possessed advantages of indoor rinks consistently have developed better programs and players in most programs that have had to rely on outdoor ice.
In late 1892 the ice polo teams in St. Paul outfitted four indoor roller rinks with natural ice. They were as follows: Jackson, Summit, Wigwam and Exposition, all located near downtown St. Paul. These installations are believed to be the first buildings in the state with natural ice. During the winter of 1893, St. Paul ice polo teams played many of their contests at these rinks.
On February 7, 1893, the Glen Avon Curling Club Rink in Duluth hosted a game of ice polo in which St. Paul Henriette's defeated the Duluth Polo Club 2-1. The ice surface at the Duluth rink measured 126' x 80'.
In the fall of 1894 Hallock, located in northwestern Minnesota, constructed a wooden indoor skating rink. On January 8, 1895, two Hallock teams played a hockey game in the newly built rink that may have been among the very first hockey games played in Minnesota or the country. These two groups decided to play a game of hockey when the speed skaters who were scheduled to entertain the crowd did not show up. By 1907 four additional enclosed rinks had been constructed in Hallock. With the exception of the 1907 building, which was used through the 1915 season, these early rinks usually lasted only a year or two before being dismantled. In 1897 nearby Stephen and Pembina, North Dakota, built indoor skating facilities. Also in 1897, Crookston, located 90 miles to the south of Hallock, erected in indoor rink. In Drayton, North Dakota, a large enclosed rink measuring 100’ x 62’ was constructed in 1899. Most of these rinks were small, often 50’-60’ by 150’-160’ in size. Carnivals, hockey, curling, speedskating races and public skating were among the events that were held at these rinks.
By 1899 ice polo, which preceded ice hockey and had been a very popular winter sport in the late 1880’s and 1890s in St. Paul, Minneapolis and Duluth, had been replaced in popularity by ice hockey, a game better suited to a smaller size ice surface.
Records indicate that the first game of hockey played by a St. Paul team on an indoor ice surface occurred on January 6, 1901. The game was played at the Star Rink, located at 11th Street and 4th Avenue South in Minneapolis with St. Paul defeating Minneapolis 3-1. The Star Rink was a roller skating facility and a natural ice surface was installed in the building during December of 1900. In addition to the Minneapolis Hockey Club, North and Central high schools used the rink for practice and games. Two seasons of hockey and skating activity ended the use of the Star Rink as an ice rink.
The first indoor hockey in skating rink to be built on the Mesabi Iron Range was constructed at Eveleth by Andy O’Hare of Winnipeg in the fall of 1902. Located at the south end of Grant Street, the wooden building possessed an ice surface of 75’ x 150’. The first hockey game was played in the structure on January 23, 1903, when Two Harbors defeated Eveleth 5-2. World-renowned skater Norval Baptie of Bathgate, North Dakota, and John Nilsson of Minneapolis, the North American speed skating champion of 1896 in 1897, thrilled Eveleth audiences with their performances in the building. The building was used as a rink for only a few seasons after which Eveleth was without an indoor rink until construction of the recreation building in 1919.
The second building in Minneapolis to be employed as a hockey rink was the Broadway rink, located at Broadway and Washington Avenue in North Minneapolis. This building was used for hockey during the season of 1904-1905 but after one season of use it ceased to operate as an ice rink. During the first decade of the present century, hockey grew rapidly in St. Paul, Minneapolis, Duluth and northwestern Minnesota. St. Paul was especially active during this period, maintaining about 10 outdoor rinks and icing about three dozen teams of varying ages. Through this early period the progress of the game was hampered by the lack of suitable indoor playing facilities. Hockey in the Twin Cities was given a big boost when the Hippodrome at the State Fairgrounds in St. Paul was made available to skating and hockey in 1912. The use of the Hipp paved the path for the development of a strong St. Paul team composed of such outstanding local players as Moose Goheen, Tony and George Conroy, Emy Garrett, Ed Fitzgerald, Everett McGowan, and Cy Weidenborner. These teams competed favorably with the best in Canada and the United States. The Hippodrome proved to be a very popular facility and was used continually for skating and hockey until it was replaced by the present Coliseum in the 1940s. The Hipp had one of the largest sheets of indoor ice in the world measuring at- 270’ x 119’.
The opening of the Curling Club Arena in Duluth in 1913 resulted in increased interest in hockey in skating, just as it had in St. Paul. Soon Duluth formed a strong team and competed on an equal basis with the nation's best.
In 1919 the first combined curling and hockey rink on the Mesabi Range was opened at Eveleth. The recreation building, built at a cost of $125,000, housed curling on the first floor and hockey and skating on the top floor. In the first hockey game played in the building, part of the three day Winter Carnival to celebrate the opening of the rink, Eveleth lost to Hibbing. Players imported from Two Harbors dominated the Eveleth line-up. Opening of the recreation building vaulted Eveleth into big-time hockey and within a few years the city of Eveleth was noted nationwide for its teams and the development of outstanding players.
Following in the footsteps of Eveleth, in the early 1920’s Hibbing, Virginia and Chisholm constructed facilities similar to the Recreation Building in Eveleth. In 1922 interest in hockey proved so great in Eveleth that the city, mainly through the efforts of Mayor Essling, built a second and larger rink named the Hippodrome. This building, which was completely remodeled in 1936, is the rink presently being used in Eveleth.
St. Paul gained a second indoor rink in 1922 when ice was placed in the Coliseum, located on Lexington Avenue near University Avenue. The Coliseum was adjacent to the Lexington Baseball Park and home runs hit to left field and landed on the roof of the building. The Coliseum was torn down in the late 1950s to make way for a shopping center.
In 1924 the first buildings in the state to be built with artificial ice were constructed in Duluth and Minneapolis. The Minneapolis arena, with a seating capacity of 5,500, and the Duluth Amphitheater, accommodating 4,000 fans, created reliable ice conditions and additional comfort for the fans. Availability of artificial ice led to improved college, high school and amateur hockey and within a few years to the professional game.
Natural ice was placed in the riding Academy at Fort Snelling in 1924 and the facility was used for several years by Twin City amateur and the Fort Snelling teams. The Riding Academy still stands but ice has not been placed in the building since the late 1920’s. Within a few years natural ice was installed in White Bear Lake’s Hippodrome, which at the time was a part of the Ramsey County Fair complex erected in 1926. This building has been a popular rink for amateur teams during the past sixty years and continues in use at the present time.
Roseau in northwestern Minnesota constructed an indoor rink in 1924. This facility operated until it was destroyed by wind in 1943 and was replaced in 1949 by the present Memorial Arena.
In 1931 a modern arena-auditorium was built in downtown St. Paul, containing St. Paul's first artificial ice and seating for more than 7,500 fans for hockey. The very popular Minnesota State High School Hockey Tournament was initiated at the St. Paul Auditorium in 1945. It was held there for the next twenty-four years until it was shifted to the newly constructed in larger Metropolitan Sports Center in Bloomington. The Auditorium has been the home of many professional, amateur and high school sextets during the past fifty years.
During the mid-thirties indoor arenas with natural ice were built in the northwestern Minnesota communities of Thief River Falls, Hallock, Crookston, Bemidji and Clearbrook. These structures, along with the one at Crosby, located on the Cuyuna Range in the East Central Minnesota, where built as Federal government WPA projects. Those at Thief River Falls, Crookston and Hallock are still being used.
The rink at Hibbing, which had been built in the early 1920’s, was destroyed by a fire in 1934. It was replaced in 1935 by a beautiful, modern hockey and curling complex. The new facility, named the Memorial building, installed the largest sheets of artificial ice in the state (99’ x 200’) and seated 4,100 fans. The structure, in top physical condition, is currently in use.
In the late 1930’s artificial ice was installed, as an afterthought, in the Mayo Civic Auditorium in Rochester. This was the first artificial indoor facility in southern Minnesota. Although a short rink, it proved to be a boost for the game in that part of the state. At about the same time Myrum Fieldhouse at Gustavus Adolphus College in St. Peter placed natural ice in the building and there college team played there after opening in January of 1939. Within a few years the fieldhouse was converted to a drill building by the U.S. Army. In 1947 the building was converted to use as a basketball facility, in which capacity it remained prior to being demolished in 1984. The Lund Center, and Don Roberts Ice Arena opened in 1974 where the Gusties play today.
In the 1930’s Fergus Falls placed ice in an unused broom factory which contained posts down the center of the building. The Fergus Falls players soon developed the skills of taking advantage of the posts in their game strategy.
At the outbreak of World War II nineteen indoor ice rinks were operating in the state. Four of these - Mayo Civic Auditorium in Rochester, St. Paul Auditorium, Minneapolis arena and the Memorial building in Hibbing - contained artificial ice. Loss of the 14 year old Duluth Amphitheater occurred in 1939 when the roof of the arena caved in during a Fireman-Police game before a full house of 4,000 fans. Luckily no one was seriously injured. Natural ice rinks were in use at Duluth, Eveleth (2), Chisholm, Virginia, Clearbrook, Bemidji, Thief River Falls, Crookston, Roseau, Hallock, Crosby, St. Paul, White Bear Lake and St. Peter as the U.S. entered World War II.
The advent of World War II dealt a severe blow to the growth of hockey in the state. Indoor facilities were lost in Eveleth, Virginia, Chisholm, Clearbrook, and St. Peter in the mid-forties. The buildings at Eveleth, Chisholm and Virginia were taken over by the Arrow Shirt Co. in 1946 in 1947.
In the decade of 1941-1950 only two new rinks were constructed in the state. Warroad and Roseau, both located near the Canadian border in the northwestern part of the state, built indoor facilities toward the end of the decade. By 1950 the number of enclosed rinks in Minnesota had decreased from the nineteen existing in 1940 to thirteen. At this time four of the arenas possessed artificial ice. All of the thirteen buildings in use, five were located in northwestern Minnesota and only three in the Twin Cities area.
Activity in arena construction continued at a slow pace during the decade of the 1951-1960. The University of Minnesota, badly in need of practice and playing facilities, placed artificial ice in Williams Arena on the campus in Minneapolis for the 1950-1951 season. During the late fifties the Ice Center in Golden Valley and Miners Arena in Virginia were constructed, both with artificial ice. Albert Lea laid natural ice in a small structure, the Pavilion, in 1958.
In 1960 only seventeen indoor arenas were operating in the state, two fewer than in 1940. Eight of the seventeen structures contained artificial ice. The Minneapolis-St. Paul area and northwestern Minnesota each had five facilities, while three were functioning on the Mesabi Range.
The 1960’s experienced great growth in youth and high school hockey in Minnesota, resulting in an upsurge in arena construction. Notable buildings constructed during the period in including Metropolitan Sports Center in Bloomington, home of the North Stars, and the Duluth Arena-Auditorium complex. The former was built to seat more than 15,000 fans for hockey and the latter to seat 5,800. Aldrich Arena, located on St. Paul's East side, was built during this era and proved to be a boom to the game in St. Paul. In 1969, the Minneapolis Auditorium, with a seating capacity for 6,800 fans, installed artificial ice. During the decade numerous rinks were built in the seven county Metropolitan area. North St. Paul, South St. Paul, St. Paul, Blake at Hopkins, Edina (2), St. Paul Academy, Breck in Minneapolis, Fridley (2), Roseville, St. Mary's Point, Bloomington (2), Richfield, Stillwater, Wayzata, West St. Paul, Minnetonka (2), and St. Louis Park were among those erecting rinks.
In the northern part of the state, arenas were placed into operation at Coleraine, Grand Rapids, Baudette, Red Lake Falls, Two Harbors, Silver Bay, Roseau, Detroit Lakes, Bemidji, Babbitt, Hoyt Lakes, Thief River Falls, International Falls, Fergus Falls, Cloquet, Wheeler and Duluth.
Construction in southern Minnesota during the 1960’s included a rink at Faribault, built by Shattuck School, which had participated in varsity prep school hockey on outdoor natural ice since 1922. Graham Arena in Rochester, opened in 1968, provided the Mayo city with its second facility.
By 1970 sixty indoor arenas were in use in the state, more than a threefold increase from the number existing in 1960. 40% of those were located in the Twin Cities metropolitan area and 70% of the 60 contained artificial ice.
During the decade of the 1970’s, the increase in the building on the indoor facilities followed the growth of youth hockey in the state, resulting in a doubling in the number of indoor arenas. Demands for ice rental time remained strong throughout the decade as thousands of youths competed for use of a limited amount of indoor ice.
During this period the city of St. Paul and Ramsey County built ten indoor rinks with artificial ice, while Minneapolis constructed four. In the early seventies the Civic Center was opened in St. Paul adjacent to the Auditorium. With its large seating capacity of nearly 19,000, the Civic Center was able to lure the popular Minnesota State High School Hockey Tournament away from the Metropolitan Sports Center in 1976. With the Civic Center’s added seating capacity, attendance at the eight-team, three-day event climbed to more than 100,000 annually.
New buildings were also opened during the decade in the Twin Cities suburban area at Apple Valley, Brooklyn Center, Burnsville, Buffalo, Coon Rapids, Cottage Grove, Elk River, Farmington, Hastings, New Hope, Osseo and Shakopee.
Construction during the seventies in southern Minnesota included new arenas at Albert Lea, Austin, St. Peter, Mankato, Northfield, LeSueur, Owatonna, Rochester and Windom. In central and west central areas of the state, rinks were erected at Alexandria, Brainerd, Hutchinson, Litchfield, Moorhead (2), Fergus Falls, St. Cloud and Willmar.
In the Duluth and Iron Range areas new arenas were built in Duluth, Ely, Proctor and Hibbing. Bubbles, with natural ice surfaces, were in use at Moose Lake, Gilbert and Pine City. Communities in northwestern Minnesota whom added buildings were Williams, Red Lake, Roosevelt, Hallock and Crookston.
Enclosed arenas continue to be erected at a brisk pace in the first few years of the 1980’s. From 1980 to 1982 the following communities open indoor facilities: Eden Prairie, Mound-Westonka, Princeton, Anoka, Red Wing, Inver Grove Heights, Forest Lake, New Ulm and East Grand Forks.
Many of the smaller communities in the state could boast of having two indoor facilities. Among those with two as of 1982 were Crookston, East Grand Forks, Fergus Falls, Fridley, Hallock, Hibbing, Hopkins, Minnetonka, Moorhead, Roseau, Rochester, Thief River Falls, and White Bear Lake. Bemidji, a city without an indoor arena in the early 1960’s, had three indoor facilities by the early 1980’s, all equipped with artificial ice.
The facilities with the largest seating capacity, always seating in excess of 4,100, were: Hibbing Memorial Building, Duluth Arena, St. Paul Auditorium, Saint Paul Civic Center, St. Paul Coliseum, Minneapolis Auditorium, Williams Arena in Minneapolis and the Metropolitan Sports Center in Bloomington.
By 1982 the number of indoor arenas in the state numbered 130, more than double the number in use in 1970. All the 130 operating, 110 had artificial ice. Sixty of the buildings were located in the Twin Cities metropolitan area.
Compiled originally: January 27, 1982
SM – Southern Minnesota
TC – Twin Cities Metropolitan Area
CWC – Central/West Central Minnesota
NE – Northeastern Minnesota
NW – Northwestern Minnesota
In 1994 - Minnesota State DFL Representative Bob Milbert from South St. Paul, was the chief author of the 'Mighty Ducks Grant' program that was proposed as a way for Minnesota to both purchase the Winnipeg Jets, and assist youth hockey rinks across the entire State. Upon his retirement, the bill was described as his [Bob Milbert] greatest legacy, and was named as the 'biggest sports booster in the Legislature'. The legislation got its name from the two movies about a youth hockey team starring Emilio Estevez, and both shot in the Twin Cities, and prominent lawmakers called to sell bonds to purchase the Jets, and use the estimated $4.1 million in tax revenue the team would then generate to pay off the bonds and fund both construction of new arenas, and renovate existing rinks. The primary impetus behind the program was the ensure ice time for the ever growing number of girls' hockey teams [at time of bill there was 49 high schools that iced teams].
Initially, in 1994 the bill was shelved due in part to the NBA Minnesota Timberwolves when the State agreed to annually spend $750,000 for 15 years to save the franchise. In 1995, 82 communities including 31 in the metro area and ranged from Baudette in the north, to Albert Lea in the south that applied for matching grants through the Mighty Ducks program that was finally passed by Legislature. The State funded ten grants of $250,000 for new arenas, and eight grants of up to $50,000 for renovations among the 82 applicants. The winning communities however had to find ways to match the grants from the Minnesota Amateur Sports Commission. 'Communities must prove that their own financing is in place before the winners are announced when the Sports Commission meets December 18" - James Metzen, sponsor of the bill.
From 1995-2000 the grant provided $18.4 million in state money that helped build 61 new arenas statewide, and renovated countless others. 'Though the program was widely acclaimed, the money has since dwindled and only $858,000 was awarded in 1999 and 2000. In recent years, persuading taxpayers to invest in such facilities has proven more difficult'. - Rep. Bob Milbert | <urn:uuid:cf432e5d-5467-47c9-ba7b-928f5b702b0d> | CC-MAIN-2021-21 | https://history.vintagemnhockey.com/page/show/813675-history-of-indoor-ice-rinks-in-minnesota- | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00497.warc.gz | en | 0.970408 | 4,277 | 3.0625 | 3 |
The COVID-19 Pandemic Magnifies the Crisis of "U.S.-Style Human Rights"
Editor's Note: The China Society for Human Rights Studies on Thursday released an article titled "The COVID-19 Pandemic Magnifies the Crisis of 'U.S.-Style Human Rights'." Following is the full text:
The sudden outbreak of the COVID-19 pandemic is the most serious global public health emergency humanity has experienced since the end of World War II. It is also a "big test of human rights conditions" for every country across the globe. The virus does not respect borders, and neither race nor nationality matters in the face of the disease. To honor the common commitment to human rights, governments are therefore obliged to adopt science-based measures for the prevention and control of the virus and to do their utmost to ensure the health and safety of their people. Nevertheless, the U.S. government's self-interested, short-sighted, inefficient, and irresponsible response to the pandemic has not only produced the tragedy in which about 2 million Americans have been infected with the virus and more than 110,000 have died from it, but has also exposed and worsened long-standing problems within the United States, such as a divided society, polarization between rich and poor, racial discrimination, and inadequate protection of the rights and interests of vulnerable groups. All of this has plunged the American people into grave human rights disasters.
1. The U.S. Government's Ineffective Response to the Pandemic Has Led to Human Rights Disasters
Distracted, negligent, and self-opinionated in the face of the pandemic, the U.S. federal government did not declare a national emergency until March 13, after tens of thousands of deaths had occurred. Pandemic statistics released by Johns Hopkins University in the United States showed that as of June 9, Eastern Standard Time (EST), there had been 1,971,302 confirmed cases of COVID-19 and 111,620 deaths from the virus in the United States, both figures significantly higher than those of any other country. For the United States, which boasts the greatest economic and technological strength and the most abundant medical resources in the world, this is a sad irony.
Ignoring the Pandemic Warnings. According to the reconstructed timelines of the pandemic in the United States released by The New York Times and The Washington Post in April 2020, the U.S. government repeatedly ignored warnings of the pandemic and slackened its control efforts. In early January, the National Security Council (NSC) received intelligence reports predicting the spread of the virus in the United States. On January 29, Peter Navarro, Director of the Office of Trade and Manufacturing Policy at the White House, laid out in striking detail the potential risks of a coronavirus pandemic: as many as half a million deaths and trillions of dollars in economic losses. U.S. health officials and medical experts, including Secretary of Health and Human Services Alex Azar, also repeatedly warned of the deadly pandemic. Unfortunately, the U.S. government ignored these warnings. Instead, it focused on controlling the flow of information, restricted medical experts' freedom to release information on the pandemic to the public, and even spread false information to mislead the public, calling the virus "a flu," claiming that it had low infection and fatality rates, and asserting that "One day it's like a miracle, it will disappear." Under these circumstances, the "golden window" for infectious disease prevention and control was wasted. Meanwhile, a number of U.S. senators, including Richard Burr, then chairman of the U.S. Senate Intelligence Committee, were involved in insider trading scandals. They used their positions to learn of the severity of the pandemic early, yet downplayed its risks in public while selling large amounts of stock before the pandemic triggered a market crash. In this way, they secured a "perfect hedge" for themselves.
Prioritizing Capital Interests. The website of The New York Times reported on April 13, 2020, that the White House COVID-19 Response Working Group and the NSC jointly prepared a memo dated February 14, which was titled "U.S. Government Response to the 2019 Novel Coronavirus." According to the report, the memo documented drastic pandemic-control measures including: "significantly limiting public gatherings and cancellation of almost all sporting events, performances, and public and private meetings that cannot be convened by phone. Consider school closures." Nevertheless, the decision-makers immediately rejected the memo after hearing that the measures would lead to the collapse of the U.S. stock market. This showed how the U.S. government prioritized capital interests and the response of the capital market, rather than making the people's right to life and health a priority. Due to this, the U.S. government failed to give effective warnings to the public and failed to get prepared for the potential consumption of medical resources caused by the pandemic, bringing the American people to the brink of infection and death.
Politicizing the Anti-pandemic Endeavor. When the virus broke out in the United States, some U.S. politicians used it as a weapon to attack political opponents and seek election benefits instead of regarding the drive to protect the lives and health of their people as their top priority. The website of The Lancet, an authoritative medical journal, published an editorial on May 16, which was a rare case for the journal. The editorial explicitly pointed out the intervention of political parties in the public health sector of the United States, and the weakening role of the Centers for Disease Control and Prevention (CDC). It criticized the U.S. government for not actively adopting basic medical and anti-pandemic measures such as detection, tracking, and isolation, placing its hopes on "magic bullets" such as vaccines and new drugs, and hoping that the virus would eventually "magically disappear". On May 4, the famous U.S.political scientist Francis Fukuyama published the article The Wages of American Political Decay on the website The National Interest, pointing out that the highly polarized party politics made the political checks and balances an insurmountable obstacle to decision-making; that the pandemic, which should have been an opportunity to put aside differences and show unity, further deepened the political polarization. Politicians viewed the pandemic as an opportunity to seize power and partisan interests, and this came at the cost of the lives of countless American people.
Leading to Catastrophic Consequences. The website of The New York Times reported on May 20, 2020, that research by Columbia University of the United States showed that delays in adopting movement restrictions had caused at least 36,000 people to die. According to the research, if the U.S. government had adopted movement restrictions one week earlier, it could have saved 36,000 lives, and if the U.S. government had adopted movement restrictions two weeks earlier, 83 percent of the deaths from COVID-19 could have been avoided. On May 24, The New York Times unprecedentedly dedicated its entire front page to 1,000 U.S. COVID-19 victims. Their names, ages, and brief descriptions were listed under the headline "U.S. Deaths Near 100,000, an Incalculable Loss," with a subheadline reading: "They Were Not Simply Names on a List. They Were Us." The website of the Time reported on May 20 that the delayed social-distancing measures caused 90 percent of the deaths from COVID-19 in the United States, and observed that such a huge loss of life was essentially a failure of U.S. democracy.
2. Inequality within U.S. Society Being Fully Exposed during the Pandemic
In the United States, both liberal and conservative scholars agree on one basic fact that there is stark inequality within U.S. society. The deep-seated institutional reason for such inequality is that the U.S. government and political parties have long been manipulated by interest groups, and are unable to formulate and implement tax, industrial, and social security policies that promote social equality. During the pandemic, the social and economic inequality within U.S. society has been exposed and exacerbated.
The Elite Class of the United States Being Specially Treated in Coronavirus Testing. Viral infection does not distinguish between rich and poor, but the limited testing and medical resources were not fairly allocated in the United States. The website of The New York Times reported on March 19, 2020, that many dignitaries in the United States somehow underwent virus testing when they had no signs of infection and when nearly every state in the country lacked testing equipment. Uche Blackstock, an emergency medicine physician, said frustratedly, "As a physician, I find it upsetting that celebrities and government officials without symptoms have been able to access testing quickly with same-day results, while I've had to ration out testing to my patients with turnaround times of five or seven days." This obvious injustice has made the public increasingly raise questions. When medical staff and many patients cannot be diagnosed, whether the privileged class obtains priority testing means that the ordinary people are deprived of testing opportunities. The website of The Guardian, a British newspaper, published an article on March 21 and commented that when the Titanic hit the iceberg and sank, women and children were protected and rescued first, but in the face of COVID-19, the United States has first rescued the rich and powerful groups. The gap between rich and poor in getting the virus tests in the United States has exposed the slackness, confusion, and injustice in its pandemic prevention and control system.
People at the Bottom of U.S.Society Facing an Increasingly Dangerous Situation. The pandemic has made the lives of people at the bottom of U.S. society increasingly difficult, and further intensified the social polarization between rich and poor. According to a CBS report in 2019, nearly 40 percent of Americans would struggle to cover a $400 emergency expense, and 25 percent of Americans skipped necessary medical care because they could not afford the cost. The website of The Atlantic reported in April 2020 that low-income people in the United States would usually delay seeing a doctor when they became ill, not because they did not want to recover, but because they had no money at all. Faced with the COVID-19 pandemic, tens of millions of people in the United States are not covered by medical insurance, when intensive care for COVID-19 costs as high as tens of thousands of dollars in the country. "To be or not to be" is not just a philosophical proposition of some literary work, but a realistic choice that the people at the bottom of U.S. society have to make. Russia Today reported on April 30 that a Gallup survey revealed that one in seven American adults said that if they or their family members developed symptoms related to COVID-19, they would probably give up medical treatment because they were worried that they could not afford the costs. The United Nations Special Rapporteur on extreme poverty and human rights, Philip Alston, pointed out on April 16 that the poor in the United States were "being hit hardest by the COVID-19 pandemic". He observed, "Low-income and poor people face far higher risks from the coronavirus due to chronic neglect and discrimination, and a muddled, corporate-driven federal response has failed them."
High Unemployment Rates Leading the Working Class into a Crisis of Survival. According to data released by the U.S. Department of Labor on May 28, 2020, the cumulative number of Americans applying for unemployment relief from March 15 to May 23 has for the first time reached 40.8 million. Given the high unemployment rate brought about by the pandemic and the long-existing structural discrimination and polarization between rich and poor, the U.S. working class's ability to resist risks has greatly diminished. Vox News reported on April 10 that from the catering industry and tourism to the media industry, the entire U.S. economy has felt the impact of the pandemic, and just like the impact of other crises, the most vulnerable groups bore the brunt of the economic impact. According to the report, during the pandemic, the people who are most vulnerable amid layoffs are those who earn the lowest salaries, such as low-wage workers in the catering and retail industries. According to the report released by the National Restaurant Association on April 20, two-thirds of restaurant workers (about 8 million people) have been dismissed or put on vacation due to the pandemic. When the U.S. government launched the Paycheck Protection Program (PPP), which was intended to help small-and medium-sized enterprises, some large companies with sufficient funds took advantage of the rule loopholes to acquire huge loans, but small businesses and small shops that urgently needed loans to sustain themselves could not get the help. The above-mentioned report also showed that at least 60 percent of employers said that the existing relief plan of the federal government could not help them reduce layoffs. All this has revealed that the U.S. working class was the first to feel the pain of the economic recession brought about by the pandemic, and became the victims of the U.S. government's inefficient anti-pandemic measures.
3. Racial Discrimination Being Intensified within the United States during the Pandemic
The systemic racial discrimination is a chronic illness of U.S. society. Since 2016, white supremacy has revived and racial discrimination has intensified in the United States. Social tensions brought about by the COVID-19 pandemic, especially the unequal allocation of limited anti-pandemic resources, have further deepened mainstream society's discrimination against minorities, such as Asian, African, and Hispanic Americans.
Asian Americans Suffering from Stigmatization. The website of The Guardian, a British newspaper, pointed out on April 1, 2020 that stereotyping of Asian Americans still happens in the United States, and when reporting on the COVID-19 situation, some U.S. media always attach photos of Asian faces. The website of The New York Times observed on April 16 that the COVID-19 pandemic meant isolation for Asian Americans. Since the outbreak of the virus, Asian Americans have often been humiliated and even attacked in public places. Some U.S.politicians even deliberately misled the public. After the World Health Organization officially named the new coronavirus "COVID-19", senior U.S. leaders, including Secretary of State Mike Pompeo, still insisted on referring to the virus as the "Chinese virus" or "Wuhan virus". They also refused to correct that mistake even after being strongly criticized by the international community. The United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Tendayi Achiume, pointed out on March 23 and April 21 that politicians of relevant countries proactively expressed xenophobic opinions or made xenophobic implications by deliberately replacing COVID-19 with other names that linked this particular disease to a particular country or nation, which was an irresponsible and disturbing expression. The current pandemic-related discrimination has fully exposed the once-concealed racial prejudice. In the United States, which bills itself as the "beacon of freedom", government officials blatantly incite, guide, and condone racial discrimination, which is tantamount to humiliating the modern concept of human rights.
African and Hispanic Americans Falling Victims to Severe Racial Inequality. Racial discrimination is deeply rooted in the history and reality of the United States. The COVID-19 pandemic has put this issue underneath a magnifying glass and tragically exposed its negative consequences. African Americans across the country are infected with and die from COVID-19 at a disproportionately higher rate than any other group in the United States. The Michigan Department of Health and Human Services (MDHHS) released a racial breakdown of the state's confirmed COVID-19 cases and COVID-19 deaths on April 2, 2020, which showed that African Americans, who make up only 12 percent of Michigan's population, accounted for 33 percent of Michigan's confirmed COVID-19 cases and 40 percent of the state's COVID-19 deaths. National statistics released by the CDC showed that as of May 13, African Americans accounted for 22.4 percent of deaths from COVID-19 in the United States. This percentage is significantly higher than the African Americans'12.5 percent share of the total U.S.population. African Americans in Kansas, Illinois, and Missouri accounted for only 5.7 percent, 14.1 percent, and 11.6 percent of the total population of the states, respectively, but accounted for 29.7 percent, 30.3 percent, and 35.1 percent of these states' deaths from COVID-19.Hispanic Americans also have disproportionately higher infection and fatality rates during the pandemic. In early April, New York City released a racial breakdown of the city's deaths from COVID-19.According to it, Hispanic Americans accounted for 34 percent of the city's fatal cases of COVID-19. The website of The New York Times pointed out on April 14 that the fact the pandemic infected and killed African and Hispanic Americans at disproportionately higher rates was the result of a health gap directly created by the historical inequality of wealth and opportunities. The website of the Financial Times, a British newspaper, pointed out on May 15 that African Americans and Hispanic Americans were more likely than whites to perform the work that was necessary for the maintenance of social operations, making them more susceptible to poverty, diabetes, high blood pressure, and COVID-19. The report also pointed out that this pandemic exacerbated the racial differences in the United States, and that nothing could better display the difference in skin color in the United States than the life and death stories in the pandemic shutdown. On May 25, George Floyd, an African-American man from Minnesota, died after a white police officer pressed his knee on Floyd's neck for several minutes during his arrest. This led to large-scale protests and demonstrations across the United States, once again exposing the dissatisfaction and anger that the American people have for the worsening racial inequality.
Frequent Racist Violence. During the pandemic, racist violence has haunted the United States, and Asian Americans suffer from venomous personal attacks. From March 19 to April 1, 2020, alone, the U.S. nonprofit organization "Stop AAPI Hate "received more than 1,100 reports of hate incidents. In February 2020, a 16-year-old Asian boy in Los Angeles was accused of being a "virus carrier" and viciously beaten at school. On March 14, in a supermarket in Midland, Texas, a 19-year-old man deliberately stabbed an Asian man and his two young children with a knife on the ground that "they were Chinese and were spreading the novel coronavirus to others." On April 5, an Asian woman in Brooklyn, New York City, was attacked by a racist who spilled unidentified chemical liquid on her to severely burn her upper body, face, and hands when she was emptying rubbish at her door. Violent incidents have further exacerbated the social tension during the pandemic, and problems such as a divisive society, racial divide, and the proliferation of guns have worsened. On April 15, nearly 200 U.S. foreign policy scholars and former diplomats jointly issued a statement in USA Today, pointing out that hate crimes and violent attacks against Asian Americans sounded the alarm for the United States, and that leaders at all levels and in all sectors should take action to oppose racism against Asians and end hate crimes against Asian communities.
4. Vulnerable Groups in the United States Struggling to Survive during the Pandemic
The care for the survival of socially disadvantaged and marginalized groups represents the virtue of a society. It is also a touchstone for the human rights situation in a country. During the pandemic, the features of the United States' cruel capitalism have been fully exposed, which have forced the elderly, homeless people, and children into a tragic situation.
The Elderly, the "Victims" of the Government's Ineffectiveness in Fighting the Pandemic. UN Secretary-General Antonio Guterres has repeatedly stressed that the elderly, like young people, have the same rights to life and health amid the COVID-19 pandemic, and no person is expendable. Unfortunately, during the epidemic in the United States, the elderly group, which is naturally at greater risk, has been further weakened and marginalized due to age discrimination, and the elderly's right to life has not been ensured. On March 23 and April 22,2020, Texas Lieutenant Governor Dan Patrick expressed in interviews with Fox News that he would rather die than see public health measures damage the U.S. economy and take the risk of restarting the U.S. economy at the cost of elderly people's lives. Ben Shapiro, the editor-in-chief of The Daily Wire, which is a right-wing U.S. media outlet, declared coldly on an interview show on April 29, "If somebody who is 81 dies of COVID-19, that is not the same thing as somebody who is 30 dying of COVID-19." He also said, "If grandma dies in a nursing home at age 81, and that's tragic and that's terrible, also the life expectancy in the United States is 80." The website of The New York Times reported on May 11 that at least 28,100 occupants and staff members of long-term care institutions such as nursing homes in the United States have died of COVID-19, accounting for about one-third of COVID-19 deaths in the United States. In these care institutions, many elderly people live in a relatively closed environment, so their risk of death from COVID-19 infection is very high. The website of The Atlantic published two articles respectively on March 28 and April 29, titled Ageism Is Making the Pandemic Worse and We're Literally Killing Elders Now. They pointed out that long-existing defects in the old-age care system, such as insufficient capital investment and staffing, caused the United States to fall behind in ensuring the rights and interests of the elderly, and that such a situation was due to many political reasons. The website of the Washington Post observed on May 9 that the U.S. anti-pandemic action had become a state-approved massacre that deliberately sacrificed the lives of the elderly, the working class, African Americans, and Hispanic Americans.
The Homeless Having Nowhere to Go during the Pandemic. The website of USA Today reported on April 22, 2020, that more than 550,000 people were homeless every night in the United States. The report also pointed out that according to the statistics released by the Homeless Alliance, about 17 out of every 10,000 Americans have experienced homelessness and about 33 percent of them are families with children. Many of the homeless Americans are elderly people and disabled people. Given their originally poor physical health and bad living and hygienic conditions, they are susceptible to the virus. During the epidemic, the homeless who are living on the streets are deported and forced to live in temporary shelters for isolation. The website of Reuters reported on April 23 that the crowded shelters made it impossible for the homeless who lived there to maintain social distance, which made it easier for the virus to spread. As of April 20, 43 homeless people living in New York City's shelters had died from infection of COVID-19, and 617 tested positive for COVID-19.The website of The New York Times reported on April 13 that the shelters for the homeless had become a delayed-action bomb of a virus outbreak in New York City, where more than 17,000 people lived and slept almost side by side in centralized shelters. The website of Nature journal reported on May 7 that when researchers began conducting virus testing on homeless people in the United States, they found that the situation there had gotten out of control. The website of the Boston Globe reported on May 4 that 596 homeless people in Boston had been diagnosed with COVID-19, accounting for one-third of the confirmed cases in the region.
The website of the Los Angeles Times reported on May 14 that research showed that due to the impact of the pandemic, the number of homeless people in the United States might surge by as much as 45 percent within a year, further exacerbating the public health crisis.
Worrisome Situation of Poor Children and Immigrant Children. The United States has not yet ratified the United Nations Convention on the Rights of the Child, which is one of the core international human rights conventions. In recent years, child poverty and abuse has remained a grave problem in the United States. This problem has been exacerbated by the pandemic. Forbes News reported on May 7, 2020 that a survey showed a large number of American children were facing hunger during the pandemic. As of the end of April, more than one-fifth of American households had been facing food crises, and two-fifths of American households with children under 12 years of age have been facing such crises. Forbes News reported on May 9 that the number of child exploitation reports in the United States surged during the pandemic. The National Center for Missing &Exploited Children received 4.2 million reports in April, an increase of 2 million from March 2020 and an increase of nearly 3 million from April 2019. Apart from that, a more worrisome fact is that a large number of unaccompanied immigrant children are still being held in detention centers in the United States. They are currently in an extremely dangerous situation during the pandemic. The United Nations Special Rapporteur on human rights of migrants Felipe Gonzalez Morales and other UN human rights experts issued a joint statement on April 27, requesting the U.S. government to transfer immigrants from overcrowded and insanitary detention centers. On May 29, 15 experts of the Special Procedures of United Nations Human Rights Council issued a joint statement urging the United States to take more measures to prevent virus outbreaks in the detention centers. The website of the United Nations reported on May 21 that since March, the U.S. government had repatriated at least 1,000 unaccompanied immigrant children to Central and South America regardless of the risk of the pandemic. The United Nations Children's Fund (UNICEF) criticized this move, for it would expose the children to greater danger.
5. Relevant Behaviors of the U.S. Government Seriously Violating the Spirit of International Human Rights Law
When the citizens' right to life and health is severely threatened by the spreading pandemic, the U.S. government, instead of focusing on controlling the pandemic, wields a hegemonic stick and fans the flames of trouble everywhere, trying to divert attention and shirk responsibility. Its behaviors have seriously undermined the international community's concerted efforts to control the pandemic.
Ineffective Anti-pandemic Efforts Failing the National Duty of Ensuring the Citizens' Right to Life. The International Covenant on Civil and Political Rights (ICCPR) stipulates that every human being has the inherent right to life, and countries are obliged to take proactive measures to guarantee their citizens' right to life. As a party to the convention, the U.S.government, however, has not given priority to its citizens' right to life and health during the pandemic. Instead, it has been prioritizing the political campaign at home and the political drive to suppress China abroad, rather than safeguarding the lives and safety of its citizens. Given this, it has missed the best chance to curb the spread of the virus, and caused a grave human rights disaster in which about 2 million people have been infected with COVID-19, and more than 110,000 have died from the virus. Clearly, the U.S. government has failed to fulfill its due national obligations to protect its people's lives from the threat of epidemics. The website of The Independent, a British newspaper, commented on April 10, 2020, that the United States, an active advocator for human rights, ignored its own human rights obligations and blatantly overlooked its citizens' lives. The website of the Huffington Post reported on May 6 that after making a rigorous assessment of the U.S. government's poor performance during the COVID-19 pandemic, Yale University epidemiologist Gregg Gonsalves directly pointed out that this was getting awfully close to genocide by default.
Maliciously Stigmatizing China in Violation of the Principles of Equality and Non-discrimination. The principles of equality and nondiscrimination are the core norms of international human rights laws and are confirmed by a series of international human rights instruments, such as the Universal Declaration of Human Rights. Since the outbreak of the virus, high-ranking U.S. government officials have violently disregarded human conscience and ethical bottom lines. To maintain the U.S. hegemony, these officials have politicized the pandemic and constantly stigmatized China by referring to the virus as the "Wuhan Virus" or the "China virus".When the scientific community came to the conclusion that the exact source of the virus was from nature, U.S. Secretary of State Mike Pompeo still repeatedly claimed that the virus came from some lab in Wuhan according to some "intelligence reports" he read. These politicians' behaviors have clearly violated World Health Organization Best Practices for the Naming of New Human Infectious Diseases, which was jointly issued by the World Health Organization, the World Organization for Animal Health, and the Food and Agriculture Organization of the United Nations in 2015. The way they refer to the virus goes against the World Health Organization's suggestion for the official name of the novel coronavirus disease. The United Nations Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia and related intolerance, Tendayi Achiume, pointed out on March 23 that some U.S. government officials refused to use the internationally-recognized name of the virus and deliberately replaced it with other names that linked this particular disease to a particular country or nation, which was an irresponsible and disturbing expression that came from and would give rise to racism, xenophobia, stigmatization, and exclusion of certain groups, and violence against certain groups. She held that such behaviors were unforgivable and incompatible with the obligations stipulated by the international human rights laws.
Suspending the Payment of WHO Membership Fees Impeding the Joint Anti-Pandemic Efforts of the International Community. In order to shirk its responsibility for its disastrous anti-pandemic measures, the U.S. government tried to scapegoat the World Health Organization by fabricating false charges against the organization and threatening to stop paying its membership fees. On April 14, 2020, the U.S. government announced its suspension of paying dues to the WHO, which was unanimously criticized by the international community. UN Secretary-General Antonio Guterres issued a statement on April 14 stating that when the world was fighting the COVID-19 pandemic, it was inappropriate to reduce the resources required by the WHO or any other humanitarian organization for operations. The President of the American Medical Association, Patrice Harris, issued a statement on April 15 stating that combating the pandemic required international cooperation and that the suspension of financial support to the WHO at this critical moment is a dangerous step in the wrong direction. Josep Borrell, the European Union High Representative for Foreign Affairs and Security Policy, expressed on April 15 that at this time, there was no reason to justify such an action. On April 15, the website of The Guardian, a British newspaper, published an editorial and commented that when the world desperately needed to jointly overcome this threat that the world had never experienced before, the suspension of the WHO dues by the U.S.government was an act that lacked morality and disrupted the international order, and was a horrible betrayal to global solidarity. German Foreign Minister Heiko Maas said on April 16 that the WHO was the mainstay of the global fight against the pandemic, and the suspension of the WHO dues by the United States at this time "would be nothing other than throwing the pilot out of the plane in mid-flight".Under such circumstances, the U.S.Secretary of State Mike Pompeo once again attacked the WHO on April 22, threatening to permanently suspend the payment of dues. On May 29, the president of the United States announced the suspension of relations with the WHO.
Unilateral Sanctions Violating the Spirit of Humanitarianism and the Principle of International Cooperation. International cooperation is the cornerstone of the existence and operation of the international community. It is an important principle for ensuring the implementation of human rights and fundamental freedoms across the globe. It is also a national obligation stipulated by international instruments, such as the Charter of the United Nations. At this critical moment when the deadly pandemic spreads globally and threatens human life, health, and well-being, all countries should work together to respond to the pandemic and maintain global public health security. Nevertheless, during this pandemic, the U.S. government has still imposed sanctions on countries such as Iran, Cuba, and Venezuela, which made it difficult for the sanctioned countries to obtain needed anti-pandemic medical supplies in a timely manner, and posed threats to the people's rights to life and health in the sanctioned countries. The United Nations High Commissioner for Human Rights, Michelle Bachelet, expressed on March 24, 2020, that in the case of a global pandemic, sanctions would hinder medical work and increase risks for everyone. She argued that to maintain global public health security and protect the rights and lives of millions of people in sanctioned countries, sanctions should be relaxed or suspended in certain sectors. The United Nations Special Rapporteur on extreme poverty and human rights, the Special Rapporteur on human rights for safe drinking water and sanitation, and the Special Rapporteur on the right to education issued a joint statement on May 6 stating that the U.S. sanctions on Venezuela were seriously harming the human rights of the people in the country. They urged the United States to immediately lift sanctions that exacerbated the suffering of the people when the pandemic raged in the country. | <urn:uuid:5c60276e-1419-48f7-989c-e48a5f22611e> | CC-MAIN-2021-21 | https://www.chinadaily.com.cn/a/202006/12/WS5ee2bc0ea310834817252784.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992516.56/warc/CC-MAIN-20210516075201-20210516105201-00335.warc.gz | en | 0.967544 | 7,008 | 2.625 | 3 |
[updated April 2020 with 2019 population data]
While Australian cities are growing outwards, densities are also increasing in established areas, and newer outer growth areas are some times at higher than traditional suburban densities.
So what’s the net effect – are Australian cities getting more or less dense? How has this changed over time? Has density bottomed out? And how many people have been living at different densities?
This post maps and measures population density over time in Australian cities.
I’ve taken the calculations back as far as I can with available data (1981), used the highest resolution population data available. I’ll discuss some of the challenges of measuring density using different statistical geographies along the way, but I don’t expect everyone will want to read through to the end of this post!
[This is a fully rewritten and updated version of a post first published November 2013]
Under traditional measures of density, you’d simply divide the population of a city by the area of the metropolitan region.
At the time of writing Wikipedia said Greater Sydney’s density was just 4.23 people per hectare (based on its Greater Capital City Statistical Area). To help visualise that, a soccer pitch is about 0.7 hectares. So Wikipedia is saying the average density of Sydney is roughly about 3 people per soccer field. You don’t need to have visited Sydney to know that is complete nonsense (don’t get me wrong, I love Wikipedia, but it really need to use a better measure for city density!).
The major problem with metropolitan boundaries – in Australia we use now Greater Capital City Statistical Areas – is that they include vast amounts of rural land and national parks. In fact, in 2016, at least 53% of Greater Sydney’s land area had zero population. That statistic is 24% in Melbourne and 14% in Adelaide – so there is also no consistency between cities.
Below is a map of Greater Sydney (sourced from ABS), with the blue boundary representing Greater Sydney:
One solution to this issue is to try to draw a tighter boundary around the urban area, and in this post I’ll also use Significant Urban Areas (SUAs) that do a slightly better job (they are made up of SA2s). The red boundaries on the above map show SUAs in the Sydney region.
However SUAs they still include large parks, reserves, industrial areas, airports, and large-area partially-populated SA2s on the urban fringe. Urban centres are slightly better (they are made of SA1s) but population data for these is only available in census years, the boundaries change with each census, the drawing of boundaries hasn’t been consistent over time, they include non-residential land, and they split off most satellite urban areas that are arguably still part of cities, even if not part of the main contiguous urban area.
Enter population-weighted density (PWD) which I’ve looked at previously (see Comparing the densities of Australian, European, Canadian, and New Zealand cities). Population-weighted density takes a weighted average of the density of all parcels of land that make up a city, with each parcel weighted by its population. One way to think about it is the average density of the population, rather than the average density of the land.
So parcels of land with no population don’t count at all, and large rural parcels of land that might be inside the “metropolitan area” count very little in the weighted average because of their relatively small population.
This means population-weighted density goes a long way to overcoming having to worry about the boundaries of the “urban area” of a city. Indeed, previously I have found that removing low density parcels of land had very little impact on calculations of PWD for Australian cities (see: Comparing the residential densities of Australian cities (2011)). More on this towards the end of this post.
Calculations of population-weighted density can also answer the question about whether the “average density” of a city has been increasing or decreasing.
But… measurement geography matters
One of the pitfalls of measuring population weighted density is that it very much depends on the statistical geography you are using.
If you use larger geographic zones you’ll get a lower value as most zones will include both populated and unpopulated areas.
If you use very small statistical geography (eg mesh blocks) you’ll end up with a lot fewer zones that are partially populated – most will be well populated or completely unpopulated, and that means your populated weighted density value will be much higher, and your measure is more looking at the density of housing areas.
To illustrate this, here’s an animated map of the Australian Capital Territory’s 2016 population density at all of the census geographies from mesh block (MB) to SA3:
Only at the mesh block and SA1 geographies can you clearly see that several newer outer suburbs of Canberra have much higher residential densities. The density calculation otherwise gets washed out quickly with lower resolution statistical geography, to the point where SA3 geography is pretty much useless as so much non-urban land is included (also, there are only 7 SA3s in total). I’ll come back to this issue at the end of the post.
Even if you have a preferred statistical geography for calculations, making international comparisons is very difficult because few countries will following the same guidelines for creating statistical geography. Near enough is not good enough. Worse still, statistical geography guidelines do not always result in consistently sized areas within a country (more on that later).
We need an unbiased universal statistical geography
Thankfully Europe and Australia have adopted a square kilometre grid geography for population estimates, which makes international PWD comparisons readily possible. Indeed I did one a few years ago looking at ~2011 data (see Comparing the densities of Australian, European, Canadian, and New Zealand cities).
This ABS is now providing population estimates on a square km grid for every year from 2006.
Here is what Melbourne’s estimated population density looks like on a km square grid, animated from 2006 to 2019:
The changes over time are relatively subtle, but if you watch the animation several times you’ll see growth – including relatively high density areas emerging on the urban fringe.
It’s a bit chunky, and it’s a bit of a matter of luck as to whether dense urban pockets fall entirely within a single grid square or on a boundary, but there is no intrinsic bias.
There’s also an issue that many grid squares will contain a mix of populated and non-populated land, particularly on the urban fringe (and a similar issue on coastlines). In a large city these will be in the minority, but in smaller cities these squares could make up a larger share of the total, so I think we need to be careful about this measure in smaller cities. I’m going to arbitrarily draw the line at 200,000 residents.
How are Australian cities trending for density using square km grid data? (2006 to 2019)
So now that we have an unbiased geography, we can measure PWD for cities over time.
The following chart is based on 2016 Significant Urban Area boundaries (slightly smaller than Greater Capital City Statistical Areas but also they go across state borders as appropriate for Canberra – Queanbeyan and Gold Coast – Tweed).
Technical notes: You cannot perfectly map km squares to Significant Urban Areas. I’ve included all kilometre grid squares which have a centroid within the 2016 Significant Urban Area boundaries (with a 0.01 degree tolerance added – which is roughly 1 km). Hobart appears only from 2018 because that’s when it crossed the 200,000 population threshold.
The above trend chart was a little congested for the smaller cities, so here is a zoomed in version without Sydney and Melbourne:
You can see most cities getting denser at various speeds, although Perth, Geelong, and Newcastle have each flat-lined for a few years.
Perth’s population growth slowed at the end of the mining boom around 2013, and infill development all but dried up, so the overall PWD increased only 0.2 persons/ha between 2013 and 2018.
Canberra has seen a surge in recent years, probably due to high density greenfield developments we saw above.
How is the mix of density changing? (2006 to 2019)
Here’s a look at the changing proportion of the population living at different densities for 2006-2019 for the five largest Australian cities, using square km grid geography:
It looks very much like the Melbourne breakdown bleeds into the Sydney breakdown. This roughly implies that Melbourne’s density distribution is on trend to look like Sydney’s 2006 distribution in around 2022 (accounting the for white space). That is, Melbourne’s density distribution is around 16 years behind Sydney’s on recent trends. Similarly, Brisbane looks a bit more than 15 years behind Melbourne on higher densities.
In Perth up until 2013 there was a big jump in the proportion of the population living at 35 persons / ha or higher, but then things peaked and the population living at higher densities declined, particularly as there was a net migration away from the inner and middle suburbs towards the outer suburbs.
Here’s the same for the next seven largest cities:
Of the smaller cities, densities higher than 35 persons/ha are only seen in Gold Coast, Newcastle, Wollongong and more recently in Canberra.
The large number of people living at low densities in the Sunshine Coast might reflect suburbs that contain a large number of holiday homes with no usual residents (I suspect the dwelling density would be relatively higher). This might also apply in the Gold Coast, Central Coast, Geelong (which actually includes much of the Bellarine Peninsula) and possibly other cities.
Also, the Central Coast and Sunshine Coast urban patterns are highly fragmented which means lots of part-urban grid squares, which will dilute the PWD of these “cities”.
Because I am sure many of you will be interested, here are animated maps for these cities:
Canberra – Queanbeyan
Newcastle – Maitland and Central Coast
What are the density trends further back in time using census data?
The census provides the highest resolution and therefore the closest measure of “residential” population weighted density. However, we’ve got some challenges around the statistical geography.
Prior to 2006, the smallest geography at which census population data is available is the collector district (CD), which average around 500 to 600 residents. A smaller geography – the mesh block (MB) – was introduced in 2006 and averages around 90 residents.
Unfortunately, both collector districts and mesh blocks are not consistently sized across cities or years (note: y axis on these charts does not start at zero):
Technical note: I have mapped all CDs and MBs to Greater Capital City Statistical Area (GCCSA) boundaries, based on the entire CD fitting within the GCCSA boundaries (which have not yet changed since they were created in 2011).
There is certainly some variance between cities and years, so we need to proceed with caution, particularly in comparing cities. Hobart and Adelaide have the smallest CDs and MBs on average, while Sydney generally has larger CDs and MBs. This might be a product of whether mesh blocks were made too small or large, or it might be that density is just higher and it is more difficult to draw smaller mesh blocks. The difference in median population may or may not be explained by the creation of part-residential mesh blocks.
Also, we don’t have a long time series of data at the one geography level. Rather than provide two charts which break at 2006, I’ve calculated PWD for both CD and mesh block geography for 2006, and then estimated equivalent mesh block level PWD for earlier years by scaling them up by the ratio of 2006 PWD calculations.
In Adelaide, the mesh block PWD for 2006 is 50% larger than the CD PWD, while in the Australian Capital Territory it is 110% larger, with other cities falling somewhere in between.
Would these ratios hold for previous years? We cannot be sure. Collector Districts were effectively replaced with SA1s (with an average population of 500, only slightly smaller) and we can calculate the ratio of mesh block PWD to SA1 PWD for 2011 and 2016. For most cities the ratio in 2016 is within 10% of the ratio in 2011. So hopefully the ratio of CD PWD to mesh block PWD would remain fairly similar over time.
So, with those assumptions, here’s what the time series then looks like for PWD at mesh block geography:
As per the square km grid values, Sydney and Melbourne are well clear of the pack.
Most cities had a PWD low point in 1996. That is, until around 1996 they were sprawling at low densities more than they were densifying in established areas, and then the balance changed post 1996. Exceptions are Darwin which bottomed out in 2001 and Hobart which bottomed in 2006.
The data shows rapid densification in Melbourne and Sydney between 2011 and 2016, much more so than the square km grid data time series. But we also saw a significant jump in the median size of mesh blocks in those cities between 2011 and 2016 (and if you dig deeper, the distribution of mesh block population sizes also shifted significantly), so the inflection in the curves in 2011 are at least partly a product of how new mesh block boundaries were cut in 2016, compared to 2011. Clearly statistical geography isn’t always good for time series and inter-city analysis!
How has the distribution of densities changed in cities since 1986?
The next chart shows the distribution of population density for Greater Capital City Statistical Areas based on collector districts for the 1986 to 2006 censuses:
You can more clearly see the decline in population density in most cities from 1986 to 1996, and it wasn’t just because most of the population growth was a lower densities. In Hobart, Canberra, Adelaide, Brisbane and Melbourne, the total number of people living at densities of 30 or higher actually reduced between 1986 and 1996.
Here is the equivalent chart for change in density distribution by mesh block geography for the capital cities for 2006, 2011, and 2016:
I’ve used the same colour scale, but note that the much smaller geography size means you see a lot more of the population at the higher density ranges.
The patterns are very similar to the distribution for square km grid data. You can see the how Brisbane seems to bleed into Melbourne and then into Sydney, suggesting a roughly 15 year lag in density distributions. This chart also more clearly shows the recent rapid rise of high density living in the smaller cities of Canberra and Darwin.
The next chart shows the 2016 distribution of population by mesh block density using Statistical Urban Area 2016 boundaries, including the smaller cities:
Gold Coast and Wollongong stand out as smaller cities with a significant portion of their population at relatively high densities, but a fair way off Sydney and Melbourne.
(Sorry I don’t have a mesh block times series of density distribution for the smaller cities – it would take a lot of GIS processing to map 2006 and 2011 mesh blocks to 2016 SUAs, and the trends would probably be similar to the km grid results).
Can we measure density changes further back in history and for smaller cities?
Yes, but we need to use different statistical geography. Annual population estimates are available at SA2 geography back to 1991, and at SA3 geography back to 1981.
However, there are again problems with consistency in statistical geography between cities and over time.
Previously on this blog I had assumed that guidelines for creation of statistical geography boundaries have been consistently applied by the ABS across Australia, resulting in reasonably consistent population sizes, and allowing comparisons of population-weighted density between cities using particular levels of statistical geography.
Unfortunately that wasn’t a good assumption.
Here are the median population sizes of all populated zones for the different statistical geographies in the 2016 census:
Note: I’ve used a log scale on the Y-axis.
While there isn’t a huge amount of variation between medians at mesh block and SA1 geographies, there are massive variations at SA2 and larger geographies.
SA2s are intended to have 3,000 to 25,000 residents (a fairly large range), with an average population of 10,000 (although often smaller in rural areas). You can see from the chart above that there are large variances between medians of the cities, with the median size in Canberra and Darwin below the bottom of the desired range.
I have asked the ABS about this issue. They say it is related to the size of gazetted localities, state government involvement, some dense functional areas with no obvious internal divisions (such as the Melbourne CBD), and the importance of capturing indigenous regions in some places (eg the Northern Territory). SA2 geography will be up for review when they update statistical geography for 2021.
While smaller SA2s mean you get higher resolution inter-censal statistics (which is nice), it also means you cannot compare raw population weighted density calculations between cities at SA2 geography.
However, all is not lost. We’ve got calculations of PWD on the unbiased square kilometre grid geography, and we can compare these with calculations on SA2 geography. It turns out they are very strongly linearly correlated (r-squared of over 0.99 for all cities except Geelong).
So it is possible to estimate square km grid PWD prior to 2006 using a simple linear regression on the calculations for 2006 to 2018.
But there is another complication – ABS changed the SA2 boundaries in 2016 (as is appropriate as cities grow and change). Data is available at the new 2016 boundaries back to 2001, but for 1991 to 2000 data is only available on the older 2011 boundaries. For most cities this only creates a small perturbation in PWD calculations around 2001 (as you’ll see on the next chart), but it’s larger for Geelong, Gold Coast – Tweed Heads and Newcastle Maitland so I’m not willing to provide pre-2001 estimates for those cities.
The bottom of this chart is quite congested so here’s an enlargement:
Even if the scaling isn’t perfect for all history, the chart still shows the shape of the curve of the values.
Consistent with the CD data, several cities appear to have bottomed out in the mid 1990s. On SA2 data, that includes Adelaide in 1995, Perth and Brisbane in 1994, Canberra in 1998 and Wollongong in 2006.
Can we go back further?
If we want to go back another ten years, we need to use SA3 geography, which also means we need to switch to Greater Capital City Statistical Areas as SA3s don’t map perfectly to Significant Urban Areas (which are constructed of SA2s). Because they are quite large, I’m only going to estimate PWD for larger cities which have reasonable numbers of SA3s that would likely have been fully populated in 1981.
I’ve applied the same linear regression approach to calculate estimated square kilometre grid population weighted density based on PWD calculated at SA3 geography (the correlations are strong with r-squared above 0.98 for all cities).
The following chart shows the best available estimates for PWD back to 1981, using SA3 data for 1991 to 2000, SA2 data for 2001 to 2005, and square km grid data from 2006 onwards:
Technical notes: SA3 boundaries have yet to change within capital cities, so there isn’t the issue we had with SA2s. The estimates based on SA2 and SA3 data don’t quite line up between 1990 and 1991 which demonstrates the limitations of this approach.
The four large cities shown appear to have been getting less dense in the 1980s (Melbourne quite dramatically). These trends could be related to changes in housing/planning policy over time but they might also be artefacts of using such a coarse statistical geography. It tends to support the theory that PWD bottomed out in the mid 1990s in Australia’s largest cities.
Could we do better than this for long term history? Well, you could probably do a reasonable job of apportioning census collector district data from 1986 to 2001 censuses onto the km grid, but that would be a lot of work! It also wouldn’t be perfectly consistent because ABS use dwelling address data to apportion SA1 population estimates into kilometre grid cells. Besides we have reasonable estimates using collector district geography back to 1986 anyway.
Melbourne’s population-weighted density over time
So many calculations of PWD are possible – but do they have similar trends?
I’ve taken a more detailed look at my home city Melbourne, using all available ABS population figures for the geographic units ranging from mesh blocks to SA3s inside “Greater Melbourne” and/or the Melbourne Significant Urban Area (based on the 2016 boundary), to produce the following chart:
Most of the datasets show an acceleration in PWD post 2011, except the SA3 calculations which are perhaps a little more washed out. The kink in the mesh block PWD is much starker than the other measures.
The Melbourne SUA includes only 62% of the land of the Greater Melbourne GCCSA, yet there isn’t much difference in the PWD calculated at SA2 geography – which is the great thing about population-weighted density.
All of the time series data suggests 1994 was the year in which Melbourne’s population weighted density bottomed out.
Appendix 1: How much do PWD calculations vary by statistical geography?
Census data allows us to calculate PWD at all levels of statistical geography to see if and how it distorts with larger statistical geography. I’ve also added km grid PWD calculations, and here are all the calculations for 2016:
Technical note: square km grid population data is estimated for 30 June 2016 while the census data is for 9 August 2016. Probably not a significant issue!
You can see cities rank differently when km grid results are compared to other statistical geography – reflecting the biases in population sizes at SA2 and larger geographies. Wollongong and Geelong also show a lot of variation in rank between geographies – probably owing to their small size.
The cities with small pockets of high density – in particular Gold Coast – drop rank with large geography as these small dense areas quickly get washed out.
I’ve taken the statistical geography all the way to Significant Urban Area – a single zone for each city which is the same as unweighted population density. These are absurdly low figures and in no way representative of urban density. They also suggest Canberra is more dense than Melbourne.
Appendix 2: Issues with over-sized SA1s
As I’ve mentioned recently, there’s an issue that the ABS did not create enough reasonably sized SA1s in some city’s urban growth areas in 2011 and 2016. Thankfully, it looks like they did however create a sensible number of mesh blocks in these areas, as the following map (created with ABS Maps) of the Altona Meadows / Point Cook east area of Melbourne shows:
In the north parts of this map you can see there are roughly 4-8 mesh blocks per SA1, but there is an oversized SA1 in the south of the map with around 50 mesh blocks. This will impact PWD calculated at SA1 geography, although these anomalies are relatively small when you are looking at a city as large as Melbourne. | <urn:uuid:3d0ae0b6-aa08-47e2-832a-7afd7e6df5b4> | CC-MAIN-2021-21 | https://chartingtransport.com/2019/04/21/how-is-density-changing-in-australian-cities-2nd-edition/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992440.69/warc/CC-MAIN-20210517180757-20210517210757-00615.warc.gz | en | 0.942555 | 4,958 | 2.765625 | 3 |
We can now look at more sophisticated ANNs, which are known as multi-layer artificial neural networks because they have hidden layers. These will naturally be used to undertake more complicated tasks than perceptrons. We first look at the network structure for multi-layer ANNs, and then in detail at the way in which the weights in such structures can be determined to solve machine learning problems. There are many considerations involved with learning such ANNs, and we consider some of them here. First and foremost, the algorithm can get stuck in local minima, and there are some ways to try to get around this. As with any learning technique, we will also consider the problem of overfitting, and discuss which types of problems an ANN approach is suitable for.
We saw in the previous lecture that perceptrons have limited scope in the type of concepts they can learn - they can only learn linearly separable functions. However, we can think of constructing larger networks by building them out of perceptrons. In such larger networks, the units containing step functions are called perceptron units.
As with individual perceptrons, multi-layer networks can be used for learning tasks. However, the learning algorithm that we look at (the backpropagation routine) is derived mathematically, using differential calculus. The derivation relies on having a differentiable threshold function, which effectively rules out using perceptron units if we want to be sure that backpropagation works correctly. The step function in perceptrons is not continuous, hence non-differentiable. An alternative unit was therefore chosen which had similar properties to the step function in perceptron units, but which was differentiable. There are many possibilities, one of which is sigmoid units, as described below.
Remember that the function inside units take as input the weighted sum, S, of the values coming from the units connected to it. The function inside sigmoid units calculates the following value, given a real-valued input S:
Where e is the base of natural logarithms, e = 2.718...
When we plot the output from sigmoid units given various weighted sums as input, it looks remarkably like a step function:
Of course, getting a differentiable function which looks like the step function was the whole point of the exercise. In fact, not only is this function differentiable, but the derivative is fairly simply expressed in terms of the function itself:
Note that the output values for the σ function range between but never make it to 0 and 1. This is because e-S is never negative, and the denominator of the fraction tends to 0 as S gets very big in the negative direction, and tends to 1 as it gets very big in the positive direction. This tendency happens fairly quickly: the middle ground between 0 and 1 is rarely seen because of the sharp (near) step in the function. Because of it looking like a step function, we can think of it firing and not-firing as in a perceptron: if a positive real is input, the output will generally be close to +1 and if a negative real is input the output will generally be close to -1.
We will concern ourselves here with ANNs containing only one hidden layer, as this makes describing the backpropagation routine easier. Note that networks where you can feed in the input on the left and propagate it forward to get an output are called feed forward networks. Below is such an ANN, with two sigmoid units in the hidden layer. The weights have been set arbitrarily between all the units.
Note that the sigma units have been identified with sigma signs in the node on the graph. As we did with perceptrons, we can give this network an input and determine the output. We can also look to see which units "fired", i.e., had a value closer to 1 than to 0.
Suppose we input the values 10, 30, 20 into the three input units, from top to bottom. Then the weighted sum coming into H1 will be:
SH1 = (0.2 * 10) + (-0.1 * 30) + (0.4 * 20) = 2 -3 + 8 = 7.
Then the σ function is applied to SH1 to give:
σ(SH1) = 1/(1+e-7) = 1/(1+0.000912) = 0.999
[Don't forget to negate S]. Similarly, the weighted sum coming into H2 will be:
SH2 = (0.7 * 10) + (-1.2 * 30) + (1.2 * 20) = 7 - 36 + 24 = -5
and σ applied to SH2 gives:
σ(SH2) = 1/(1+e5) = 1/(1+148.4) = 0.0067
From this, we can see that H1 has fired, but H2 has not. We can now calculate that the weighted sum going in to output unit O1 will be:
SO1 = (1.1 * 0.999) + (0.1*0.0067) = 1.0996
and the weighted sum going in to output unit O2 will be:
SO2 = (3.1 * 0.999) + (1.17*0.0067) = 3.1047
The output sigmoid unit in O1 will now calculate the output values from the network for O1:
σ(SO1) = 1/(1+e-1.0996) = 1/(1+0.333) = 0.750
and the output from the network for O2:
σ(SO2) = 1/(1+e-3.1047) = 1/(1+0.045) = 0.957
Therefore, if this network represented the learned rules for a categorisation problem, the input triple (10,30,20) would be categorised into the category associated with O2, because this has the larger output.
As with perceptrons, the information in the network is stored in the weights, so the learning problem comes down to the question: how do we train the weights to best categorise the training examples. We then hope that this representation provides a good way to categorise unseen examples.
In outline, the backpropagation method is the same as for perceptrons:
We randomly assign the weights between all the nodes. The assignments should be to small numbers, usually between -0.5 and 0.5.
Each training example is used, one after another, to re-train the weights in the network. The way this is done is given in detail below.
After each epoch (run through all the training examples), a termination condition is checked (also detailed below). Note that, for this method, we are not guaranteed to find weights which give the network the global minimum error, i.e., perfectly correct categorisation of the training examples. Hence the termination condition may have to be in terms of a (possibly small) number of mis-categorisations. We see later that this might not be such a good idea, though.
Because we have more weights in our network than in perceptrons, we firstly need to introduce the notation: wij to specify the weight between unit i and unit j. As with perceptrons, we will calculate a value Δij to add on to each weight in the network after an example has been tried. To calculate the weight changes for a particular example, E, we first start with the information about how the network should perform for E. That is, we write down the target values ti(E) that each output unit Oi should produce for E. Note that, for categorisation problems, ti(E) will be zero for all the output units except one, which is the unit associated with the correct categorisation for E. For that unit, ti(E) will be 1.
Next, example E is propagated through the network so that we can record all the observed values oi(E) for the output nodes Oi. At the same time, we record all the observed values hi(E) for the hidden nodes. Then, for each output unit Ok, we calculate its error term as follows:
The error terms from the output units are used to calculate error terms for the hidden units. In fact, this method gets its name because we propagate this information backwards through the network. For each hidden unit Hk, we calculate the error term as follows:
In English, this means that we take the error term for every output unit and multiply it by the weight from hidden unit Hk to the output unit. We then add all these together and multiply the sum by hk(E)*(1 - hk(E)).
Having calculated all the error values associated with each unit (hidden and output), we can now transfer this information into the weight changes Δij between units i and j. The calculation is as follows: for weights wij between input unit Ii and hidden unit Hj, we add on:
[Remembering that xi is the input to the i-th input node for example E; that η is a small value known as the learning rate and that δHj is the error value we calculated for hidden node Hj using the formula above].
For weights wij between hidden unit Hi and output unit Oj, we add on:
[Remembering that hi(E) is the output from hidden node Hi when example E is propagated through the network, and that δOj is the error value we calculated for output node Oj using the formula above].
Each alteration Δ is added to the weights and this concludes the calculation for example E. The next example is then used to tweak the weights further. As with perceptrons, the learning rate is used to ensure that the weights are only moved a short distance for each example, so that the training for previous examples is not lost. Note that the mathematical derivation for the above calculations is based on derivative of σ that we saw above. For a full description of this, see chapter 4 of Tom Mitchell's book "Machine Learning".
We will re-use the example from section 13.1, where our network originally looked like this:
and we propagated the values (10,30,20) through the network. When we did so, we observed the following values:
|Input units||Hidden units||Output units|
|Unit||Output||Unit||Weighted Sum Input||Output||Unit||Weighted Sum Input||Output|
Suppose now that the target categorisation for the example was the one associated with O1. This means that the network mis-categorised the example and gives us an opportunity to demonstrate the backpropagation algorithm: we will update the weights in the network according to the weight training calculations provided above, using a learning rate of η = 0.1.
If the target categorisation was associated with O1, this means that the target output for O1 was 1, and the target output for O2 was 0. Hence, using the above notation,
t1(E) = 1; t2(E) = 0; o1(E) = 0.750; o2(E) = 0.957
That means we can calculate the error values for the output units O1 and O2 as follows:
δO1 = o1(E)(1 - o1(E))(t1(E) - o1(E)) = 0.750(1-0.750)(1-0.750) = 0.0469
δO2 = o2(E)(1 - o2(E))(t2(E) - o2(E)) = 0.957(1-0.957)(0-0.957) = -0.0394
We can now propagate this information backwards to calculate the error terms for the hidden nodes H1 and H2. To do this for H1, we multiply the error term for O1 by the weight from H1 to O1, then add this to the multiplication of the error term for O2 and the weight between H1 and O2. This gives us: (1.1*0.0469) + (3.1*-0.0394) = -0.0706. To turn this into the error value for H1, we multiply by h1(E)*(1-h1(E)), where h1(E) is the output from H1 for example E, as recorded in the table above. This gives us:
δH1 = -0.0706*(0.999 * (1-0.999)) = -0.0000705
A similar calculation for H2 gives the first part to be: (0.1*0.0469)+(1.17*-0.0394) = -0.0414, and the overall error value to be:
δH2 -0.0414 * (0.0067 * (1-0.0067)) = -0.000275
We now have all the information required to calculate the weight changes for the network. We will deal with the 6 weights between the input units and the hidden units first:
|Input unit||Hidden unit||η||δH||xi||Δ = η*δH*xi||Old weight||New weight|
We now turn to the problem of altering the weights between the hidden layer and the output layer. The calculations are similar, but instead of relying on the input values from E, they use the values calculated by the sigmoid functions in the hidden nodes: hi(E). The following table calculates the relevant values:
|η||δO||hi(E)||Δ = η*δO*hi(E)||Old weight||New weight|
We note that the weights haven't altered all that much, so it might be a good idea in this situation to use a bigger learning rate. However, remember that, with sigmoid units, small changes in the weighted sum can produce big changes in the output from the unit.
As an exercise, check whether the re-trained network performs better with respect to the example than the original network.
The error rate of multi-layered networks over a training set could be calculated as the number of mis-classified examples. Remembering, however, that there are many output nodes, all of which could potentially misfire (e.g., giving a value close to 1 when it should have output 0, and vice-versa), we can be more sophisticated in our error evaluation. In practice the overall network error is calculated as:
This is not as complicated as it first appears. The calculation simply involves working out the difference between the observed output for each output unit and the target output and squaring this to make sure it is positive, then adding up all these squared differences for each output unit and for each example.
Backpropagation can be seen as using searching a space of network configurations (weights) in order to find a configuration with the least error, measured in the above fashion. The more complicated network structure means that the error surface which is searched can have local minima, and this is a problem for multi-layer networks, and we look at ways around it below. Having said that, even if a learned network is in a local minima, it may still perform adequately, and multi-layer networks have been used to great effect in real world situations (see Tom Mitchell's book for a description of an ANN which can drive a car!)
One way around the problem of local minima is to use random re-start as described in the lecture on search techniques. Different initial random weightings for the network may mean that it converges to different local minima, and the best of these can be taken for the learned ANN. Alternatively, as described in Mitchell's book, a "committee" of networks could be learned, with the (possibly weighted) average of their decisions taken as an overall decision for a given test example. Another alternative is to try and skip over some of the smaller local minima, as described below.
Imagine a ball rolling down a hill. As it does so, it gains momentum, so that its speed increases and it becomes more difficult to stop. As it rolls down the hill towards the valley floor (the global minimum), it might occasionally wander into local hollows. However, it may be that the momentum it has obtained keeps it rolling up and out of the hollow and back on track to the valley floor.
The crude analogy describes one heuristic technique for avoiding local minima, called adding momentum, funnily enough. The method is simple: for each weight remember the previous value of Δ which was added on to the weight in the last epoch. Then, when updating that weight for the current epoch, add on a little of the previous Δ. How small to make the additional extra is controlled by a parameter α called the momentum, which is set to a value between 0 and 1.
To see why this might help bypass local minima, note that if the weight change carries on in the direction it was going in the previous epoch, then the movement will be a little more pronounced in the current epoch. This effect will be compounded as the search continues in the same direction. When the trend finally reverses, then the search may be at the global minimum, in which case it is hoped that the momentum won't be enough to take it anywhere other than where it is. Alternatively, the search may be at a fairly narrow local minimum. In this case, even though the backpropagation algorithm dictates that Δ will change direction, it may be that the additional extra from the previous epoch (the momentum) may be enough to counteract this effect for a few steps. These few steps may be all that is needed to bypass the local minimum.
In addition to getting over some local minima, when the gradient is constant in one direction, adding momentum will increase the size of the weight change after each epoch, and the network may converge quicker. Note that it is possible to have cases where (a) the momentum is not enough to carry the search out of a local minima or (b) the momentum carries the search out of the global minima into a local minima. This is why this technique is a heuristic method and should be used somewhat carefully (it is used in practice a great deal).
Left unchecked, backpropagation in multi-layer networks can be highly susceptible to overfitting itself to the training examples. The following graph plots the error on the training and test set as the number of weight updates increases. It is typical of networks left to train unchecked.
Alarmingly, even though the error on the training set continues to gradually decrease, the error on the test set actually begins to increase towards the end. This is clearly overfitting, and it relates to the network beginning to find and fine-tune to ideosyncrasies in the data, rather than to general properties. Given this phenomena, it would be unwise to use some kind of threshold for the error as the termination condition for backpropagation.
In cases where the number of training examples is high, one antidote to overfitting is to split the training examples into a set to use to train the weight and a set to hold back as an internal validation set. This is a mini-test set, which can be used to keep the network in check: if the error on the validation set reaches a minima and then begins to increase, then it could be that overfitting is beginning to occur.
Note that (time permitting) it is worth giving the training algorithm the benefit of the doubt as much as possible. That is, the error in the validation set can also go through local minima, and it is not wise to stop training as soon as the validation set error starts to increase, as a better minima may be achieved later on. Of course, if the minima is never bettered, then the network which is finally presented by the learning algorithm should be re-wound to be the one which produced the minimum on the validation set.
Another way around overfitting is to decrease each weight by a small weight decay factor during each epoch. Learned networks with large (positive or negative) weights tend to have overfitted the data, because larger weights are needed to accommodate outliers in the data. Hence, keeping the weights low with a weight decay factor may help to steer the network from overfitting.
As we did for decision trees, it's important to know when ANNs are the right representation scheme for the job. The following are some characteristics of learning tasks for which artificial neural networks are an appropriate representation:
The concept (target function) to be learned can be characterised in terms of a real-valued function. That is, there is some translation from the training examples to a set of real numbers, and the output from the function is either real-valued or (if a categorisation) can be mapped to a set of real values. It's important to remember that ANNs are just giant mathematical functions, so the data they play around with are numbers, rather than logical expressions, etc. This may sound restrictive, but many learning problems can be expressed in a way that ANNs can tackle them, especially as real numbers contain booleans (true and false mapped to +1 and -1), integers, and vectors of these data types can also be used.
Long training times are acceptable. Neural networks generally take a longer time to train than, for example, decision trees. Many factors, including the number of training examples, the value chosen for the learning rate and the architecture of the network, have an affect on the time required to train a network. Training times can vary from a few minutes to many hours.
It is not vitally important that humans be able to understand exactly how the learned network carries out categorisations. As we discussed above, ANNs are black boxes and it is difficult for us to get a handle on what its calculations are doing.
When in use for the actual purpose it was learned for, the evaluation of the target function needs to be quick. While it may take a long time to learn a network to, for instance, decide whether a vehicle is a tank, bus or car, once the ANN has been learned, using it for the categorisation task is typically very fast. This may be very important: if the network was to be used in a battle situation, then a quick decision about whether the object moving hurriedly towards it is a tank, bus, car or old lady could be vital.
In addition, neural network learning is quite robust to errors in the training data, because it is not trying to learn exact rules for the task, but rather to minimise an error function. | <urn:uuid:4c532c84-7c96-498a-b47c-67df0ede6c16> | CC-MAIN-2021-21 | http://ccg.doc.gold.ac.uk/ccg_old/teaching/artificial_intelligence/lecture13.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00537.warc.gz | en | 0.922096 | 4,779 | 3.375 | 3 |
The numbers are inescapable. Of the 38 pupils in 5th grade in Byng Elementary School, 17 receive special education services. Fourteen of those children didn’t score at the “proficient” level on the state’s reading test in 2002-03. As a result, the Ada, Okla., school missed its achievement target and failed to make “adequate yearly progress,” or AYP, under the No Child Left Behind Act this school year.
“It’s a flaw in the law to assume that children who are identified as having special needs academically can perform satisfactorily on an age-appropriate test,” contends Steven P. Crawford, the superintendent of the 1,700-student Byng district. “It’s the same principle as asking kids to jump a bar one foot off the ground and providing no exceptions for children who are in a wheelchair.”
Remarks like Crawford’s are echoing across the nation, as the federal law forces many schools to confront publicly the performance of students with disabilities for the first time.
The 2001 reauthorization of the Elementary and Secondary Education Act rests on the belief that all children can reach challenging achievement standards. Nowhere is that challenge more evident than in the push to raise expectations for the nation’s nearly 6.6 million students in special education and hold schools accountable for their performance. Until now, those children have been largely excluded from state testing and accountability systems.
Federal civil rights laws have barred discrimination against students with disabilities since the mid-1970s. But before 1975, when Congress guaranteed such students a “free appropriate public education,” some 1 million were excluded from public education entirely and hundreds of thousands more were denied appropriate services.
Today, nearly 96 percent of students with disabilities are served in regular school buildings. The debate largely has shifted from providing access to the classroom to providing access to the same general education curriculum and standards.
That shift was heralded in 1997. The reauthorization that year of the Individuals with Disabilities Education Act, known as the IDEA, requires states and districts to establish goals for special education students consistent, to the maximum extent appropriate, with the goals and standards set for other children; to include students with disabilities in state and district testing programs; and to report those results.
But the 1997 law has few consequences for noncompliance, and the performance of students with disabilities remained chiefly the concern of parents and special educators.
The No Child Left Behind Act has changed all that. The law holds states, districts, and schools accountable for ensuring that the same minimum percentage of special education students performs at the proficient level on state tests as other youngsters. The 12-year goal is to hoist virtually all students over that bar by 2013-14. States also must test at least 95 percent of students with disabilities.
“Along comes AYP, and all of a sudden, principals and superintendents are starting to sit up and take notice,” says Margaret J. McLaughlin, a professor of special education at the University of Maryland College Park. “I think the thing that No Child Left Behind has done, which is good, is it’s not just a special education issue anymore.”
As a result, educators both inside and outside special education for the first time must confront what it really means for all students to learn to high standards, including those with the most unusual educational needs.
The new accountability model represents a gut-wrenching shift away from a special education system that’s been rooted in complying with legal procedures and in meeting highly individualized goals for children, rather than aggregate results. Now, many people are asking how much progress special education students can be expected to make, how fast.
As the title of this year’s Quality Counts report, Count Me In: Special Education in an Era of Standards, suggests, students with disabilities have the same right to be included in state standards, assessments, and accountability systems as all other children. Otherwise, it’s impossible to know how they’re performing or how well public schools are serving their needs.
Although enormous strides have been made in special education over the past three decades, enormous gaps remain: in the performance of special education students compared with their peers’, in understanding how best to assess what students with disabilities know and can do, and in the preparation of special and general education teachers to provide such students with full access to the general education curriculum.
Indeed, while the idea of including students with disabilities in standards-based education has finally become real, it’s not always clear how to do so in ways that are accurate and appropriate. For many educators, then, this new reality is both exhilarating and daunting.
“This is fantastic,” says Judy Elliott, the assistant superintendent for special education in the Long Beach Unified School District in California. “For the first time, we don’t have to fight to be at the table. This is for all kids, including kids with disabilities. How long have we been struggling for that?
“And now,” she says, “it’s like, be careful what you asked for.”
A Diverse Group
As a 1997 report by the National Research Council, “Educating One & All: Students with Disabilities and Standards-Based Reform,” noted, it’s hard to talk about asking students in special education to meet the same standards and outcomes as everyone else without paying attention to their varied characteristics.
Quality Counts 2004 has tried to capture that diversity through the personal stories of children scattered throughout this report. The picture is complicated because states use different criteria to determine who qualifies for special education and different names for disability categories. Those labels can mask a lot about the strengths and weaknesses of individuals, based in part on the severity of their disabilities.
This report focuses specifically on youngsters who receive special education services. The sheer scope of that task prevented Education Week from looking at students with disabilities who qualify for Section 504 plans under the federal Rehabilitation Act of 1973, but who do not receive special education services.
Of the nearly 6 million students ages 6 to 21 receiving special education under Part B of the IDEA, the nation’s primary law pertaining to special education, the vast majority--67 percent--have a specific learning disability or a speech or language impairment. Of those with specific learning disabilities, about 80 percent struggle with reading, according to federal statistics. (See story, Page 39.)
Fewer than 12 percent of students in special education have disabilities, such as mental retardation, associated with significant cognitive impairments.
Minority representation in special education categories varies aero s states. The under- or overrepresentation of some group in categories that are the most subjective to identify--mild mental retardation, specific learning disabilities, emotional disturbance--has led critics to question whether students are being misidentified for special education simply because they did not receive effective instruction in the first place. (See story, Page 22.)
One-fifth of special education students spend the majority of their time--in excess of 60 percent--outside regular classrooms, although that varies by disability. As is true of almost everything else about these youngsters, however, students in all disability categories can be found across the full range of placements: from separate public facilities, to resource rooms within regular schools, to private residential placements. Elementary pupils are more likely to be served in regular classrooms than are secondary school students.
Though students with disabilities, as a group, tend to perform far lower on state tests than their nondisabled peers do, individual students with disabilities can be found across the full range of academic performance.
As part of a federal research project, the Special Education Elementary Longitudinal Study, a nationally representative sample of pupils with disabilities ages 6 to 12 took a range of achievement tests. “You can find kids with disabilities who are scoring right near the top-above the 80th percentile-and you’ll find some in the middle,” says José Blackorby, a co-director of the SEELS project, “and then a lot more kids in the lowest quartile. So it’s heavily weighted toward the low end, but there’s quite a bit of diversity.”
In general, he says, students with speech or visual impairments have the highest performance. Those who spend more time in general education classrooms also have higher scores than those of their peers who spend less time in such settings. And students with fewer disabilities tend to have higher scores than those with a greater number of impairments.
“We have a range of students who have disabilities, so I would adamantly reject, as a blanket statement, that students with disabilities can’t meet the same achievement targets,” says Martha L. Thurlow, the director of the National Center on Educational Outcomes, located at the University of Minnesota. “I would say that’s not the case for the broad majority of students with disabilities.”
But graduation rates for students with disabilities remain alarmingly low. In 2001-02, only 32 percent of such students age 14 or older earned standard high school diplomas, according to the federal office of special education. The rates ranged from 14 percent in Alabama to 56 percent in Arkansas.
Students with disabilities also drop out of high school at approximately twice the rate of other students. That same year, Alaska, Louisiana, and Michigan all posted dropout rates for special education students higher than 25 percent.
Large Performance Gaps
For Quality Counts, Education Week asked states what percentage of special education students scored at the proficient level or above on state tests in mathematics and reading in 2002-03 compared with general education students. One of the most striking effects of the No Child Left Behind law so far is that those numbers are now publicly available for the majority of states.
Education Week encourages readers to look at the numbers, on Page 86, by examining the performance gaps within each state, not by comparing the proficiency rates across states. That’s because states use different tests and define what they mean by “proficient” differently. Some states also exclude the scores of certain students with disabilities from their calculations.
By the end of October 2003, nine states and the District of Columbia could not provide any data. Five more states could not provide results for the 2002-03 school year, but provided disaggregated data for earlier years.
Initial results from a few states are promising.
In Kansas, which tests virtually all its special education students, nearly half of 5th graders with disabilities scored at the proficient level or higher on state reading tests in 2003, compared with only 26 percent in 2000. In math, 58 percent of 4th graders with disabilities scored at the proficient level or better last year, compared with 36 percent four years earlier. Other grades have seen substantial, but smaller, increases.
“For too long, we held these students to lower standards,” says Alexa Pochokowski, the assistant commissioner for learning services in the Kansas education department. “I hate to say it: I think we almost felt sorry for them.”
Despite such progress, large gaps remain between the performance of special education students and the general student population, with special education students as a group performing well below general education students in every state.
For example, 30 of the 39 states that provided complete data had an achievement gap between special education and general education students on 4th grade reading tests of 30 percentage points or more. In Arkansas, Iowa, Montana, New Hampshire, Oklahoma, and Vermont, the gap was more than 50 percentage points. Gaps in 8th grade reading tended to be even worse. Only five of the 39 states--Michigan, Mississippi, Nebraska, South Carolina, and Texas--reported achievement gaps of less than 30 percentage points. Thirty-two of 36 states showed gaps larger than 30 percentage points on their 10th grade reading exams. (Where states do not test in grades 4, 8, and 10, Education Week used data from the next closest grade.)
Given such statistics, it’s no surprise that the No Child Left Behind Act has spawned dissension and unease in the education community. As Mitchell D. Chester, the assistant superintendent for policy development in the Ohio education department, observes, accountability for students with disabilities has become a “major lightning rod” in the implementation of the federal law.
Some observers argue there’s an essential conflict between the IDEA, which focuses on individual goals and learning plans for students, and the No Child Left Behind law, which stresses systems accountability and uniformity.
“The individualized nature of IDEA is totally inconsistent with the group nature of NCLB, even though they talk about classes of kids who are disabled,” says Miriam K. Freedman, a lawyer in Boston who works with school districts. “To me, that’s a collision course, to hold a school responsible for Billy not reading at grade level, when Billy has a disability whose need is individually met at a prekindergarten level.
“It’s holding a school accountable for something they can’t do, if they are consistent with another federal statute.”
For Quality Counts, Education Week commissioned a national survey of 800 special and general education teachers, conducted by the Washington-based firm of Belden Russonello & Stewart. The poll found that while most teachers agree in principle that students with disabilities should be taught to higher standards, many reject the notion that students in special education should be held to the same standards and testing requirements as other youngsters their age. (See story, Page 20.)
More than eight in 10 teachers believe that most special education students should be expected to meet a separate set of academic standards. Nearly as many think special education students should be given alternative assessment measures, rather than being required to take the same tests as general education students.
While teachers are positive about how much their special education students achieve each year, they also express reservations about whether all children with disabilities can actually meet state standards. Just 19 percent of teachers say all or most of their special education students would be able to score at the proficient level on state tests for students their age.
Half of teachers say only a few or none of their special education students would be able to do so, even with reasonable accommodations.
The most contentious issue, because of the federal law, is how to test students with disabilities appropriately, report the results, and include those scores in rating schools. (See story, Page 44.) States have made enormous strides in those areas since the 1997 reauthorization of the IDEA. Even so, testing is a rapidly moving target.
Quality Counts found that in 2002-03, 19 states and the District of Columbia could not calculate the percent of special vs. general education students who took state tests by grade level. Thirteen of the 37 states that provided data on participation rates tested 95 percent or more of their students with disabilities in reading and math in grades 4, 8, and 10 during the 2002-03 school year. (Where states did not test in grades 4, 8, or 10, Education Week used data from the next closest grade.) Overall, participation rates for students with disabilities ranged from 40 percent to 100 percent.
This is for all kids, including kids with disabilities. How long have we been struggling for that?”
By far the most common strategy for increasing the participation of students with disabilities is to provide accommodations, such as more time or one-on-one testing. Federal law requires states and districts to include students with disabilities in their testing programs with “appropriate accommodations,” where necessary. Students generally receive the same accommodations they get during regular classroom instruction, as spelled out in their individualized education plans, or IEPs.
But while all states have guidelines to help determine which accommodations are appropriate, the list of those permitted and prohibited varies widely by state, in part because of a limited research base to help in making those decisions.
Fifteen states bar special education students from taking tests with “nonstandard” accommodations or “modifications” that they believe alter the nature of what’s being tested, such as reading portions of a reading test out loud to a student. Ten states exclude from their state accountability systems the scores of special education students who take state tests with modifications. And 18 states automatically give students who take state tests with modifications a score of zero or nonproficient, according to Quality Counts’ survey of the 50 states and the District of Columbia.
For students who can’t take state tests even with accommodations, federal law requires states and districts to offer “alternate” assessments, which might range from a portfolio of student work to a teacher’s observational checklist. Although the IDEA required all states to provide alternate assessments by July 2000, many did not. By 2003-04, every state and the District provided at least one alternate assessment or permitted districts to develop such tests.
As with accommodations, however, states differ in who’s eligible to take alternate assessments, the extent to which such tests reflect grade-level content and achievement standards, and how they treat those results in rating schools. Quality Counts found that 23 states included the scores of students who took alternate assessments when calculating proficiency rates on state tests for the purposes of this report.
Although federal law has required states and districts to report test results for students with disabilities vs. students without disabilities since 1994, states have been slow to do so. Last year, Quality Counts found that of the 47 states and the District of Columbia with school report cards, fewer than half published test results for students with disabilities. And even fewer compared the performance of those students with that of their nondisabled peers. This school year, 35 states and the District of Columbia require school or district report cards to include information separately on the test-participation rates and performance of students with disabilities. Twelve states require report cards to include the performance of students who took alternate assessments. Few states--seven and 15, respectively--require schools or districts to report dropout and graduation rates separately for students in special education.
States have been moving to include the performance of special education students in rating schools, as the No Child Left Behind Act now requires. Quality Counts found that to comply with federal law, all jurisdictions rate schools based, in part, on the performance of special education students on state tests. Forty-three states and the District of Columbia incorporate the test scores of students who took alternate assessments in those ratings. Fewer than half the states--23 states and the District--rate schools based, in part, on the dropout rates of students with disabilities. Twenty-eight states and the District rate schools based, in part, on the graduation rates of special education students.
Now, as state after state rolls out initially low test scores for students with disabilities, some officials fear a rush to judgment.
“We’re very concerned about the unintended consequences of holding schools accountable for this population,” says Chester of the Ohio education department. “We’re sensitive to the potential for pushing these students out, for scapegoating these students, for identifying them as the reason that a school or a district isn’t measuring up.”
The biggest tension centers on whether those special education students who now function at least several years below grade level can be expected to do well on grade-level exams, as federal law demands.
“The idea has been brought up multiple times,” says Debra Dixon, a program manager for specialized services in the Louisiana education department. “How do we even write IEPs for these children? The teachers are frustrated because they know that these kids have to be in accountability systems, they know they have to show growth, but they’re afraid for the children who are functioning significantly below grade level.”
But as Thurlow of the University of Minnesota cautions: “I wouldn’t want to automatically say we know who those kids are, and let’s come up with something different for them. As soon as you do that, that group is going to get bigger and bigger.”
At the high school level, some also worry that the heavy focus on academics may divert attention from vocational and life skills that some students in special education need to succeed.
“We don’t want to say, ‘Gee, these kids don’t need English or math,’” says McLaughlin of the University of Maryland. “I’m just really concerned that for some kids, the level of cognitive complexity, the amount of material that all kids are expected to learn, is quite stunning.”
“It’s a really difficult thing to talk about,” she adds, “because as soon as you open the door to any other options, then anybody who can squeeze into special education will be there. That’s always been the problem.”
‘Something Quite Different’
But the most fundamental problem may be that many students in special education still lack sufficient access to the general education curriculum to demonstrate success.
“I think the biggest challenge is going to be for educators because, in many cases, the population that we will hold to grade-level standards has not been in mainstream curriculum, and often has not been in mainstream instructional programs,” says Chester.
Without such exposure, many of those students are unlikely to achieve grade-level performance in the short term, advocates for children with disabilities note. They view exposing that “instructional gap” as precisely the point of the federal law.
The good news is that a majority of special education teachers polled for Quality Counts say the curriculum for students with disabilities is more demanding and more similar to the curriculum for general education students than it was three years ago.
A majority also say that special education students are learning more academic content that is based on state academic standards for children their age than was true three years previously. But far fewer general education teachers feel that way, suggesting that such changes have yet to penetrate the regular classroom as deeply.
- The SEELS study found that about 55 percent of elementary students with disabilities got their primary language arts instruction in general education classes, and about 62 percent of them had a goal of reading at grade level. About half used general education materials without any modifications at all. They spent about the same amount of time as other students practicing phonics, learning vocabulary, taking quizzes or tests, and working on projects or worksheets.
But, says Blackorby, the SEELS co-director and the program manager for the disability-policy program at SRI International in Menlo Park, Calif, “when you look at the classes of behaviors that require the kids to do something, you actually see something quite different.”
Compared with other students, elementary pupils with disabilities were less likely to read literature, plays, poetry, or informational material. They read either silently or aloud less often. They also were significantly less likely to complete a writing assignment, respond orally to questions, take part in class discussions, or make presentations to the whole class or group.
For special education students, individualized education plans must address access to and progress in the general education curriculum, as well as set annual, measurable goals for their performance.
But, as Frederick M. Hess of the American Enterprise Institute in Washington points out, “IEPs have historically reflected a given student’s particular instructional regimen, rather than provided a road map for helping that child accomplish the general education goals promulgated by the school or state.”
Until 1997, the IDEA did not even require regular classroom teachers to participate in the development of students’ IEPs. A three-state study financed by the federal office of special education programs and publishedin 2000 found that most IEPs were not aligned with state academic-content standards; that special education teachers lacked guidance about how to align the documents with the standards; and that, by and large, special education teachers were “not involved in schoolwide discussions about standards” and “tended to use the IEPs rather than the standards as a guide for instruction.” (Of course, in many states, general instruction is not well-aligned with state content standards either.)
Stephanie Lee, the director of the U.S. Department of Education’s office of special education programs, says states and districts have a lot of flexibility in how they design IEPs and what forms they use.
“I think what we’re learning, through our research on best practices, is that it’s important to consider the state standards in developing IEPS,” she adds, “but we don’t have any specific guidance that I know of about how a state has to do this.”
Quality Counts’ survey of the 50 states and the District of Columbia found that only seven states require that the IEPs of students with disabilities address state content standards. Alabama plans to require such alignment for the 2004-05 school year. Alaska requires that IEPs be aligned to the state’s academic-performance standards.
The teacher poll for the report revealed that more than eight in 10 teachers say their students’ IEPs are aligned with their states’ academic-content standards “very much” (40 percent) or “somewhat” (43 percent).
While 98 percent of special educators are included in the writing of those IEPs all or most of the time, however, such involvement is true for only 57 percent of general educators. In addition, 49 percent of special education teachers believe their schools should spend a “great deal of time” teaching special education students content aligned with the state standards for students their age, compared with 85 percent who say a “great deal of time” should be spent teaching content specifically outlined in those students’ IEPs.
‘Highly Qualified’ Teachers
How to prepare educators to teach children with disabilities to state standards is an open question, particularly given a nationwide shortage of special education teachers. In 2003, the American Association for Employment in Education identified special education as a top teacher-shortage area. (See story, Page 62.)
The No Child Left Behind Act requires all teachers in the core academic subjects to be “highly qualified” in each and every subject they teach. That requirement extends to special educators who are solely responsible for teaching a core academic subject, such as mathematics or reading. Yet, traditionally, states have not required those with special education licenses to demonstrate subject-matter knowledge.
Quality Counts found that 27 states and the District of Columbia require special educators to have a minimum degree or coursework in special education to earn their initial teaching licenses, and 29 states and the District require special educators to pass an exam related to special education. Fourteen states and the District require both in the 2003-04 school year. But no state currently requires special educators at the secondary school level to pass exams or complete coursework related to the core subjects they teach.
While 14 states and the District of Columbia require general education teachers to complete one or more courses related to special education to earn initial teaching licenses, only nine states require them to complete preservice training related to special education.
“As long as we have youngsters with special education teachers certified in noncontent-based areas, there’s no surprise why these youngsters aren’t learning the algebra, or geometry, or trigonometry, or whatever they need to know in order to meet the same standards as other kids,” argues Kathleen B. Boundy, a co-director of the Center for Law and Education, a Washington-based organization that advocates on behalf of students with disabilities and their parents.
Deborah A. Ziegler, the assistant director for public policy at the Council for Exceptional Children, a leading group that represents special educators, says students in self-contained classrooms who are eligible to take courses such as algebra should have a special education teacher with competence in that subject and in special education.
But, in general, the Arlington, Va.-based council argues, special education teachers should teach secondary-level content in consultation or collaboration with their general education colleagues, rather than be experts in such subjects themselves. The federal mandate regarding highly qualified teachers is especially a problem for some rural areas that may have one special educator for the whole high school, Ziegler says.
Quality Counts’ poll found that 70 percent of special education teachers in grades 6-12 teach more than one subject, compared with 13 percent of general education teachers in those grades.
Special education teachers polled for the report generally feel prepared to teach their special education students.
Fifty-seven percent of special education teachers say they are “very” familiar with their states’ academic content for the subjects they teach, compared with 70 percent of general education teachers. But while 95 percent of special educators say they feel “very” prepared to teach students with IEPs, that is true for only 45 percent of general educators.
Eighty percent of teachers believe it’s “very important” for special educators to demonstrate competency in all the academic subjects they teach. Three-quarters say it is “very important” that general education teachers demonstrate “competency in teaching students in special education.”
“I think we’re going to have to revisit the role of special and regular educators,” says Ohio’s Chester. “In cases where special educators have the primary responsibility for reading and math instruction, that may not be the wisest decision.”
Perhaps the most explosive issue on the table is whether students with disabilities should have to pass high school “exit tests” or end-of-course exams to earn standard diplomas. (See story, Page 53.)
Although federal law does not require such high-stakes testing for individual students, 20 states require students to pass an exam to earn a diploma. Fourteen of those require students with disabilities to pass the exams.
Lawsuits challenging such rules for students with disabilities are pending in both California and Massachusetts. Plaintiffs argue that special education students either lacked adequate accommodations to pass the tests or enough academic preparation in their classes to master the level of content on the exams, as evidenced by their high failure rates.
“Who is really getting punished here?” asks Boundy of the Center for Law and Education, which represents students in the Massachusetts case. “This really is coming down on the backs of the youngsters.”
The issue has often found special education advocates and parents on both sides of the fence, torn between wanting schools to retain high expectations for the students and fears that the children will suffer harsh consequences as a result.
Of the 14 states that now require special education students to pass a test to earn a diploma, the National Center on Educational Outcomes found that five allow students with IEPs to take an alternate assessment. Three states have an appeals process for those who fail the exam.
Seven states have no other options for students with disabilities to earn a standard diploma if they fail the tests. Other states award special education students who fail the tests alternative or nonstandard diplomas, but there’s little research on how colleges or employers view such documents.
Critics worry that the diploma demands will boost the already high dropout rates for special education students and provide incentives to remove them from regular education settings, where such youngsters’ poor showings could make schools look bad.
But Boundy, whose group opposes high-stakes testing for any student, argues that the solution isn’t to exempt students with disabilities from diploma requirements where they do exist. That smacks of discrimination, she says, and could lead to even weaker standards and a more watered-down curriculum for such students.
While 39 states and the District of Columbia regulate the requirements for a standard diploma, such as credits earned and attendance, 24 states allow students with disabilities to graduate with a standard diploma even if they haven’t met the graduation requirements.
To some, the ultimate solution may be to focus more on progress, or on the year-to-year gains of individual students, than on getting special education students over an absolute bar.
“I don’t want to see a lower achievement goal set for these kids,” McLaughlin of the University of Maryland says, “but I do think that for students with disabilities, we need to consider both the level as well as the slope of their progress. Yes, we want to know what the gap is between this subgroup and other subgroups, but we need a more sensitive measure.”
“Before we lower the boom on schools, we should be able to look deeper,” she adds, “because we’ve got a hell of a long way to go.”
Otherwise, McLaughlin worries, “what we’ll see is exactly what we’re starting to see: people out there in the field starting to push back on this. And the way they’re pushing back is to say, ‘We’ll never do it with these kids, so let’s exempt them.’”
A version of this article appeared in the January 08, 2004 edition of Education Week | <urn:uuid:1b7eb7d6-15f8-400c-9f57-5b9dff100045> | CC-MAIN-2021-21 | https://www.edweek.org/education/enveloping-expectations/2004/01 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.48/warc/CC-MAIN-20210515070344-20210515100344-00176.warc.gz | en | 0.960249 | 6,946 | 3.234375 | 3 |
The ketogenic diet is a high-fat, adequate-protein, low-carbohydrate diet that in medicine is used primarily to treat difficult-to-control (refractory) epilepsy in children. The diet forces the body to burn fats rather than carbohydrates. Normally, the carbohydrates contained in food are converted into glucose, which is then transported around the body and is particularly important in fueling brain function. However, if there is little carbohydrate in the diet, the liver converts fat into fatty acids and ketone bodies. The ketone bodies pass into the brain and replace glucose as an energy source. An elevated level of ketone bodies in the blood, a state known as ketosis, leads to a reduction in the frequency of epileptic seizures. Almost half of children and young people with epilepsy who have tried some form of this diet saw the number of seizures drop by at least half, and the effect persists even after discontinuing the diet. There is some evidence that adults with epilepsy may benefit from the diet, and that a less strict regimen, such as a modified Atkins diet, is similarly effective. The most common adverse effect is constipation, affecting about 30% of patients; this was attributed to fluid restriction, once a feature of the diet, which was abandoned because it increased the risk of kidney stones without offering any benefit.
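To make "high-fat" concrete, here is a small worked example. The classic epilepsy prescription is often written as a ketogenic ratio of 4:1, meaning 4 g of fat for every 1 g of protein and carbohydrate combined; that ratio comes from the clinical literature and is used here only for illustration, not a figure stated above. With fat providing 9 kcal/g and protein and carbohydrate providing 4 kcal/g:

calories from fat = 4 g × 9 kcal/g = 36 kcal
calories from protein + carbohydrate = 1 g × 4 kcal/g = 4 kcal
share of calories from fat = 36 / (36 + 4) = 90%

In other words, on the classic 4:1 regimen roughly nine-tenths of all energy comes from fat, which is why the diet pushes the liver toward producing ketone bodies.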
Walnuts are packed with tryptophan, an amino acid your body needs to create the feel-good chemical serotonin. (In fact, Spanish researchers found that walnut eaters have higher levels of this natural mood-regulator.) Another perk: "They're digested slowly," said Dr. David Katz, director of the Yale Prevention Research Center. "This contributes to mood stability and can help you tolerate stress."
We have all heard of essential fatty acids (EFAs) and essential amino acids (EAAs), but have you ever heard of essential carbohydrates? No. There are none, because the body can synthesize the small amount of glucose it needs (a process called gluconeogenesis) and is fully capable of burning fat for fuel. If the body can burn fat for fuel, why would you ingest a substance (carbohydrate) that raises your blood sugar, raises your insulin levels, and makes you sick? Why would the ADA advocate the very diet that made us sick in the first place? When are they going to admit they’ve been wrong and start doing what is in the best interest of diabetics?
I'm going to give the DASH diet a try. It sounds easy enough, but I haven't actually tried it yet. I enjoyed the book and am eager to start the plan. I don't necessarily agree with the artificial sweeteners used. The book does have some good recipes that I want to try. I do think it's a good basic diet that you can adapt to fit your likes and needs. And, as always, including exercise with a diet will help. This will hopefully help me accomplish one of my goals for the new year.
When you eat foods high in carbohydrates, your body naturally produces glucose from them. Carbohydrates are the easiest fuel for the body to process, so it burns them first, and any fat eaten alongside them is stored immediately. In turn, this causes weight gain and the health problems associated with high-fat, high-carbohydrate diets (NOT keto).
Although some studies have indicated that a ketogenic diet is associated with dyslipidemia (cholesterol and triglyceride perturbations), many of these results were obtained from studies on rodents and did not always agree with what the data show in human studies. A recent review summarized the controversy, highlighting the discrepancies in the literature. In part, the discordance is likely due to the exact composition of the diet, specific study design, as well as the metabolic differences between rodents and humans.
Below is a quick graphic of a meal plan on the traditional Mediterranean diet, it is the same meal plan that I also follow. Under the graphic you can find details, tips and links to the recipes. I provide a variety of choices for meals that you can mix and match with links to the recipes. For more ideas just head over to the Recipe Index and you will find a large selection of Mediterranean recipes.
Yes!! Edward!! I am pre-diabetic myself and have IBS, which many doctors have no answers for, because IBS triggers everyone differently and with different foods. I have been keto for 6 weeks and have lost 14lbs and have not noticed any symptoms of IBS even when I eat trigger foods (onion/garlic). I am by no means 100% keto yet because I have had slip ups here and there but I jump right back in. I can't imagine not following this way of life moving forward. I immediately feel the difference if I indulge in anything more than I should. I'm learning to listen to my body and now see carbs/sugar is what has been causing madness on my body. Keto-on Edward!
The idea is that the fasting induces mild stress to the cells in your body, helping them become better at coping with such stress and possibly helping your body grow stronger. The verdict is still out regarding the diet’s long-term effectiveness with weight loss, according to a review of preliminary animal research published in January 2017 in Behavioral Sciences. (17)
If you’re on the ketogenic diet, be sure to test blood sugar levels throughout the day to make sure they are within their target range. Also, consider testing ketone levels to make sure you’re not at risk for DKA. The American Diabetes Association recommends testing for ketones if your blood sugar is higher than 240 mg/dL. You can test at home with urine strips.
Blood specimens were obtained at weeks 0, 8, and 16 after the participant had fasted overnight. The following serum tests were performed in the hospital laboratory using standardized methods: complete blood count, chemistry panel, lipid panel, thyroid-stimulating hormone, and uric acid. A non-fasting specimen was also drawn at weeks 4 and 12 to monitor electrolytes and kidney function.
With this eating style, you’re looking at a lot of menu planning and preparation. A review published in August 2017 in Nutrients suggests the diet could lead to weight loss, but the Academy of Nutrition and Dietetics warns the plan could also cause certain nutrient deficiencies, such as in calcium and vitamin D. (3,4) And, therefore, according to an article published in the January–February 2016 issue of the Royal Australian College of General Practitioners, anyone at risk for osteoporosis should avoid it. (5)
I know it is hard when you have been taught something, and believed it, and taught it to others…only to be shown that what you have been taught is not the be-all and end-all that you were led to believe. It sucks. But, you can choose to ignore the truth, and continue to follow the incorrect path. Or, you can look at the facts, and realize that what you have been taught is not the truth…and you can take a new path, which will lead many to wonderful new lives.
“We have basically no evidence that this diet is consistent with human health over time,” says Dr. Katz. (Its heavy emphasis on animal protein isn’t ecologically sustainable, either, he adds.) “All of the evidence we have points toward a plant-predominant diet with an emphasis on vegetables, whole grains, fruits, nuts, and seeds—all of the very things that the ketogenic diet avoids.”
The best diet for losing weight is Weight Watchers, according to the experts who rated the diets for U.S. News. Volumetrics came in second, and the Flexitarian Diet, Jenny Craig and the vegan diet were third on this overall weight loss ranking list, which takes into account short-term and long-term weight loss scores. Some other diets performed as well or better in our rankings for enabling fast weight loss, but long-term weight loss is more important for your health.
Mastering Diabetes: Studies conducted in tens of thousands of people over 5+ years indicate that low-carbohydrate diets increase your risk for cardiovascular disease, hemorrhagic stroke, hypertension, atherosclerosis, diabetes mortality, obesity, cancer, and all-cause mortality (premature death). No matter how you slice it, low-carbohydrate diets trick patients and doctors into believing that ketosis is an excellent long-term dietary strategy, when in reality the consequences can be disastrous.
Christopher D. Gardner, PhD; Alexandre Kiazand, MD; Sofiya Alhassan, PhD; Soowon Kim, PhD; Randall S. Stafford, MD, PhD; Raymond R. Balise, PhD; Helena C. Kraemer, PhD; Abby C. King, PhD, “Comparison of the Atkins, Zone, Ornish, and LEARN Diets for Change in Weight and Related Risk Factors Among Overweight Premenopausal Women,” JAMA. 2007;297(9):969-977. http://jama.jamanetwork.com/article.aspx?articleid=205916.
I do know a little bit about nutrition (what heavy person doesn't?). I wanted a plan that followed sound nutritional guidelines and had some research to back it up. This one does. Marla does a great job of explaining why the things I learned about nutrition in my 20s aren't working for me in my 40s, and then lays out, clearly, concisely, and with menus and recipes, what *will* work...and it did. I was nervous about cutting down on grains--I attempted the Atkins plan a few times and it just made me sick--but I felt fine. The menu plans are satisfying and tasty, and Marla has really helped me to re-frame the way I think about food.
“During physiological ketosis ketonemia reaches maximum levels of 7/8 mmol/L with no change in pH while in uncontrolled diabetic ketoacidosis it can exceed 20 mmol/L with a concomitant lowering of blood pH. Blood levels of ketone bodies in healthy people do not exceed 8 mmol/L precisely because the central nervous system (CNS) efficiently uses these molecules for energy in place of glucose,” researchers summarize.
It’s time to focus on your lentil health. In one four-week Spanish study, researchers found that eating a calorie-restricted diet that includes four weekly servings of legumes aids weight loss more effectively than an equivalent diet that doesn’t include beans. Those who consumed the legume-rich diet also saw improvements in their “bad” LDL cholesterol levels and systolic blood pressure. To reap the benefits at home, work lentils, chickpeas, peas and beans into your diet throughout the week.
I began eating the Mediterranean “diet” last January. Actually began with the Daniel plan in getting ready for my son’s wedding in June! I was able to successfully lose quite a bit of weight and feel wonderful at the same time! It is now the plan I follow most of the time. I still love a good hamburger and fries; but now for the most part eat Mediterranean style every day! I am grateful that I happen to love the Mediterranean flavors and never feel hungry or deprived! I love the recipes you post and have made many of them! Do you have a cookbook or are you considering putting all your fabulous recipes together in one soon? Thank you for sharing your delicious and healthy recipes!
Because some cancer cells are inefficient in processing ketone bodies for energy, the ketogenic diet has also been suggested as a treatment for cancer. A 2018 review looked at the evidence from preclinical and clinical studies of ketogenic diets in cancer therapy. The clinical studies in humans are typically very small, with some providing weak evidence for anti-tumour effect, particularly for glioblastoma, but in other cancers and studies, no anti-tumour effect was seen. Taken together, results from preclinical studies, albeit sometimes contradictory, tend to support an anti-tumor effect rather than a pro-tumor effect of the KD for most solid cancers.
Hello, I am hoping someone can reach out to me and explain something. My son who is T1D just started the keto diet 4 days ago. At first we were doing great numbers were good, then out of nowhere we are having highs! He is correcting and it’s not bringing him down into normal range. I am going into a panic, I don’t know what to do, or who to ask for help. His doctor would be no help, and thinks the Standard American Diet is fine. I don’t see eye to eye with him. I hope someone can tell me why this might be happening. Thanks in advance for your time!
And it’s not just this study, either. Several other studies have found that keto leaves rodents unable to process carbs, leads to insulin resistance, and, more long-term, causes non-alcoholic fatty liver disease, which is when your liver accumulates lots of fat and begins to shut down. Triglycerides and inflammation go way up, too.
"We recommend against 'dieting', which is invariably a short-term solution," Dr. Gonzalez-Campoy, tells EndocrineWeb, "and since weight loss may be accomplished by a reduction in calories by any means, a ketogenic diet that restricts carbs is simply shifting the calories away from foods that typically demand insulin as in both of these studies.1,2
“Instead of using a heavy salad dressing, try a drizzle of thick balsamic glaze along with a squeeze of fresh lemon or lime juice,” Taub-Dix says. “By cutting the fat in your diet, you can not only save calories, but you can also leave room for healthier fats like avocado or nuts, which are toppings you can actually chew and enjoy with greater satisfaction.”
I was diagnosed in 2004 with Brittle Type 1 diabetes, peripheral and autonomic neuropathy, and Hypothyroidism. A short time later with Gastroparesis due to the nerve damage from diabetes. Since then, I had followed every guideline and rule that the Endocrinologist and Primary Care Doctors had told me to follow. NOTHING WAS GETTING BETTER. In fact, I was gradually getting worse. So many ups and downs. Extreme highs (250-500 bgl) to seizures from crashes (drop from 300 to 13 in no time). It was a constant battle with adjustments in insulin intake (and different insulins NPH, R, Novolog, Humalog, Lantus), carb intake, exercise and one contributing factor was the Gastroparesis. Meds were taken for the Gastroparesis but I always had side effects from meds. To my point. I was kicking a dead horse and I told them this. My sister and mom had come across the ketogenic way of eating and it dramatically improved their lives. Mom was diagnosed way back with Type 2 and within a week or two she was off of her meds completely. I was totally interested. So, I decided to go for it on April 17, 2017. I did go through some rough patches of what they call Keto Flu. It did pass after a couple weeks. I was gaining so much energy like never before as well as mental focus. The even greater aspect of this all was, I had DRAMATICALLY LOWERED MY INSULIN INTAKE TO ALMOST NONE! My Lantus was always being adjusted from 30-40 units daily (and changed from AM to PM to splitting it to half AM, other half PM). I was on a sliding scale of Humalog or Novolog. From 4-6 units per meal and then there were the corrections throughout my day (some daily totals could be up to 40 UNITS)! Very exciting for me to only take 2 units of Lantus in the AM and daily totals of Humalog/Novolog….1.5-3 units! Other great things I began to notice, neuropathy pains were fading and finally GONE. No more nights up stinging, burning and RLS (restless leg syndrome). So, in my life, there are no questions or hardships on whether I can get off of this way of eating. It’s either do or die. If someone truly wants to have a better life, they can. The sad thing is, doctors and nutritionists aren’t being educated in the real facts. My primary care doctor isn’t willing to help me with all the labs I need nor listen. Always telling me “You need carbohydrates and insulin to live.” All that know me see the dramatic change for the better. I’m doing the Ketogenic way of eating with intermittent fasting for the rest of my life. The alternative IS NOT WORTH a lifetime of illnesses and suffering.
I, too, am finding the keto diet to be beneficial. My weight is moving down. My recent A1c was 5.7. I am consistently below 90 each morning when I check my blood. I am learning to adapt my cooking to the needs of maintaining this way of eating. I have incorporated walking because now I FEEL like it. I don’t feel deprived. I feel empowered. No medications for diabetes!
H. Guldbrand, B. Dizdar, B. Bunjaku, T. Lindström, M. Bachrach-Lindström, M. Fredrikson, C. J. Östgren, F. H. Nystrom, “In Type 2 Diabetes, Randomisation to Advice to Follow a Low-carbohydrate Diet Transiently Improves Glycaemic Control Compared with Advice to Follow a Low-fat Diet Producing a Similar Weight Loss,” Diabetologia (2012) 55: 2118. http://link.springer.com/article/10.1007/s00125-012-2567-4.
Klein S, Sheard NF, Pi-Sunyer S, Daly A, Wylie-Rosett J, Kulkarni K, Clark NG. Weight management through lifestyle modification for the prevention and management of type 2 diabetes: rationale and strategies. A statement of the American Diabetes Association, the North American Association for the Study of Obesity, and the American Society for Clinical Nutrition. Am J Clin Nutr. 2004;80:257–263. [PubMed]
While the DASH diet was originally developed as an eating style to help lower blood pressure, it has been found to be a fabulous plan for weight loss. The DASH Diet Weight Loss Solution turbocharges weight loss with a powerful plan based on previously overlooked DASH research. And the new book The DASH Diet Younger You is more pumped up on plants to help you become healthier, lighter, and actually physically younger, from the inside out. It features 14 days of meal plans for vegetarians, and 14 days of plans for meat-eaters, supporting your diet preferences and showing many options on how to put DASH together. It relies on all natural foods, with no artificial additives or sweeteners!
A ketogenic diet is high in fat and low in carbohydrates. It’s called “ketogenic” because people on this diet shift from using glucose (a type of sugar) as their main fuel source to ketone bodies, which are derived from fat. In other words, people on the ketogenic diet can use their bodies’ fat stores as fuel—and this is why many studies show that this diet is superior for sustainable weight loss.
Another weight-loss-friendly substitute to keep in mind is favoring salsa over ketchup. While ketchup typically has around 19 calories and 4 grams of sugar per tablespoon, fresh tomato salsa has about 5 calories per tablespoon, no added sugar, and is packed with nutritious veggies. Tomatoes, for example, are loaded with fat-blasting fiber and vitamin C, a deficiency of which has been associated with increased body fat and larger waists. If you can handle spice, toss some jalapenos in your salsa to rev up your metabolism.
You start each day with a heart healthy breakfast. Your vegetable intake is increased. You find yourself making trips to the farmers market to get a better variety of fresh fruits and veggies. You stop eating processed food. And that’s a big one. Let’s talk about bread for example. Besides price and taste, what is the difference between white and whole grain bread?
Dr. Reynolds reviewed numerous research studies on ketogenic diets,6 and he has found that most studies show that the drop in blood sugar is typically short-term—only lasting during the initial three months or so—but does not last. "So it is very hard to encourage ketogenic diets when we have no evidence that they work over longer periods of time," he tells EndocrineWeb.
You’ll find that in their meals, they emphasize a plant-based eating approach, loaded with vegetables and healthy fats, including olive oil and omega-3 fatty acids from fish. It’s a diet known for being heart-healthy. (1) "This diet is rich in fruits and vegetables, whole grains, seafood, nuts and legumes, and olive oil," says Nancy L. Cohen, PhD, RD, professor of nutrition at the University of Massachusetts in Amherst. On this plan, you’ll limit or avoid red meat, sugary foods, and dairy (though small amounts like yogurt and cheese are eaten).
What is your opinion on the conflicting opinions about whether or not wine is healthy or harmful? It seems there is a daily article touting research that proclaims wine is healthy, alternating with another article about research that indicates that even moderate intake of wine is associated with cancer or dementia. I’m trying to understand all of this conflicting data with the reality/evidence of Mediterranean cultures that include daily intake of wine. Is it the amount drunk that is key?
There’s a large spectrum of where people can fall on a vegetarian diet: For example, vegans consume no animal products, whereas ovo-lacto vegetarians eat both dairy and eggs. The eating style may help with weight loss, suggests a review published in August 2017 in Nutrients, but some vegans and vegetarians may become deficient in specific nutrients, such as calcium, iron, zinc, and vitamin B12, according to an article published in December 2017 in Nutrition, Metabolism and Cardiovascular Diseases. (23,24)
As always, I encourage you to speak to your own doctor about whether or not this diet may be right for you. And, if you decide to go for it, be sure to check in with your doctor regularly to make sure your body is responding well. Those patients who do respond well to the diet will be rewarded with fewer symptoms and may even be able to completely get off of their medications.
DASH stands for "dietary approach to stop hypertension" and was created by the National Institutes of Health (NIH) as a way to help reverse national trends of obesity and heart disease. Scientists combed through decades of research to come up with an expert-backed list of diet tips, along with a prescription for exercise. And it worked: The DASH diet has topped nearly every diet list for nearly a decade. Doctors particularly recommend it for people looking to lower high blood pressure, reverse diabetes, and lower their risk of heart disease. (Here's the basic list of DASH diet-approved foods.)
Enter the DASH diet. When individuals followed this eating plan, researchers saw dramatic reductions in blood pressure levels. Today, the eating plan is recommended for preventing and treating hypertension and heart disease—and it has been linked to decreased bone deterioration, improved insulin sensitivity, and possible risk reduction for some cancers.
A study published in the journal Nutrition & Metabolism discouraged the Atkins diet for anyone with diabetes because the plan doesn’t limit fat, but noted the approach may be a safe way for people without the disease to lose weight effectively. According to a study published in the Journal of the American Medical Association, Dr. Atkins helped women lose weight better than other low-carb diets, such as the Zone diet, the Ornish diet, and the LEARN diet after 12 months.
2- Eat more vegetables, fruits, grains, and legumes. The base of the Mediterranean diet pyramid should make up the base of every meal. When you can, opt for vegetarian entrees like this Cauliflower and Chickpea Stew; Spicy Spinach and Lentil Soup; or Spanakopita (Greek Spinach Pie). Rely more on satisfying, flavor-packed salads to make up a good portion of your plate. Some ideas: Kidney Bean Salad; Mediterranean Chickpea Salad; Greek Salad; Bean and Lentil Salad.
While studies have demonstrated the weight-loss benefits of the diet, Grandl says, several recent studies that apply more rigorous research techniques to study insulin resistance have suggested detrimental effects. The new study, published in August 2018 in the Journal of Physiology, aimed to better understand the basic biological processes that contribute to the development of type 2 diabetes and the early effects of the ketogenic diet. What people eat impacts the release of glucose, or sugar, in the bloodstream. High glucose levels can, over time, cause insulin resistance. Insulin is the substance released in the body to help manage and regulate sugar in the blood at healthy levels.
It also may help stave off chronic diseases, like heart disease and type 2 diabetes, as well as act protectively against certain cancers. (34) The diet is also a boon to mental health, as it’s associated with reduced odds of depression. (34) There’s even some data to suggest it can be supportive in relieving symptoms of arthritis, according to a paper published in April 2018 in the journal Frontiers in Psychology. (35)
WH verdict: It’s still a diet by any other name, but props to Weight Watchers for acknowledging that there’s more to being healthy than ‘weight’. The new platform really does consider all aspects of wellness. And with plans to partner with Alexa and Google Assistant to help track your progress, WW could be to 2019 what Weight Watchers was to the early noughties.
It’s easy to get keto and paleo confused since many of the same foods are encouraged in both diets. The keto diet is specifically crafted as a very low carbohydrate diet to get the body into a state of ketosis. The paleo diet focuses on bringing eating back to the basics and eating like our hunter-gatherer ancestors with less emphasis on where the calories are coming from: carbs, fat or protein. The paleo diet includes lean meats, seafood, seasonal veggies, some nuts and fruit and eliminates grains, dairy, processed foods, and certain oils.
Our science-backed SmartPoints® system guides you to eat more fruits, veggies, and lean protein, while keeping track of foods with added sugar and unhealthy fats. Making smart decisions just got simpler, so you can live your best life. We meet you where you are— this plan works for men, brides, new moms, really anybody looking for inspiration to create healthier habits.
Adequate food records were available for analysis in a proportion of participants at each of the 4 timepoints (Table 2). Participants completed food records at a mean of 2.5 and a median of 3 timepoints. In general, comparing baseline to subsequent timepoints, mean carbohydrate intake decreased substantially and energy intake decreased moderately while protein and fat intake remained fairly constant.
When it comes to condiments, mustard is about as healthy and low cal as it gets, and the pungent yellow stuff that contains about 5 calories per teaspoon has also been found to stimulate weight loss. Scientists at England’s Oxford Polytechnic Institute found that eating just one teaspoon of mustard can boost the metabolism by up to 25 percent for several hours after it’s been consumed. Researchers attribute this to capsaicin and allyl isothiocyanates, phytochemicals that give the mustard its characteristic flavor. So instead of reaching for the sickeningly sweet ketchup, make sure you have mustard on hand at your next BBQ.
The ketogenic diet tries to bring carbohydrates down to less than 5 percent of a person’s daily caloric intake – which means eliminating most grains, fruit, starchy vegetables, legumes and sweets. Instead, it replaces those calories with fat. That fat is turned into ketone bodies, which are an alternative energy source: besides glucose derived from carbohydrates, ketones from fat are the only fuel the brain can use.
Like peanuts, lentils also contain genistein, but their weight loss powers don’t end there. In one four-week Spanish study, researchers found that eating a calorie-restricted diet that also included four weekly servings of legumes aided weight loss more effectively than an equivalent diet sans the pulses. Those who consumed the legume-rich diet also saw improvements in their “bad” LDL cholesterol levels and systolic blood pressure. Next time you’re cooking something starchy for dinner, consider eating fiber and protein-packed lentils instead.
That makes a lot of sense. Keeping up insulin pathways when you aren’t eating carbs would be like keeping the lights on when it’s daytime outside — it’s a waste of energy. You aren’t using insulin on keto, so your body probably downregulates your insulin pathways. As a refresher, insulin is a hormone produced by your pancreas that tells your cells to absorb glucose to use as fuel. When you eat carbs, insulin production begins. In the absence of carbs, there’s less need for insulin.
This is a wealth of information. My husband and I are starting the keto diet tomorrow and I knew nothing about it. When I sat down to look up information about it, I found this. Thank you! This is everything I need to know in one place. We are not as healthy as we’d like to be and I am optimistic this will help us obtain our goals, along with an exercise plan.
In 2008, researchers conducted a 24-week study to determine the effects of a low-carbohydrate diet on people with type 2 diabetes and obesity. At the end of the study, participants who followed the ketogenic diet saw greater improvements in glycemic control and medication reduction compared to those who followed a low-glycemic diet. A study from 2017 found the ketogenic diet outperformed a conventional, low-fat diabetes diet over 32 weeks in regards to weight loss and A1c. A 2013 review reports again that a ketogenic diet can lead to more significant improvements in blood sugar control, A1c, weight loss, and discontinued insulin requirements than other diets.
We recently published an article documenting the grim long-term effects of low-carbohydrate diets, in which we explain the evidence-based research showing that low-carbohydrate diets high in fat and protein including meat, dairy products, eggs, fish, and oil actually worsen diabetes health, increase cancer risk, increase cholesterol, increase atherosclerosis, harden blood vessels, and increase all-cause mortality.
First, I want to thank you for all of your dedication and work in providing this site. The difficulty of maintaining a healthy weight is a big problem for so many people. My personal question & issue in staying on Keto is my craving for fresh fruit. This a.m I had a large fresh peach along with my “Bullet Proof” coffee. Have I now sabotaged today’s Keto eating?
What the diet advocate says: 'The key components of a Mediterranean diet are lots of vegetables, olive oil, oily fish and nuts, with no calorie restrictions. Combine that with cutting down on sugar, which was traditionally a rarity in the region, and you’ve got the base of the Mediterranean diet right. And if you get the base right you can eat a little of whatever else you like,' says Consultant Cardiologist Dr Aseem Malhotra.
I am sorry you had this experience. I feel that this educator was not giving you good advice. All my women who want to lose weight are recommended to consume 30 grams of good carbohydrates at each meal, and 15 at each snack. If you were not trying to lose weight, I would have recommended 45. I find this is all it usually takes to begin to lose some weight as you start to get active. Patients set their own goals with motivational help from their Certified Diabetes Educator. Our intent is never to insult, and you should not have gone through that. It sounds that you have now found the right path. There are many CDEs who could help you, so see what tools and motivation others may offer. I wouldn’t let one bad apple spoil the whole bunch. Many CDEs are also diabetic.
Fairly recently, the diet was introduced as a weight-loss diet by an Italian professor of surgery, Dr. Gianfranco Cappello of Sapienza University in Rome. In his 2012 study, about 19,000 dieters received a high-fat liquid diet via a feeding tube inserted down the nose. The study showed an average weight loss of more than 20 pounds in participants, most of whom kept it off for at least a year. The researchers reported a few minor side effects, like fatigue. | <urn:uuid:90c9c30e-e4f9-4aa0-aef8-f681e42e2634> | CC-MAIN-2021-21 | http://loss-of-weight-allegiance.com/fad-diets-why-the-mediterranean-diet-is-bad.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991801.49/warc/CC-MAIN-20210515100825-20210515130825-00374.warc.gz | en | 0.954132 | 6,994 | 2.921875 | 3 |
A history of La Rocque Tower
Grouville Bay No 1 (La Rocque Tower – Tour de la Rocque)
La Rocque Tower was most likely built in 1779, added to earlier fortifications from the 16th and 17th centuries at Boulevard de la Rocque.
It was erected between the original three platforms of the bulwarks and the St Saviour Guard House, at La Rocque Point, Grouville. This point forms the south-east corner of the Island, and Boulevard de la Rocque commands the southern end of Grouville Bay.
This area is around the corner from La Rocque Harbour and Platte Rocque, which are on the edge of St Clement’s Bay. The battery is shown on the Popinjay Map of 1563 and is referred to in documents from 1587. The Guard House and magazine were built in 1691.
This tower was one of the first four to be built and exhibits some differences from the other towers, possibly because its width was constrained by the available space between the gun platforms and the Guard House. The Guard House had an additional magazine added to the west gable by the parish of St Clement in 1742, when a shore patrol from that parish was added to that from St Saviour.
Towers 0 to 8
It was the first in a sequence of five towers going north, numbered 1 to 5. Because Forts Henry (originally known as Conway) and William were notionally given the numbers 6 and 7, the most northerly tower in the bay was known as No 8. And because, before the French invasion of 1781, there had been no plans for a tower at Platte Rocque, when it was added it was designated as No 0.
Tower No 1 is built on a large outcrop, dominating the surrounding beaches on both sides, possibly referred to in Thomas Le Maistre’s Note Book as La Carière Giffard (Giffard’s quarry), although ‘carrière’ can be confused with ‘charrière’ (which would give ‘Giffard’s cart track’). This is referred to, in translation, as “above the guard house”, suggesting the summit of the outcrop on which the tower was to be built.
The outcrop slopes down to the west, so that the tower appears much taller from the Guard House than it does from other directions. Two large orthostats support the path around the tower on that side. They were thought to be Megalithic remains, but Dr Arthur Mourant later decided that they were not, and that they had been dragged up from the beach during the building of the tower, or possibly of the guard house.
As with the guard house and battery, the tower was manned by the St Saviour Militia, who also formed the shore patrol at La Rocque: a sergeant and eight men (increased to 24 men by 1811).
The Tower featured in the French invasion of 1781, which resulted in the Battle of Jersey, after which Clement Falle, the Chef de Garde at the tower, was imprisoned for dereliction of duty and the rest of the guard bound over. The invasion occurred on Twelfth Night, the highlight of the Christmas season, and the guard were otherwise engaged in the revelries.
A distinguishing feature of this tower is that the entrance is arched (later rendered, but probably in brick). The outside opening has a lesser arch than that of the passage behind it. The other towers have entrances roofed flat, with stepped granite slabs. Tower No 5 has an entranceway roofed by a brick arch, but fronted by a straight granite lintel on the outside. At both these towers this arched passage is fronted above the lintel by a thin screen wall that is a point of weakness. This must have been identified early and resulted in a change in design for the construction of the other towers.
The tower has only one fireplace, on the first floor (the towers usually have two fireplaces, with one on the second floor as well). The magazine is floored with round sea-worn cobbles, the oldest form of flooring seen in the oldest buildings (pre-1780s) at some Jersey farms; the other towers have the oblong dressed setts commonly seen in farmhouses and on pavements.
The magazine consists of two parts: a curtain wall in brick and granite separates off the inner powder magazine, which lies under a brick arch and has another brick screen wall at the back, with two ventilation slits, separated from the outside wall by a concealed brick pillar around which air passes to the central slit in the outside wall. This pillar prevented an attacker outside from firing the powder directly, and gave protection from the weather.
The magazines are consistent between all the towers, although they vary in size and this is one of the medium-to-larger sizes, but with a much smaller anteroom than is usual. The roof of this magazine is much lower than in the other towers and the diameter narrower, as on all the other floors.
The magazine lies directly below and at a slight angle to the fireplace on the first floor, which is located to the west of the entrance. The magazine would have been topped with dressed flagstones in front of the fireplace, forming about a third of the floor area, with the remaining floor made of wooden planks on heavy beams, but the Germans replaced all the floors with concrete.
This magazine still has its original door, which is identical to the few surviving doors at some of the other towers. It appears that this tower was something of a prototype that exposed shortcomings in the design, that were modified in the towers that followed and reinforces the likelihood that this was one of the first four towers built in 1779, if not the first.
The entrance door faces approximately north, at right-angles to the coast. The window openings on the first and second floors at ceiling-level, for light and ventilation, are all aligned with the machicolations. At many of the other towers this is not the case and they can be between the machicolations, sometimes asymmetrically positioned, sometimes with one under a machicolation and the others not.
The musket loops are lined at the sides and bottom with brick, as is the case at many, but not all other towers. The external magazine vent is also lined in brick, as at other towers. The loops on the upper and lower rows are staggered, in relation to each other; this would have strengthened the wall, but would also have increased the field of fire.
The firing-step on the roof is of the highest quality, composed of single curved granite blocks (unlike at some towers, where they are made of caps over rough masonry, or even a double circle of dressed masonry). The coping at the top of the wall is also of high quality, and the carving bears signature characteristics that identify the hand of an individual stone mason and can be compared with those at other towers. The floor of the roof-parapet also exhibits some novel features associated with its drainage, not seen at the other surviving towers.
The tower had an 18-pounder gun on a swivel arm and carriage, with the fulcrum being a pin on a tripod fixed to the iron roof-access collar. This collar also acted as the keystone to the domed brick ceiling below, and the tripod was secured by nuts to three long bolts passed through the thickness of the roof and a circular plate below, secured by another set of nuts. The metal of this collar is quite thin, about a quarter of an inch, the securing plate giving it the illusion of greater thickness; this is the same at most of the towers having access to the roof through a central collar, the exceptions being Archirondel and La Rocco.
Naming of fortifications
Boulevard de La Rocque stretched from the Guard House courtyard northwards to the site of Grenville, and later had a second battery added to the north. This northern battery was moved between two locations: the first just north of the Guard House, the second on or to the north of the future site of Grenville, then moved back to its original location.
This northern battery was designated Boulevard du Nord and the earlier battery as Boulevard du Sud. The tower was called Tour de la Rocque or Tour au boulevard de la Rocque.
The bulwarks and tower have also been referred to from time to time as the tower or boulevard at la Rocque Point. There were two or possibly three occasions when the guard house was identified in contracts as St Sampson’s Guard House.
However, there are rocks to the east and south-east of Seymour Tower and L’Avarison called Les Settes Sampsons and Les Settes Sampson.
The Guard House
As stated above, the Parish of St Saviour was required to provide the shore patrol on this part of the coast and man the batteries and later the tower. Presumably these men and those manning the guns had to bivouac in all weathers, a practice that may have given its name to La Baragonne.
In 1690 the Lieut-Governor decided that Grouville and St Saviour should build a guard house at their common cost in Grouville Bay, for their shore patrols, with gorse to be planted at possible landing points, to provide cover. A year later he changed his mind and decided that each should have their own guard house, St Saviour’s at La Rocque and Grouville’s at Boulevard du Maresq ou François Piroet, in the middle of the coast.
The Constable of St Saviour bought land to the west of the cannon platforms, and a guard room with a magazine was duly built. The design is adapted from domestic dwellings of the time, but with the roof composed of a Gothic vault in rough granite, with extended lines at the top of the front and back walls to support it, as at the parish churches. In addition, the entrance doorways have reinforced external surrounds.
The windows in the guard room are of the small casement type, probably originally shuttered, a design that was in use in previous centuries and was widely superseded around the middle of the following century by longer and larger sash windows. These windows are arched at the top; the square windows in the magazines were put in between the 1930s and ‘50s (most likely during the Occupation).
In 1742 the Guard House was repaired and signs of alterations or repair can be seen at the front by the east gable and inside in the vaulted ceiling. In the same year, the Constable of St Clement was ordered to add another magazine to the building, for which he purchased land from the Constable of St Saviour.
The greater quality of the dressed masonry on this second magazine attests to progress in stonemasons’ skills over the intervening fifty years, as is evident in the quality of stonework on the tower, compared with most later towers.
By 1780 the Constable of St Saviour still had charge of the Guard House, but in 1786 the cost of maintenance was taken over by the States, to be maintained at public expense.
In 1787 the guard house was to be paved, the floors being earth. In 1793 the Guard House was supplied with grates and coal (in all likelihood it was built with a fireplace in 1691, with fuel burnt on the hearth). In 1805, the Defence Committee authorised the Constable of St Saviour to have the Guard House re-roofed and paved.
The Guard House is located to the west of the bulwarks and tower outcrop, at a lower level, but above that of the land to the north. It was built on a deep flat mound composed of sand, ending in embankments north, south and west, and to the east abutting the Tower outcrop.
This mound can be seen from both sides in pre-war photos, and the present garages of the neighbouring flats are built on its southern embankment. The north embankment is a prominent feature; originally the edge would not have been so straight, and it may have included a ramp. A slope up the north-east corner of the Guard House, from the embankment to the battery level, seen in old photos and since replaced with steps, suggests this may have been a ramp for cannon and ammunition, although access to the battery at the foot of the tower may have been from the Guard House courtyard or the cart-track, to the south.
The Cannon Platforms
Boulevard de la Rocque is shown on the Popinjay Map of 1563 and it is mentioned in 1587. In 1692 the States asked the British Government for a supply of cannons. The platforms were repaired in 1734, so that guns could be placed there. In 1736 contracts were placed for the rebuilding of the three boulevards in Grouville Bay, one at each end and one in the centre.
In 1742 the sea damaged the foundations of La Rocque Boulevard, but because of the expense, nothing was done. In 1779 it was ordered that additional platforms should be erected in wood or dressed stone at places where there should be cannon, it having been found that those constructed in rough stone were inadequate for their intended use.
On 20 August 1755 the States ordered the Constables of St Saviour, Grouville and St Clement to repair the boulevard du nord dans la Baye du Vieux Chateau. From this it appears that by then all three parishes were responsible for guarding Grouville Bay.
In 1798 there was further sea damage and landowners participated in building sea walls, which extended north from the tower to the end of the present garden and south to Baragone and around the corner.
The southern section was presumably replaced in the 19th century, but can be seen, in part, supporting a track running past the tower and down to the later seawall, shown in a photo from the 1930s by Emile Guiton, and in the present wall to the north of the Guard House, which supported the north battery. The fact that the northern wall had to be built suggests the north battery already existed, despite its absence from the 1795 Richmond Map; possibly it had collapsed onto the beach at the time of the survey for the map.
Up to 1811 the fortifications of the bulwarks consisted of: the tower, which was a firing platform for muskets, with an 18-pounder gun; the south battery, with three 24-pounders, which had gone into decay by 1806 but was either restored by 1811 or had its guns transferred to the north battery; and the north battery, with two 12-pounders.
At some point between 1908 and 1911 the site of the northern battery was still discernible, but it was lost on the building of Grenville.
Firing lines and south battery wall
A photo taken from Grenville towards the tower in the 1930s shows part of the firing lines, or the south battery wall, that were destroyed during the construction of installations by the Germans during the Occupation and which are shown on the Popinjay and Richmond Maps.
An earlier photo shows these firing lines ending in a substantial dressed edge, which suggests that a sizeable random stone wall enclosed the entire battery from south to north, including the tower within its circuit and ending by the north-east corner of the Guard House.
From the ‘Green Books’ photos taken by the Germans in 1943, it is evident that the original platforms at the foot of the tower were still exposed after they had built their installations. Evidently they later covered them in a foot or more of sand, presumably as camouflage, as evidenced by live target bullets found in the sand. The platforms were rediscovered, with surviving parts of the firing lines and the footings of the battery wall, in 1956.
Another military building?
The Richmond Map shows another building a short distance north of the Guard House, but at right angles to it. This would not have been a good defensive orientation, relative to the coast, but at that time the site was still under military control, so this is likely to have been a store for the north battery, built either in wood or stone and evidently a temporary structure.
Some shallow steps are cut into the outcrop below the original boulevard wall. These appear to be very old and for very small feet, and they may predate the battery, or they had been cut for the shore patrol to access the dunes.
Below the north battery sidewall are substantial steps cut into the rock, forming the edge of the 19th century seawall, but it is not clear whether they predate it or were cut when it was built. The concrete wall forming the side of the 18th century wall features in a photo from the 1930s and dates from before the Occupation, but it replaced granite blocks of the same type as at the front.
Between 1911 and 1915 portions of the site were sold by the Crown into private ownership, at first to Elie Bree of Boulevard Farm. A year or two after each purchase he sold them to Catherine Berrow. She added these to a dune she had bought from him in 1908, that had formed the northern end of Boulevard de La Rocque, to build Grenville in 1908 and 1909.
A condition of the sale by the Crown was that any tower could be required to be painted on the seaward side as a navigation marker for shipping, as some still are. This is still noted in the contracts, whenever one is sold on. Catherine Berrow eventually added the tower, guard house and bulwarks to her garden, reuniting these various elements that had originally formed La Rocque Boulevard.
In the late 1950s Grenville was sold, but the owner retained the historic buildings and the adjacent half of the garden.
In the 1920s the then owner commenced converting the Guard House into a power house. A cable was laid between the Guard House and Grenville, but mains electricity came along the coast road and the project was abandoned.
The property was bought in 1920 by Arthur Whiston Whitehead, who sold it to Dr Charles Albert Bois in 1937. During the Occupation the Germans converted the guardroom into a cook house for the accommodation bunker, now under Le Boulevard flats, with the magazines being used to accommodate the cooks. They widened the fireplace for their field range (removing the north corbel and granite lintel at this time) and restored the chimney for their stove pipe.
A relic of their stay is a painting of a riverine scene, in the magazine that served as their bunk room. Wooden flooring was laid throughout the St Saviour magazine and guardroom, which had started to rot by the 1950s. In the later 1950s the magazines and courtyard outside were paved.
The War Department (WD) stones on the southern boundary are still active and run from WD 3 to 6. Property contracts up to the late 1930s refer to the other WD stones along the road and to the north as still being active. In the late 1950s the owners of Grenville and the flats donated a strip of their land for the widening of the coast road, and WD 3 was set back on the same boundary line and its new position formalised in a contract of the Royal Court.
In 1947 the flagpole on the Tower roof was struck by lightning and shredded. The current blew apart the coping stone in which it was set, passed down the tower into the bedrock and from there through the floor of the Guard House into the abandoned powerhouse cable to Grenville.
On 13 July 1757 there was an earlier lightning strike at Boulevard de la Rocque, described by Thomas Le Maistre in his ‘Note Book’: “Near the Guard House at La Rocque a little boy aged ten years was struck by lightning; he was the son of James Norman of La Ferme, and never moved from the place where he was struck, remaining stiff and dead, without other wound. The horse that he had been driving in a little cart (a bachot) was not hurt. Above the Guard House at a spot called La Carière Giffard some stones were broken into pieces, and the sentry box by St Sampson’s Tower was smashed and several people saw a lot of smoke there.”
During the Second World War, the German Occupying Forces requisitioned most of Grenville’s garden, including the historic buildings, which they designated as La Rocque B, leaving Dr C A Bois and his wife in residence at Grenville, with barbed wire strung across the front of the house, leaving just enough room for access to the front door.
They also established roadblocks at Boulevard Farm and Platte Rocque, with moveable barbed-wire barriers and concrete Tobruk Positions at each end. The other installations at La Rocque B were a roofed searchlight emplacement for guiding fire onto shipping; an outside searchlight table, using the same searchlight, for directing anti-aircraft fire; a small bunker under another Tobruk Position, overlooking the beach; and a concrete slip-trench for rifles and a heavy machine gun, on the slope to the north of the tower entrance, to cover the beach approach to an artillery position in the garden below.
These facilities cut into the old cannon battery platforms, leaving only one of the three untouched, with the other two surviving in part. The portions of the platforms taken up were used to reinforce and level off the coastline, before putting up their installations; the lintel from the Guard House fireplace was similarly used. This work also extinguished most of the firing lines, although part of these were re-discovered buried underground in the 1980s.
A 10.5 cm field piece was placed in Grenville’s garden on the first and final site of Boulevard du Nord, at the northern end of the 18th century sea and battery wall, with a covered slip-trench leading towards the tower.
This corridor was pre-fabricated from slabs made by slave labour in Alderney, and these slabs bear work-gang identification marks. The Germans also cut through into the Tower magazine at ground level. They took out the wooden floors and replaced them at their original levels with concrete, rendered the inside walls, discarded the cannon tripod on the roof and installed a heavy machine gun for anti-aircraft fire. Another field piece was located by the seawall, overlooking the beach, by the accommodation bunker, in an unroofed, walled embrasure open at the front.
Fired in anger
In the period for which they were built, these towers never saw action, but at least one has fired in anger, ironically by the enemy during the Occupation of 1940-45. The neighbouring tower at Platte Rocque (Grouville No 0) fired on an American aircraft returning from a bombing raid on St Malo. As the Americans had a few bombs left over, they decided to destroy the tower, but missed and hit private houses behind it. One of the other towers (possibly La Rocco) may have fired on passing Allied aircraft as well. So it cannot be said these towers never fired in anger; it is just that this was not by those intended. The battery at Platte Rocque also saw action at the Battle of Jersey, when the French who took the battery fired on approaching British troops and Militia, during their attack to retake it.
Notes and references
- ↑ On one occasion the tower was so identified in Thomas Le Maistre’s notes, which most likely proceeded from a clerical error, a mis-transcription of St Saviour’s guard house.
- ↑ Or Samson, without any reference to a saint, and so possibly designating the vraicing or other rights of a local family, but with other more prominent named rocks on the foreshore much closer to the tower, such as L’Etac du Nord (La Tas du Nord), L’Etac du Sud (Tas du Sud), La Baragonne and others (Draft Survey Booklet, Jersey Rock and Coastal Names Survey, Société Jersiaise, p 19). This led to a notion that the Guard House was a converted Dark Age chapel dedicated to St Sampson of Dol, who converted Guernsey to Christianity and took part in the Breton invasion of Armorica; then, when the building was shown to have been based on a typical domestic dwelling of the 17th century, this attribution was moved to a second building shown on the Richmond Map, close to the Guard House. The citations given to support this idea have proved to be groundless and it is without foundation, an idea based on the name of a rock, that progressed from a notion to assumed fact.
- ↑ Unless this application came from fishermen’s huts, for which baraque is commonly used, sounding in Jèrriais (with its long ‘a’) somewhat like ‘boroque’. All of these terms come from the same Catalan source for fishermen’s, workmen’s or soldiers’ huts, tool sheds or camps, and are also the origin of the English ‘barracks’.
- ↑ These may date from later alterations or from a lightning strike of 1757.
- ↑ One of the rare occasions the building was referred to as the guard house and magazine of St Sampson, the other being in the contract of 1742 between the Constables of St Saviour and St Clement.
- ↑ The magazines still had earthen floors into the 1950s.
- ↑ This re-roofing probably refers to the pantiles over the granite vault and not to the vault itself. If the guard room had been paved in 1787, it is possible the paving was of the same type as the rounded cobbles seen in the Tower magazine, and that it was replaced with dressed granite setts in 1805.
- ↑ The east gable is built on this outcrop, the rest of the building being on sand, which may have resulted in historic cracks in the stone vault by the fireplace.
- ↑ Suggested, although uncertain, in some 19th century photos.
- ↑ Possibly to replace earlier ordnance.
- ↑ These were: Boulevard du Nord (le Boulevar pres Gorey, Goré [BSJ 1908, p 315]), near Gorey; Boulevard du Milieu (Middle Battery), just north of where tower No 5 was to be built, dismantled in 1816; and Boulevard de la Rocque, at La Rocque Point.
- ↑ The location of Boulevart du Maresq ou de François Piroet (also the location of Grouville guard house or maison de guet) is unknown, but was also somewhere on the middle of this coast and it should be noted that marais and maresq are variations of the same word, signifying marsh or wetland. This boulevard may have been named for the vingtaine in which it stood, and the surrounding marsh, rather than for the family name Dumaresq.
- ↑ In 1745, sand incommoding the boulevards was to be removed, so evidently they were still in use, nominally at least.
- ↑ The rough stone to the south of the present dressed platforms may be evidence of earlier platforms in rough stone, or it may be infill added later.
- ↑ Later a farm was built across the road, on the dunes inland, by the water meadows, and it then acquired the dunes to the south of the tower, towards La Baragonne. This farm is shown on the Richmond and Godfray Maps, and took its name, Le Boulevard, from Boulevard de la Rocque. During the Occupation this farm was requisitioned and demolished by the Germans. After the Occupation, the company that bought the farm site developed the flats over the bunker built by the Germans, and the detached houses over the former site of the farm; it took its name from the farm and used it for the block of flats, which further complicates the matter of site identification.
- ↑ Presumably, this was the north battery at Boulevard de la Rocque and not the boulevard of the same name at Gorey.
- ↑ To the start of the present later and lower 19th century sea wall, running up the bay.
- ↑ This was presumably the remains of its northern location and not that supported by the 18th century wall, which was its last active location. This section of the 18th century wall, which may have had the dual purpose of a seawall and battery wall, sits on a rocky shelf rising from the sand. The whole edge of the Tower outcrop on this side may have been heavily quarried and remodelled when the original boulevard was first built on top of it, and later, during the building of the Guard House and the Tower, for both defensive purposes and building material, but it is possible the north battery was originally on an earth rampart at a lower level and is buried behind this wall, perhaps as a hurdle and sod embankment or of random stone, although this is pure conjecture. De Rullecourt’s reconnaissance assumed that the aforementioned seawall was a fortress wall, which supports the possibility this section of the sea wall had a dual purpose.
- ↑ On the latter, the firing lines extend along the course of the later seawall, constructed shortly after this map was published, and slightly beyond it.
- ↑ This edge is at a lower level than expected and so may be part of an unrelated structure.
- ↑ As this was before the 18th century seawall had been built, and it is not known whether the land surface sloped down towards the beach from the Guard House embankment, the level at which this other building stood is unknown.
- ↑ WD No 6 is inscribed “To Sea”, indicating access across the rocks and dunes to the south, although that inscription is centuries later than the steps. These steps may be the oldest extant feature on this site.
- ↑ The line of the earlier sidewall was angled outwards, landwards, and extended inland slightly. It is aligned into the bay and so undoubtedly was the front for one of the two (and at one time three) cannons on this battery, the front of the wall facing the Roads to Gorey Harbour. These steps may pre-date the 18th century wall and the lowest steps were most likely added more recently, with the sand level on the beach being higher than it is now. Given the angle of the former side wall, it is possible the top step would have been partially under it and so possibly they were cut much later (after the wall was realigned).
- ↑ It is not clear if the adjacent rock shelf, on which the 18th century wall stands, is the stump of a quarry step at the edge of the outcrop, originally reaching to the same height as the rest of the outcrop, or is a natural shelf at its original level. If the result of substantial quarrying, it would have pre-dated any military use and had it been left unquarried, would have rendered the need for the 18th century wall unnecessary, but most likely the outcrop sloped down to a natural shelf, at this point, topped by an accumulation of dunes.
- ↑ Evidently Boulevard de La Rocque had expanded over time, through the acquisition of a number of parcels of land that the British Government sold off. Earlier the whole site had passed from the parish of St Saviour to the States and, when the latter neglected the installations, to the War Department of the British Government), as with all the minor fortifications on the south and east coasts. This is why those fortifications have largely survived, with most now being in private ownership.
- ↑ Occasionally spelt Barrow, the wife of Edgar Courtney Carvolth
- ↑ The present listed site extends from the line of WD boundary stones on the southern boundary, to the bottom of the Guard House northern embankment.
- ↑ This is most likely when the 17th century chimney stack was taken down, and possibly when the present granite setts were installed in the guard room, if they did not date from 1805. The north corbel of the fireplace and the chimney breast were later removed by the Germans and only the south corbel remains.
- ↑ Some flat rough granite cobbles have been found below the soil underlying the paving. The fact that there is a step down into the Guard House suggests this may be the original paving for the courtyard, although this may have been laid in the 1950s to stabilise the ground before laying the paving (a common practice). When an electricity cable was laid through the entrance in the 1990s, a musket flint was found not far under the present paving, but more or less level with a line of dressed granite under the present threshold into the building. This would have been dropped by a Militiaman before 1800, for which, apparently, he would have been fined.
- ↑ WD 1 still exists, but appears to be inactive and may not be in its original position.
- ↑ It blew out all the electric cables and water and other pipes in Grenville. In the aftermath, the offending power-cable was removed.
- ↑ “Above the Guard House” could be interpreted in three ways; either it means that the Carière Giffard is located above the Guard House, being the top of the battery outcrop, between the battery and the building (although this would depend on the absence of any tower), or it is ‘above’ on the map, either to the north or inland (for which the topology is wrong), or he meant “... . Above, the Guard House at a spot called ...”, in other words, above the beach where the boy was killed, at the Guard House on Carière Giffard. It would all depend on the punctuation, if none had been used, but should have been. The significance of this incident and this detail, is in the information it provides on the use and topology of the site, before the present tower was built.
- ↑ It is notable that the author identifies the tower as “St Samson’s Tower” but the guard house only as “the Guard House at La Rocque”. This is curious, as the Tower had not yet been built, and it could be the author, if he was writing sometime after the event, did not know this, but this raises the possibility that the tower built in 1779 replaced an earlier tower on the same site. The term translated as ‘sentry box’ is gueritte. During the same storm, three men were thrown by a lightning strike from the top of a rock called La Petite Doguerie (Jean Giffard, Francois Filleul and another) and “La Houbie” (Hougue Bie) was struck (see Bul SJ 1967. An Eighteenth Century Diary, Thomas Le Maistre’s Note Book, by Joan Stevens pp. 251-252).
- ↑ They added a number of single campaign structures in concrete, well below the lowest quality of concrete construction, and probably raised by the troops stationed there.
- ↑ A ‘Tobruk Position’ is a static concrete structure surmounted by a tank turret, some of which were captured by the Germans at Tobruk. In this case, they were from French Renault tanks
- ↑ Grouville Green Book: La Rocque B 10.5 cm Kanone Vom Turmgesehen: 10.5cm cannon seen from the Tower
- ↑ Formerly accessed from above, through a trap-door inside the main entrance on the first floor. They did this at a number of towers they requisitioned, including Ouaisné, which they abandoned half done. Fortunately, they did not cut through the magazine’s back wall (as they did at some other towers), leaving it intact, but giving access to resurrect its use as an ammunition store.
- ↑ This was roofed over by Harold Le Seelleur in the 1950s, when he built the block of flats.
- Data sheets on the Tower, Platforms and Guard House (F de L Bois)
- Grouville Parish Treasury (notes by F de L Bois)
- Bulletins of the Société Jersiaise (various authors)
- The Coastal Towers of Jersey (William Davies)
- Jersey Militia – A History (David Dorgan)
- Jersey Place Names (Stevens, Arthur and Stevens)
- Guard House, La Rocque, site booklet (Giles Bois)
- Supplementary booklets 1 and 2, submission to Planning for SSI notice of intent to List, The Guard House and La Rocque Tower - GR0089. Title: The origin of speculation on a “supposed” St Sampson’s Chapel at la Rocque Point (Giles Bois). | <urn:uuid:0f5a65c9-3056-43e4-87dc-9a2fba6c43a9> | CC-MAIN-2021-21 | https://theislandwiki.org/index.php/History_of_La_Rocque_Tower | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00216.warc.gz | en | 0.976348 | 7,759 | 3.25 | 3 |
As early human settlements developed and were able to support non-agricultural specialization, a method was required to record basic information. Typically this occurred within the frameworks of religion and government.
Records were kept of inventory on tablets. Strokes for the number with pictures ('pictograms') of an animal or object.
Early civilisations were often sited near principal rivers and the indigenous reeds frequently used to mark clay tablets. A common approach was to cut the stem ends to wedges of different sizes to mark or produce indentations, as in cuneiform.
The next phase in linguistic development was the use of symbols to also depict sounds. Our Latin alphabet is a very straightforward implementation. The delay in the ability of western culture to read the 'lost' hieroglyphic script was, in part, due to the presumption that hieroglyphs, because of their quantity (more than 700 in frequent use and 2400 in total) must solely refer to objects (one of the first, ancient textbooks on hieroglyphs was completely mistaken in this belief).
The realisation that they were partly phonetic was made by the Frenchman, Champollion, after he studied the Rosetta Stone (an engraving of the edict of a pharaoh written in three languages, that included hieroglyphs).
His clue was the requirement for the Egyptians to write the Greek name, Cleopatra and which had to be spelt phonetically.
Whereas the English alphabet has 26 letters which are used phonetically, the Egyptians used symbols as both pictograms and phonograms.
Initially say, a picture of a rectangle was used to represent a house (prounounced 'per'). Later, the symbol was used to write part of another word that involved that sound.
The usual example here is if you need to write the word 'belief'. You could draw a picture of a bee and then a leaf = bee leaf.
The formal name for this is the rebus principle (rebus, Latin plural res, thing). Incidentally, the Egyptians thought that bees were the tears of the god Ra.
So, the language has a core alphabet of 30 symbols used phonetically together with a large number of pictograms (there are also phonetic symbols that represent not just a single consonant but two or three, bilaterals and trilaterals. An example is the heart symbol, a trilateral representing the three consonants n + f + r, pronounced nefer).
Note: The term pictogram is often confused with ideogram. Essentially they may be used interchangably but strictly pictograms are earlier symbols that have a likeness to the object they represent, ideograms have an abstract or conventional meaning and may not visually resemble the item or concept.
One other reason that it took so long to translate the hieroglyphs was that in writing their language the Egyptians only wrote the consonants and omitted the vowels (an unpointed language). Try, if you will, to translate 'th ct jmpd vr th mt', the cat jumped over the mat. Today when pronouncing glyphs it is typical to insert an 'e' for the omitted vowels as the actual proununciation has been lost (for those who need a quick definition of vowels and consonants, when you speak you expel air, if you also vibrate your vocal cords you are producing a vowel, if you obstruct the airflow, say with your tongue, you are producing a consonant).
Why omit the vowels? No, not laziness! In common with other Semitic languages such as Hebrew and Arabic, in Egyptian, vowels are pronounced differently depending on their context. So only the consonants were thought important.
One consequence of this is that many words had the same spelling. To clarify such situations, the Egyptians added another, silent, symbol at the end to indicate the intended meaning, a 'determinative'.
The use of the determinative is very common and is actually quite intuitive. Usually a fairly obvious symbol is used.
(walking legs) symbol appears at the end of a group, you would be correct in surmising a connection with that action. To go, come, walk, run or enter are a few of the words it ends.
each designate the occupations of men or women or I, me or my.
So we have
for brother and
for sister (Also note that the loaf of bread symbol
pronounced t, is the feminine ending in Egyptian).
Returning to the rectangle symbol
By itself it is the sound (phonogram) h
However, if combined with the stroke determinative
the representation is the object (house).
This is an important general rule, if you see the stroke determinative, an object not a sound is being represented. Perhaps the most common determinative was the sparrow. It is used to determine over 500 words connected with evil, pain and misery!
After Champollion published his 'Precis du Systeme hieroglyphique' in 1823 the script became accessible to the world for the first time in over 1500 years. The mystique associated with the symbols lessened. It had been such that the Romany people of Europe in medieval times, who had been viewed as having occult knowledge, were known as gypsies (a contraction of Egyptians) and the terms alchemy and chemistry (derived from the Arabic 'that of kem'; and 'kem', Kem being an ancient name for Egypt).
Hieroglyphs were in use for over three thousand years. In part, the longevity of the Egyptian civilisation was imbued by the Nile, but whereas a written language in any culture has a tendency to change, hieroglyphs exhibit an unusual stability. One reason is that hieroglyphs were considered the language of the gods (hieroglyphs Greek, sacred writing) or more precisely, from Egyptian, 'the God's Words'.
However their civilisation never developed printing. In later eras, as the West harnessed technology and books became common, it was practical for knowledge to be easily recorded. Literacy (helped by a simple 26 character alphbet) became common. Science and industry developed..
In Egypt it took years to learn the symbols, and the skill was limited to a small elite, scribes (some psychologists contend that those languages with large numbers of characters that require many years of study inhibit creative development).
Hieroglyphic writing appeared in Egypt in about 3150 BC and was used until the Graeco-Roman period (the last known texts, found at Philae, date from the fourth century AD).
The chronology of ancient Egypt may be divided into 10 periods containing in total thirty one dynasties:
Fourth millennium B.C.
First to third dynasties
Fourth to sixth dynasties
||Seventh to tenth
The hieroglyphs were mainly used for formal engraving and their aesthetic appearance was therefore important. There are no gaps between words and they are written from left to right, right to left or top to bottom. When looking at an engraving (a stelae), the orientation is gleaned from looking for a person or animal. If facing to the right, the glyphs are read from the right to the left. The symbols are always read from top to bottom within that semantic, so in figure 1 below of Cleopatra's Cartouche the interpretation is from left to right beginning with the 'chair' symbol.
Figure 1. Cleopatra's Cartouche
The Rosetta stone was written in Greek, hieratic and hieroglyphic script. For daily use, it was time consuming to write hieroglyphs and the Egyptians used a cursive (joined) writing called Hieratic (Greek, hieratikos priestly). Towards the late 7th century BC, when the administrative centre for the country moved from upper to lower Egypt, hieratic was replaced by the demotic (popular) script. It, in turn, evolved under Greek influence, into Coptic.
As mentioned, Champollion's clue was the need to write the Greek name, Cleopatra, which had no Egyptian equivalent, and had to be spelt phonetically:
Figure 3. Cleopatra's Cartouche
On beginning to study hieroglyphs, a good starting point is the pharaohic names. The reason? It was the practice to write the name of the pharaoh inside a coil of rope, a cartouche (from the French soldiers who thought it resembled a bullet, the Egyptians termed it shenu). It represents the circle of life (probably from a mystical symbol called the Girdle of Isis, a cord around the waist, tied in a mystical knot).
It is the natural initial location when attempting to decipher an inscription.
Each pharaoh had five names. The early representation used for the Pharaoh was the 'Horus' name and consisted of the Horus falcon perched on a rectangle representing the Pharaoh as an earthly incarnation of the great sky-god. The rectangle was called a serekh and represented the Pharaoh's great house.
The two most important names were drawn within a cartouche, the nomen (family name) and the prenomen (coronation name). It is interesting to note that in Egypt royal succession followed the maternal side (the practical benefit that, in that period, the mother of a child was always known, had to be set against the resultant intrigues of court).
As we refer to our leaders with royal epithets such as 'royal highness', the Egyptians used a number of formal addresses for the pharaohs. These often appear before the prenomen and nomen cartouches. The most common consisted of the statement of rule over the two divisions of Egypt, upper and lower and consisted of the sedge plant and bee symbols,
The term Upper Egypt refers to the northern part where the Nile forms a delta, Lower Egypt is the southern part of the country (the Nile flows south to north).
The other common title was 'son of Ra':
You will encounter repetition of certain epithets in many inscriptions that will simplify your initial readings:
Names in Egypt were often linked to that of a deity. Rameses, a royal variant of Ramose was the conjoin Ra plus Mose (born of). Simple adjectives were also used, nefer (good or beautiful) or again conjoined: mutnefert (beautiful as Mut).
If we examine the stela, left, from Karnak, at the top left we have the Horus name, the rectangular serekh surmounted by a falcon. Next is the prenomen preceded by the sedge and bee
The symbols below translate, 'beloved of Amon-Ra, chief over the two lands'.
On the right side top are the vulture and snake goddesses Nekhebt and Wadjet. Each is depicted on the basket sign (Lord or Lady). Together known as the 'Two Ladies', the pharaoh invoked their protection as the female balance to Horus and Seth.
The falcon above the gold necklace is called the golden Horus name and is representative of the unchanging nature of kingship.
The nomen cartouche is, as usual, preceded by title 'son of Ra'
The symbols below that translate as 'given all life, stability and dominion, and all health'
The final Ankh with snake group at the bottom of both columns means 'living forever'.
Figure 4 The five great names of Senusert I
Returning to the types of symbols. Champollion identified a core of phonetic elements of the script, some thirty symbols that maybe used to represent sounds, phonograms:
It was mentioned earlier that the vowels were not written, so how is it that some are included in figure 5? The reason is that these are modern approximations
To copy glyphs is somewhat time consuming and a frequent practice is to use the above phonetic symbols as a 'shorthand'. The method of using the alphabet of one language to represent sounds in another is known as transliteration. A few additional characters are used to represent other sounds. An example is the cobra glyph, represented with a d
Ideograms and phonograms
In addition to the single consonants listed above, there are symbols that represent double or triple consonants (bilaterals and trilaterals).
There are about 130 bilaterals but just a handful are commonly used. Again, a bilateral or trilateral can often also represent an object.
Glyph Object Meaning Transliteration
Eye, do, make, ir
Face, upon, on account of, hr
Water pots, foremost, hnt
Heart, beauty, nfr
If a single vertical stroke determinative is seen under a symbol, it represents a thing or concept not a sound.
The fourth category of hieroglyph, the phonetic complement, applies to the bilateral and trilateral symbols. It is used to clarify the pronunciation of the omitted vowel.
The sedge plant glyph mentioned earlier is the bilateral sw, pronounced sew. If it is followed by the U symbol it could be read sw-w but it is still read sw but the pronunciation will have a different vowel sound.
The symbols for number are:
Figure 6. Numerics
The standard classification is the one set out by Gardiner. It allocates each sign to one of 26 categories (A-I, K-Z, Aa) and numbers each within that category.
sign is denominated O4, the
A Man and his occupations
B Woman and her occupations
C Anthropomorphic deities
D Parts of the human body
F Parts of mammals
H Parts of birds
I Amphibious animals, reptiles, etc
K Fishes and parts of fishes
L Invertebrata and lesser animals
M Trees and plants
N Sky, Earth, Water
O Buildings, parts of buildings, etc
P Ships and parts of ships
Q Domestic and Funerary furniture
R Temple furniture and sacred emblems
S Crowns, dress, staves, etc
T Warfare, hunting, butchery
U Agriculture, crafts and professions
V Rope, fibre, baskets, bags, etc
W Vessels of stone and earthenware
X Loaves and cakes
Y Writings, games, music
Z Strokes, signs derived from Hieratic, geometrical figures
The sign-list detailed in his 'Egyptian Grammar' is quite extensive, a little over 100 pages (the grammar is 437 pages, the sign list 100 and the Egyptian-English and English-Egyptian vocabularies 80 pages).
Sir Alan Gardiner's canonic list is of those in use in the 'classic' Middle period and totals about 700. If a list were made of symbols from all periods (especially the Ptolemaic, when many were added) it would run to seven thousand.
The advent of 'theoretical grammars' in Linguistics has given a depth to our understanding of how native speakers produce their language. The 'generative grammar' developed by Noam Chomsky suggests that phrases may be analysed in terms of a tree diagram:
The phrase 'the cat bit the mouse' would have the noun phrase 'the cat' with the verb phrase 'bit the mouse' which in turn had the division of verb 'bit' and noun phrase 'the mouse' (the five sentence elements often suggested are: subject, verb, object, complement and adverbial).
Generative linguists hold that all are born with a universal innate linguistic knowledge in terms of rules.
As we learn a specific language we set 'parameters' (these are the differences which exist between different languages, say word order) to 'fine tune' our production.
It is further suggested that all sentences are composed of two levels of structure, deep structure and surface structure. It would seem to explain certain ambiguities, say: 'the chicken is ready to eat'. Here the two meanings are due to the two deep structures having the same surface structure.
The development of generative theory has produced several models with rule sets of how noun, verb and other sentence elements may be combined.
It would be interesting to investigate whether their application to modern language is also valid for the hieroglyphs (so implying that ancient Egyptian innate linguistic knowledge was the same as ours).
Egyptian lore has it that the hieroglyphs were given to them by the ibis headed divine scribe Thoth.
The hieroglyphs were considered to represent more than just language but in certain situations the essence of a person or thing. As long as they were read, the item had existence. The practice of removing the hieroglyphs of a person was considered the obliteration of their continued existence.
The burial chambers in pyramids were 'decorated' with hieroglyphs detailing spells and prayers to aid the deceased in their journey to the next life, dubbed 'pyramid texts' by Egyptologists (the most complete are found inside the pyramids of the kings of the Fifth and Sixth Dynasties). Later they appeared on the mummies and other items within the burial chambers of nobles who had sought to use them. These were known as 'coffin texts'. Finally, when written on papyrus scrolls they took the generic name 'The book of the Dead'. The most complete belong to the Ptolemaic period, and contain about 150 spells (not chapters). It is also helpful to be aware that the Pyramid texts consist, in the main, of a list of items to accompany the deceased and can therefore be easily translated, in part, by breaking them down into the inventory of standard items.
In addition to the above, there are five theological works of some importance:
Book of what is in the Netherworld
Describes the underground regions visited by the Sun god on his daily nocturnal journey (also known as the Am Duat
Book of Gates
Descriptions of the underworld
Book of Caverns
Descriptions of the underworld
Litany of the Sun
Description of the destruction of mankind by Ra.
The gods of a nation tend to evolve to represent
their cultural sophistication.
The Egyptian gods were initially local deities that were consolidated or merged
much like modern companies. Alas not the romantic, mysterious objects conceived
of by many western readers. Initially each city had it's own creation mythology. Hermopolis had eight primeval deities, Memphis had Ptah crafting the world but perhaps the most influential were the creator-god Atum and the falcun headed sun god Ra of Heliopolis.
Late in the dynastic time-frame, the
Egyptian gods were incorporated into the Greek and Roman pantheons. Towards the end of the pharaohic period the monotheistic belief systems of Christianity, Judaism and
Islam supplanted the old gods. Finally, with the rush of science, for many,
religion ceased to be the best rationalization. Ironically, the philosophic tenant of
'burden of proof' now drives science to furnish the huge amount of explanation a
deity may need.
The Egyptian pantheon may economically be
Many myths surround the above, and vary with geographical location and period.
One of the oldest, Amon (god of Thebes, a fertility deity), was 'joined' with Ra (the
sun god of Heliopolis) to form Amon-Ra.
Bastet, was a daughter of Ra (sometimes
said to be his sister and consort) whose cult originated in Bubastis, the
capital of a province of lower Egypt. Hathor, another daughter was thought to
have been the 'eye of Ra'.
Anubis, the son of Ra
(although later said to be the child of Osiris and Nephthys)
supervised the burial of Osiris and so became associated with funeral rites. He
is also said to have assisted in the judgement of the dead and is depicted with the head of
Isis was the Egyptian mother goddess who was
worshipped for more than 3000 years. Her cult later passed much of its imagery
to the virgin Mary. Isis resurrected her brother Osiris after he was killed by
his brother Seth. Osiris became god of the dead.
Horus was the son of Isis and Osiris, born after Osiris retired to the
Horus is depicted with the head of a falcon.
Philip Ardagh The Hieroglyphs Handbook, London, 1999
Maria Carmela Betro Hieroglyphics The Writings of Ancient Egypt, New York, 1996
Ronald L. Bonewitz Hieroglyphics, London, 2001
Gardiner A.H Egyptian grammar, Oxford, 1957
Angela McDonald Write your own Egyptian Hieroglyphs, London, 2007
Barbara Mertz Temples, Tombs and Hieroglyphs, New York, 1964
Andrew Robinson The Story of Writing, London, 1995
Barbara Watson Introducing Egyptian Hieroglyphs, Edinburgh, 1993
Hilary Wilson Understanding Hieroglyphs, London, 1995
© 2020 C.I. Burkinshaw | <urn:uuid:5394472a-e2d7-4d0a-9c60-5ed2aa5cf3da> | CC-MAIN-2021-21 | http://www.psifer.com/hier.htm | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989916.34/warc/CC-MAIN-20210513111525-20210513141525-00136.warc.gz | en | 0.951521 | 4,403 | 3.9375 | 4 |
Method and a device for sealing and/or securing a borehole
The invention relates to a method and a device for sealing a borehole from which oil discharges uncontrolled. Thereby, a hull is filled with cement and the cement mass is lowered around a cone-shaped pipe so that the cone-shaped pipe is simultaneously fixated and also sealed.
This application claims priority from German Patent Application No. 10 2010 024 496.1-24, filed Jun. 21, 2010, and German Patent Application No. 10 2010 027 530.1, filed Jul. 16, 2010, all of which being hereby incorporated by reference in their entirety.FIELD OF THE INVENTION
The invention relates to a method for sealing and/or securing a borehole under the water surface and a device for closing and/or securing a borehole under a water surface.BACKGROUND OF THE INVENTION
Reliance on so-called offshore techniques is increasing when extracting natural resources from the earth, and as a consequence, deeper and deeper sources of crude oil are being exploited by deep sea drilling, sometimes at depths far greater than 1,000 m.
In particular, the events on the drilling platform “Deepwater Horizon” have shown that this type of deep sea drilling is connected with high levels of danger and that it is difficult to seal boreholes from which a fluid, in particular, crude oil, discharges uncontrolled. For one, the great depth makes the sealing work generally difficult. For another, the oil often gushes out of the borehole at high pressure, so that it is not even possible to seal the borehole in one step even with a massive dome.
As a rule, known methods for sealing boreholes are, as described, for example, in European patent specification EP 1067758 B1, are designed for sealing ordinarily formed test boreholes and not suitable for sealing a larger hole from which crude oil gushes uncontrolled.
Keeping larger devices on hand with which even large holes can be sealed securely is hardly possible, because of the required size for such, for example, a dome, and would incur immense costs.OBJECTIVE OF THE INVENTION
The present invention is based on the objective of providing a simple, cost-effective method with which even larger boreholes from which crude oil gushes uncontrolled can be securely sealed or secured.SUMMARY OF THE INVENTION
The invention is solved already by a method for sealing a borehole under the water surface and by a device for sealing and/or securing a borehole according to one of the independent claims.
For one, the invention relates to a method for sealing and/or securing an opening under the water surface from which fluid gushes uncontrolled, in particular crude oil.
Preferably, the invention is used for sealing and/or securing boreholes in oil production. But an application for other leakages, for example, in oil and gas lines is conceivable. The invention can be used even for gas that is discharging from boreholes, in particular natural gas. Further, even use for prevention, or use in drilling in which the invention can be retrofitted and/or used as equipment and/or complement in new drilling operations. In particular, use in the area of producing gas hydrates is conceivable.
To the extent the term borehole is used in the following for the sake of simplicity, hereby, within the scope of the invention, any opening under the water surface is meant, i.e. also in the form of a defective pipe, etc.
In particular, the method according to the invention is suitable for sealing ordinary boreholes, which are made by an offshore platform for extracting the crude oil from the oil field.
According to the method, first a ship in which a vertical pipe extends through the bottom of the hull is placed above the borehole. Within the scope of the invention, the term ship refers to any floatable body. In particular, floating cranes and platforms, as well as ordinary hulls that include, as a rule, also a keel, can be used. It is also conceivable that the ship consists of a floatable and a non-floatable part. Thus, for example, a non-floatable container can be positioned above the borehole by a catamaran. The ship can also be described as a submersible body.
It is especially provided that older, scrapped hulls are used to reduce costs, as will be explained in more detail in the following.
A vertical pipe extends through the bottom of the ship, or a substantially vertical pipe, or a preferably vertical pipe that is open at the bottom, and which is designed to be placed over the borehole from which crude oil gushes uncontrolled.
Within the scope of the invention, any device that is at least tubular in sections is meant by pipe, for example, also an inverse funnel, a bell-shaped device, etc.
The vertical pipe is a pipe that extends horizontal to the bottom on which the ship is to be placed. The pipe is vertically configured in such a way that, in particular, the oil can be conveyed through the pipe from the bottom to the top. The pipe can also be mounted inclined to the bottom or to the plumb line. In particular, vertical pipe also means a pipe that extends from the underside of the ship through the hull to the upper side of the ship, for example from the keel to the deck.
The ship or a part of the ship is filled with a hardenable mass. In particular, it is provided that the ship is filled with liquid cement. Preferably, fibers or wires have been added to this cement for reinforcement.
In an alternative embodiment of the method, it is also conceivable that the ship is first filled with a dry mass, for example, dry cement and a cement-water mixture is prepared in the hull on location, for example, by introducing sea water. This design variant has the advantage that here, no additional ships must be provided for filling the liquid cement mass.
On the other hand, for this, mixing devices such as, for example, stirrers must, as a rule, be available in the hull, in order to create a sufficiently homogeneous mass.
The ship, or the detachable part of the ship, is submerged above the borehole in such a way that the vertical pipe sits on the borehole. To do so, the vertical pipe preferably includes a cone in the lower section, through which the diameter of the pipe is enlarged in the lower section, so that even larger boreholes are covered. The sinking can, as it is provided in one embodiment of the invention, simply occur thereby, that the ship is loaded with the hardenable mass in such a way, that it no longer has sufficient floatation.
But it is also conceivable that the ship is sunk otherwise, for example, by a number of buoys.
Then the hardenable mass is distributed around the pipe.
The invention is based on the knowledge that in this manner, in a very easy way, an enormous amount of a hardenable mass with great weight and volume surrounds the pipe. As a result of the preferably cone-shaped design and, as it is provided in a further embodiment of the invention, a collar, the hardenable mass engages in a secure, form-fitting connection with the pipe when hardening.
Simultaneously, depending on the ground soil, a hardenable mass such as cement also connects with the ground, so that the pipe, in addition to the mere weight, is additionally connected by material engagement and/or form-fit with the ground.
The invention is further based on the knowledge that only after partial hardening of the hardenable mass, in particular, the cement, the pipe can be sealed and/or secured and now also withstands the enormous pressure of the crude oil.
Inter alia, the invention is also based on the knowledge that sealing the pipe or opening it is not even possible in many cases, as the pressure is so high that a channel immediately forms in the sea bottom, for example, which also flows around a larger sealing device.
In particular, in the case of higher levels of pressures, sealing can therefore be dispensed with and the method according to the invention can be used to channel the discharging fluid.
Thus, within the scope of the invention, the term “securing” also means, in particular, channeling the fluid.
To do so, a directional control valve can be used, for example, in which after the pipe has been put on, the fluid can at first continue to discharge. At a connection of the directional valve that has been established first, a pipeline to pump off the fluid can then be connected. This is easily possible, because no fluid is discharged yet from the connection of the directional valve. Then the directional valve can be switched so that the fluid now discharges from the connection that is connected with the pipeline for extraction.
The opening from which the fluid is discharging can thus be secured without having to seal the stream of fluid. This also makes it possible to secure openings having high pressure.
Preferably, the sealing occurs slowly by using a metering valve so that no sudden peaks of force occur. Thereby, this can, for example, be a slider, a throttling valve or the like. Further, the device can be provided with additional safety units such as, for example, blow-out preventers, attachable riser pipes for discharging the fluid, etc.
After sealing the pipe by using a valve, an extraction hose can be connected to the upper end of the pipe and then by opening the valve, the crude oil can be removed at least partially controlled.
In one embodiment of the invention, the hardenable mass is lowered by a frame that is to be opened or is removable, which surrounds the vertical pipe.
For example, it is provided that a hull is cut open on the bottom and that a frame is inserted in the hull that seals the hull in floating state.
After submerging the ship, the frame can, for example, be opened by a sliding mechanism or removed, so that the hardenable mass discharges from the hull around the pipe.
The hull can, as it is provided in one embodiment of the invention, be lowered, for example, on feet that are at such a height that the hardenable mass discharges from the hull almost completely. In this embodiment of the invention, the hull could be pulled up again and reused. To do so, the pipe would have to be detachable from the hull.
In an alternative embodiment of the invention, the hull can also attach to the hardenable mass and thus represent an additional weight that contributes to sealing the borehole.
Preferably, the ship is submerged by using cables to prevent it from taking on a lateral position. In a preferred embodiment of the invention, the ship includes laterally mounted feet at which, as it is provided in an additional embodiment of the invention, a carrier is mounted at which the cables can be fastened.
To the extent this construction is attached to the outside of the ship, an old ship can be retrofitted in a very easy way into a device according to the invention for sealing a borehole.
The invention further concerns a device for sealing a borehole from which fluid discharges uncontrolled, in particular crude oil. The device consists of a hull that can be filled with a hardenable mass, has a pipe that penetrates the bottom vertically and has means so that the hardenable mass can discharge around the pipe.
The bottom of the hull is preferably open around the pipe and a frame is located around the pipe which seals the floating hull. After lowering the ship, this frame can be opened or—as it is provided in a further embodiment of the invention—be used as shaping element for the hardenable mass, so that it does not form a thin layer in an area that is too wide.
For this, as it is provided in a further embodiment of the invention, the frame can also be lowered downward out of the hull.
In a further development of the invention, the pipe is heatable. By heating the pipe, in particular when using the device at very great depths it is prevented that the discharging oil solidifies in the pipe after a short time, which can cause that oil, which is streaming in builds up at such a high level of pressure that the pipe is pushed away before the hardenable mass has hardened.
By using a hull, large amounts of hardenable mass can be applied directly at the borehole. In particular, it is provided that a hull is filled with at least 2,000, preferably at least 4,000 m3 hardenable mass.
In order to reach sufficient floatation even at large volumes for transporting the hardenable mass, the ship can be provided with buoys. Alternatively or additionally, a hardenable mass can be used that has a density that approximately corresponds to the density of water. Preferably, the density is in a range of approximately 0.8 g/cm3 to approximately 1.2 g/cm3. In particular, light-weight concrete can be used. In one embodiment, a hardenable mass is used having a density in the range of 0.6 g/cm3 to 2.4 g/cm3.
Preferably, the pipe has a diameter of at least 1 m at the lower end, preferably at least 3 m. The opening in the bottom of the ship, also described as frame, preferably has a diameter of at least 5 m.
As an alternative or complement to the fastening according to the invention cited above, by using the hardenable mass, the fastening occurs by means of a type of suction or adherence to a bottom, here the seafloor.
Further, the invention relates alternatively or complementary to the use of a suction box, which is provided in particular for use as suction dome for sealing an opening in a previously described method. The suction box is a submersible body or comprises a submersible body.
In one embodiment, the suction box thus comprises several, i.e. at least two separate compartments, whereby one compartment is designed for extracting a fluid, and whereby an additional compartment is designed open downward. The unit described here as suction box represents a system, which has at least one construction for fastening and/or bearing pipe 4, and a unit for fastening on the seafloor.
In contrast to known domes that are used, for example, for extracting crude oil, a second and/or the additional compartment that is not used for extraction or sealing the borehole, is substantially used only for fastening.
In a preferred embodiment of the invention this is performed thereby, that the additional compartment can be evacuated within its surroundings. In particular, the additional compartment is evacuated by pumping off the water that is contained in it. Because for submersion, the inner compartment can, for example, contain water or be filled with water.
Thus, it is provided, for example, to provide the suction box with pumps that are used to evacuate the compartments which are for the purpose of fastening and which are preferably located around a centrally located compartment for extracting the fluid, and can thus adhere to the ground. It is understood that within the scope of the invention, this does not mean the complete removal of a fluid from the compartments, but only the generation of an underpressure with respect to the surrounding environment.
The pumps for evacuation are preferably mounted on the suction box itself. But at lower depth it is also conceivable to evacuate the compartments by using a pipe.
In one further development of the invention, the suction box comprises at least an anchor. In particular, it is provided that the suction box has a number of anchors at the edge, using which the suction box can likewise be secured on the bottom.
The suction box is used primarily for sealing leaks of boreholes or pipelines in deep sea. But a use in flat water is also conceivable.
Further, use for prevention in drilling operations is also conceivable where the suction box can be retrofitted, or be part of the equipment or complement of such for new boreholes. In particular, application in the area of extraction of gas hydrates is conceivable.
The basic principle of the suction box is similar in function to the previously described method for sealing openings.
First, a container, which is open downward and is additionally open upward, but which can be closed is used to channel the fluid stream of a leak.
In a next step, this container is connected firmly and imperviously with the seafloor.
This can be done, for example, by the previously described evacuation of compartments and/or by the previously described hardenable mass.
The suction box or suction container can also represent an additional safety system, for example, in off-shore drilling, as the box forms a protective casing around the pipe or the borehole.
Thereby, the suction box can include compartments and/or openings through which drilling equipment can be guided. For example, this can be a drill or also equipment such as a blow-out preventer or a riser pipe. Should the standard system that is present fail, the upper exit of the box can be closed, for example, by a slider, by a valve or by a blow-out preventer.
Further, the invention is suitable especially for achieving a controllable extraction of gas hydrates. In the perimeter of the container, the sea bottom can be sealed with concrete in order to stabilize it. Thereby, geotextiles can be used.
The invention further relates to a method for sealing an opening under the water surface. Thereby, a device that includes a pipe for extracting a fluid is lowered to the seafloor whereby the device includes at least one container that is open downward but closed upward.
Then the container that is open downward is evacuated within the surrounding environment, so that it adheres to the seafloor. After the device has been fastened, the pipe can be sealed and a pipe for extracting the fluid can be attached.
It is also conceivable that the device according to the invention is designed modular. For example, modules which are designed as suction dome for adhering to the sea bottom can be provided to which additional equipment can be attached and which thus serve to fasten such equipment.
In the following, the invention will be explained in more detail by referring to the drawings in
The drawings show a device for sealing a borehole 1 in various operating states.
In hull 2, a hardenable mass, in particular cement, can be housed in large amounts. The bottom of the hull 6 includes a cut-out into which a frame 5 is inserted.
Approximately in the center of this cut-out, a pipe 3 is located, which has an open cone located downward. The fastening of the pipe on the hull planking is not shown in further detail in this exemplary embodiment.
After filling the hull, the device for sealing a borehole is lowered by cables 7 as shown in
To stabilize the device for sealing a borehole 1 during lowering, cables 7 are fastened at a carrier which is mounted on the outside of the hull and which simultaneously serves to house four feet 9.
The device is lowered in such a way that cone 4 is located above the borehole.
Feet 9 ensure that the hull is aligned above the borehole.
For using the device in uneven terrain it is conceivable to also equip the device with adjustable feet.
Frame 5 is now opened or lowered down, so that the reinforced cement present in the hull can leak out next to pipe 3. Then, as shown in
Thereupon, the discharging oil can slowly be stopped by using a throttling valve 11, and a pipeline 13 can be connected to the pipe.
By renewed opening of valve 11, the crude oil can now be extracted.
As an alternative or complement to the fastening cited above according to the invention using the hardenable mass, the fastening is performed by a type of suction or adhesion to the seafloor.
Suction box 20 includes a number of compartments, whereby here by way of example, eight outer compartments 21 are located around a centrally located inner compartment 22.
Preferably, inner compartment 22 is higher than outer compartment 21.
Inner compartment 22 is open on the top and includes an extraction opening 23 for extracting the fluid. Inner compartment 22 includes pipe 3 according to the invention for extracting the oil (concerning this see also
Compartments 21 represent the actual suction boxes or units for fastening or adhesion 30. They are closed on the top and open on the bottom and can be evacuated by pumps 28 within the surrounding environment. Thereby, a difference in pressure with respect to the surrounding environment is generated to that compartments 21 are sucked onto seafloor 40 and are consequently fastened. As compartments 21 are connected with inner compartments 22, thus pipe 3 that is located in inner compartment 22 is also fastened on seafloor 40.
Suction box 20 can, for example, have dimensions of approximately 25 m×25 m. The outer compartments 21 can at first contain water or be filled in order to submerge it and thereby the entire unit 20. The outer compartments 21 are evacuated when the suction box has reached the seafloor, and they thus firmly adhere to floor 40.
Because of the great weight, suction box 20 presses into seafloor 40 so that as a rule, all outer compartments 21 are sufficiently sealed.
As a result of evacuating compartments 21, in particular at great depth, a very large force and a very high level of pressure can be exerted on the lower edges of the construction.
For additional fastening and/or for positioning, suction box 20 in particular also includes ship 2 and/or all other embodiments according to the invention, further anchors 26 located at the edge, which are connected with winches by a steel wire rope (not shown), which runs over guide pulley 25.
This is how the anchors can be lowered and suction box 20, in particular also ship 2, and/or all other embodiments according to the invention can be positioned by using winches 24 and fastened.
Suction box 20 further includes, in particular also ship 2 and/or all other embodiments according to the invention, eyes 21 located at the edges, in particular for lowering it.
Positioning of a suction box 20, in particular also ship 2 and/or the other embodiments according to the invention can, for example, especially at low current, or if the depth position under water permits, take place by using underwater tug boats such as robotic vessels or U boats. Use of underwater tug boats, robotic vessels and/or U boats preferably takes place at great depths. Preferably or additionally, by using cables at which suction box 20 hangs, positioning is performed from the top.
It is a further possibility, for example, to position suction box 20, in particular also boat 2 and/or all other embodiments according to the invention, preferably exclusively by anchors 26, which are lowered prior to lowering suction box 20, via deflection pulleys
The carrier construction and/or fastening construction for pipe 3 is formed by carrier 29, or it includes carrier 29 or frame 29. Preferably, it is a steel plate frame. However, other materials with comparable properties can also be used. Frame 29 or carrier 29 has, for example, a diameter or a wall thickness of 10 mm to 30 mm, preferably of approximately 20 mm. The edge length of inner compartment 22 is approximately 20 m to 25 m here. Carriers 29 can form a reinforcement or stiffening for inner compartment 22. Carriers 29 form a type of grid or grid construction and/or box, in particular for pipe 3. Carriers 29 are located by way of example on the top and on the bottom or in the area of the upper side and the lower side of inner compartment 22. Carriers 29 partially extend horizontal to each other. For example, the mesh of this grid is selected here to be rectangular so that carriers 29 extend parallel or rectangular to each other. The mesh or the two meshes of the grid through which pipe 3 extends are designed in such a way here that the upper mesh has a larger cross section than the lower mesh. Thereby, inter alia, funnel-shaped pipe 3 can be fastened effectively.
System 22 from
In the embodiment according to
The configurations shown in
The square configuration or frame construction 22 shown in
Preferably, the edge lengths of the triangle are likewise in the range of approximately 20 m to 25 m. Depending on size and material selection of frame construction 22, reinforcement with a cross beam is possible or even necessary. Pipe 3, preferably positioned or attached in the center can be designed as cylindrical riser pipe and thus without a cone.
The diameter of pipe 3 is approximately 2 m to 5 m. Cylinder 3 preferably projects approximately 3 m to 4 m under the frame or frame construction 29, so that it can put itself—with frame weight and pipe weight—over an existing or a future borehole.
In a preferred embodiment, pipe 3 narrows within the height of frame 29 in such a way that toward the top, a possibility of connecting a valve, for example, a three-way valve 11 with a diameter of, for example, 30 cm, as well as for coupling riser pipe 37 is given. Beyond that, the narrowing of pipe 3 within the height of frame 29 can connect the narrowed pipe to a greater degree with the concrete to the borehole actuated by gravity.
In a further preferred embodiment, vertical ribs 38 are located on pipe 3—within the height of frame 29—which prevent a rotation of the concrete mantle with respect to pipe 3, and thereby further improve the composite of concrete and pipe.
Preferably, height-adjustable stilts 33 are also attached to the triangle per corner or edge that can, in particular, be adjusted in height by approximately 2 m to 4 m (concerning this see
The method is exemplified in
After placing frame 22 with cylinder 3 on an existing borehole or on a future borehole—from the top—for example, by working ships, subject to gravity by, in particular 2 to 3 lines 34 or hose lines 34, concrete 35 or a hardenable mass 35 is filled into frame 22 and/or through frame 22 underneath. This mass 35 is generally filled up to a necessary diameter and a required strength. In particular, concrete 35 is a fresh, quickly hardening and/or fiber-reinforced concrete 35. Preferably, concrete 35 has a weight class starting at or larger than 1,400 kg/m3.
This connection actuated by gravity or by material engagement between cylindrical riser pipe 3, borehole and seafloor 40 makes forming a concrete slab 36 possible by the hardening of concrete 35 after a few days. Three-way valve 11, which is attached to the head, guides the oil briefly through a rotation, for example, a 90° rotation of valve 11, laterally into the sea again. Thereby, it is made possible that to the extent it has not already been installed in advance, oil riser pipelines 37 can be installed above three-way valve 11. After installation of these riser pipelines 37 to oil tankers or the like has been completed, which can also occur for a short time, three-way valve 11 is again opened toward the top, for example, by a rotation of 90°, so that the oil stream flows to the top under its own pressure and/or is pumped. In the meantime, concrete slab 36 has reached a strength and density to prevent a leakage of oil and/or also gas.
This “triangular frame” 22 with centered cylinder 3 in the middle and three adjustable feet 33 at the frame of 2-4 m length is the most economical and fastest possibility of sealing or securing collapsed, existing or future boreholes for oil and/or gas. The height of triangular frame 22 should be between 2 and 5 m.
To detach the devices brought into position on the seafloor from the working ship, retaining devices 39 can be attached, for example, at the three or four edges of the devices, which release the cables and thus decouple from the devices, so that they are released for retrieval and can be used again.
After concrete 35 that has been filled in first has hardened (here, for example, in a thickness of 2 m to 3 m concrete slabs 36 and/or approximately 20 m to 40 m diameter), triangular frame 22, for example, can be additionally filled up to the upper edge (see
By using an underwater camera, the positioning above the borehole can be determined precisely and/or monitored. In the case of greater ocean currents it is recommended, however, that by using previously dropped anchors, in particular, with remote control windlass, a positioning of system 20 or 22 that is centimeter-precise is performed at the triangular frame.
In place of the triangular frame, a 2 m to 4 m high large-diameter pipe can also be used, for example, in particular with a diameter of approximately 20 m to 25 m and preferably with pipe 3 integrated in the center and/or preferably cylindrical riser pipe 37 can replace triangular frame 22. This complete economical and relatively easy device can be lowered with steel cables, for example by 1 or 2 working ships or with a catamaran.
As a result of this invention, a borehole can be securely sealed in a very easy and economical way. The devices and the method according to the invention are essentially usable for all ocean depths, for example from 7 m to 8,000 m and more, easy to use on very short notice and even economical. These types of devices and methods can be held available by all offshore-abutting countries in the event of catastrophes, in particular by oil companies themselves.REFERENCE NUMBERS
- 1 device for sealing a borehole
- 2 hull or submersible body
- 3 pipe or suction pipe or funnel or riser pipe
- 4 cone
- 5 frame
- 6 hull
- 7 cable
- 8 carrier
- 9 foot
- 10 oil
- 11 valve
- 12 cement
- 13 pipeline
- 20 suction box
- 21 compartment
- 22 compartment of frame construction or frame
- 23 extraction opening
- 24 winch
- 25 guide pulley
- 26 anchor
- 27 eye
- 28 pump
- 29 carrier or carrier construction and/or fastening construction or frame for extraction pipe 3
- 30 suction box or unit for fastening or for adhering to the seafloor
- 31 fastening means
- 32 extraction pipe or unit for fastening or for adhering onto the seafloor
- 33 foot or stilt
- 34 supply line or pipe for concrete and/or cement
- 35 liquid concrete or hardenable mass
- 36 concrete slab
- 37 riser pipe
- 38 vertical rib
- 39 retaining device
- 40 seafloor
1. A method of sealing or securing an opening in a floor under a water surface from which a fluid gushes uncontrolled comprising the steps of:
- placement of a ship having a hull and a vertical pipe extending through a bottom of the hull over the opening,
- loading the ship with a hardenable mass,
- submersing the ship above the opening in such a way, that the vertical pipe sits on the opening and the hull is aligned above the opening,
- distributing the hardenable mass around the vertical pipe allowing the hardenable mass to leak out of the hull whereby to fix the vertical pipe to the opening,
- sealing or securing the vertical pipe after the hardenable mass has hardened at least partially.
2. The method of claim 1, wherein cement is used as the hardenable mass, the cement reinforced with fibers or wires.
3. The method of claim 1, wherein the vertical pipe has a cone or collar.
4. The method of claim 1, wherein the hardenable mass is discharged via a frame that is at an opening at the bottom of the hull of said ship, which surrounds the vertical pipe.
5. The method of claim 1, wherein the vertical pipe is sealed by a throttling valve.
6. The method of claim 1, wherein the ship is lowered by cables.
7. The method of claim 1, wherein the ship is an old ship and comprising retrofitting the ship for implementing the method.
U.S. PATENT DOCUMENTS
- 3664136, May 1972, Laval, Jr. et al.
- 3719048, March 1973, Arne et al.
- 4081970, April 4, 1978, Dowse
- 4133761, January 9, 1979, Posgate
- 4220421, September 2, 1980, Thorne
- 4318442, March 9, 1982, Lunde et al.
- 4323118, April 6, 1982, Bergmann
- 4343598, August 10, 1982, Schwing et al.
- 4358218, November 9, 1982, Graham
- 4405258, September 20, 1983, O'Rourke et al.
- 4416565, November 22, 1983, Ostlund
- 4497594, February 5, 1985, Fern
- 4553600, November 19, 1985, Vigouroux et al.
- 4568220, February 4, 1986, Hickey
- 5024613, June 18, 1991, Vasconcellos et al.
- 5224962, July 6, 1993, Karal et al.
- 6592299, July 15, 2003, Becker
- 7987903, August 2, 2011, Prado Garcia
- 8025103, September 27, 2011, Wolinsky
- 8322437, December 4, 2012, Brey
- 20110274495, November 10, 2011, Estes
- 20110315395, December 29, 2011, Wolinsky
- 20110315396, December 29, 2011, Wolinsky
- 20120024535, February 2, 2012, Lieske, II

FOREIGN PATENT DOCUMENTS
- 690 06 623, July 1994, DE
- 2 091 321, July 1982, GB
OTHER PUBLICATIONS
- Wright, Edward. “The Blockships”. May 2, 2009. http://web.archive.org/web/20090502182907/http://www.greatwardifferent.com/Great_War/Naval/Zeebrugge_01.htm.
- Coutts, Ashley; Dodgshun, Tim. “The Nature and Extent of Organisms in Vessel Sea-Chests: A Protected Mechanism for Marine Bioinvasions”. 2007. http://www.ncbi.nlm.nih.gov/pubmed/17498747.
- Dover-Kent. “Dover in World War I - The Zeebrugge Raid”. Feb. 7, 2007. http://web.archive.org/web/20070207225834/http://www.dover-kent.co.uk/history/ww1b_zeebrugge_raid.htm.
International Classification: E02B 15/08 (20060101); B67D 7/78 (20100101); E03F 1/00 (20060101); E02B 17/00 (20060101); E21B 41/00 (20060101); E21B 43/01 (20060101); | <urn:uuid:8854039d-a104-4b5d-9bb5-1489a12a8c77> | CC-MAIN-2021-21 | https://patents.justia.com/patent/8888407 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00537.warc.gz | en | 0.937985 | 7,447 | 2.765625 | 3 |
This is entitled Treating arthritis I, because I want to highlight that it is the first phase of what I think is of the most fundamental importance for people suffering from any form of arthritis. It should really be entitled Treating and preventing any and all disease conditions in everyone I, because these measures are truly fundamental to optimal health in all respects and for everyone throughout life. So even if you don’t have arthritis, you should read on.
This first phase should be viewed as one during which you train yourself to acquire new habits. It is not a treatment per se, but rather a prescription for the basis of a new daily rhythm where hydrating and cleansing the body are of the most fundamental importance. In the end, it is really very easy and very simple. It’s just that we need to get used to it.
Arthritis is a word that means joint (arthro) inflammation (itis). There are tons of different types of arthritis (in the hundreds), but all of them are manifestations of the same thing in different joints and somewhat different ways. And the symptoms: the stiffness, the breakdown of cartilage and other tissues, the ossification or rather calcification, the crippling pain, are all related to the inflammation. But what if there were no inflammation? Would there be no arthritis?
Without inflammation there is no tendonitis where a tendon gets inflamed like in the well known tennis elbow. Without inflammation of the lining of the arteries there is no plaque and no atherosclerosis, and thus no heart disease and no stroke. Without inflammation there is no Multiple Sclerosis (MS), the inflammation of the myelin sheath that covers nerves, and no Crohn’s disease either, inflammation in the gut. We could go on and on like this because inflammation is at the heart of almost every single ailment from which we suffer. The reason is simple: inflammation is the body’s way of responding to injury in our tissues.
We sprain an ankle and it swells up from the inflammation that follows the partial tearing of ligament and tendon: this swelling is essential for bringing plenty of blood carrying all the specialised molecules and nutrients necessary to repair the injured tissues. What is the best course of action? Just rest and allow the ankle to heal. The more we use it, the slower the healing will be, the longer the inflammation will last, and the more we will increase the chances of causing more serious or even permanent damage to these fragile tissues. Without the body’s inflammatory response mechanisms, healing would be impossible.
In fact, repair and growth would also be impossible; muscle growth would be impossible. The process is rather simple: stress and tear (injury) followed by inflammation and repair or growth. This applies to body builders who develop enormous muscle mass over years of intense daily workouts, but it also applies to a baby’s legs kicking and tiny hands squeezing your index finger tightly. It applies to their learning to hold their head up and pulling themselves to their feet with the edge of the sofa to then take those first few steps. It applies to me, to you and to every animal. So, once again: repair and growth of tissue depends on the body’s inflammatory response mechanisms. In a well-functioning metabolism, this process takes place continuously in a daily cycle regulated by activity during the day and rest during the night: stress, tear and injury to tissues during activity; repair, growth and cleaning during the night.
Difficulties arise when inflammation becomes chronic. Either a low-grade inflammation that we can ignore completely and go about our business until it manifests in the form of a serious health concern, or a sustained, sub-acute state of inflammation that does indeed make it difficult to go about our business, but that we can nonetheless learn to ignore or cope with hoping that it will eventually disappear. Unfortunately, this is how it is for most of us to a greater or lesser extent, whether we are aware of it or not. If it weren’t the case, there wouldn’t be hundreds of millions of people suffering from arthritis the world over, and atherosclerosis-caused heart attacks and strokes would not be claiming the lives of more than one quarter of the population of industrialised countries.
As an aside, for those of you who are interested in measurements and quantifiable effects, among the best markers of chronic inflammation are C-Reactive Protein (hsCRP) and Interleukin-6 (IL-6). The white blood cell count relates to immune response; if it is elevated, the body is fighting something. Elevated concentrations of Ferritin and Homocysteine (HcY) are also associated with chronic inflammation and with much elevated risks of heart attack and stroke. You can easily get a blood test to check those numbers among other important ones (see Blood analysis: important numbers).
So what is it that causes a person to develop arthritis at 50 or even 40 years of age, while another person only begins to have mild signs of it at 80? What is it that causes a teenager to develop the crippling Rheumatoid Arthritis (RA) at 16, while none of her friends do? Why does only 1 in 400 develop Ankylosing Spondylitis (AS) or bamboo spine, characterised by the chronic inflammation of the spine, the ossification and gradual fusion of the vertebrae? Who knows?
But, for example, approximately 90% of AS patients express the HLA-B27 genotype and exhibit the HLA-B27 antigen, which is also expressed by Klebsiella bacteria. Could it be that the bacteria cause the damage and injury to spinal tissues and structures, that this is followed by inflammation which over time becomes chronic, and that since the bacteria remain and continue their damaging activities, the inflammation continues to grow together with all the awful symptoms? Maybe. The debilitating effects of certain bacteria and viruses such as Epstein-Barr or HPV, for example, which persist in the bloodstream over years and decades, are well known. And the chronic inflammation that results from the activity of infectious agents such as these is also a well-established effect, even claimed by some to be among the primary causes of arterial disease (see Fat and Cholesterol are Good for You in the Bibliography page).
But whether it is AS or arterial disease, MS or tendonitis, what is common to all is inflammation, and what needs to be addressed are the causes of the inflammation, not the inflammation itself, which is what we do with anti-inflammatory medication. The inflammation is the body’s response to the injury. What we need to do is find and stop the process causing damage and injury to our tissues, and once the tissues have healed, the inflammation will disappear of itself.
There are many things that cause injury to our tissues, and we will look at all the most important ones in greater detail in subsequent posts, but it is fundamental to address first order issues first. Among the most fundamental issues of all are therefore those with which we concern ourselves in the first phase of treatment: super-hydration, alkalisation and magnesium. But the truth is that these fundamental elements are what everyone concerned with optimising their health should actually concern themselves with first, before everything else.
Chronic dehydration is at the root of so many health problems that it is hard to know where to begin. I’ve written a few posts on the importance of water that you can identify by their title. If you’ve read them and want to know more, you should read Your Body’s Many Cries for Water (see Bibliography). In relation to arthritis, however, water is not only the primary means to reduce inflammation of stressed cells and tissues, but it is also what gives our cartilage suppleness and flexibility.
Cartilage is a very simple tissue. It is water (85% in healthy cartilage, down to 70% or less in compromised cartilage and in most older people) held within a matrix of collagen and other proteins, maintained by a single type of cell called the chondrocyte. These cells have very special electrical properties that give cartilage its amazing resistance to friction and pressure. Without sufficient water, however, the chondrocytes cannot work correctly: cartilage dries out and breaks down, and calcification grows.
What is totally under-appreciated is that because cartilage has no blood supply, nerves or lymphatic system, water makes it into the cartilage through the porous end of the bone to which it is attached, and the only way water can reach that porous bone end is through the blood that makes it into the bone.
Within the body’s functions there is a definite hierarchy of water usage, in which the digestive system is naturally served first, since it is through it that water enters. Even the mildest dehydration can therefore be felt in the most water-sensitive tissues, such as those of the lungs (90% water) and muscles (85% water), as any athlete who has drunk alcohol the night before a race, or even a training run or ride, will have noticed. Unfortunately, it is often the cartilage that suffers the most.
Dehydration means that the soft connective tissues at the ends of our bones, in every joint that allows us to move, will not get the water supply they need to remain well hydrated, supple and flexible. This is really the most important point to remember. What is also highly under-appreciated is the vital importance of silica, in the form of silicic acid, in the growth, maintenance, repair and regeneration of all connective tissues, including and maybe especially bones and cartilage (here is a good article about it). Silicic acid should therefore be included in all arthritis treatment programmes.
How do we super-hydrate? By drinking more, as much as possible on an empty stomach, and balancing water with salt intake. You should read How much salt, how much water, and our amazing kidneys, and make sure you understand the importance of a plentiful intake of water, an adequate intake of salt, and the crucial balance of these for optimal cellular hydration and function. Detailed recommendations are given below.
Chronic acidosis, some would argue, is not only at the root of innumerable health complaints and problems, but that it actually is the root of all health disorders. The reading of Sick and Tired, The pH Miracle and Alkalise or Die is, I believe, enough to convince most readers that that premise is in fact true. Not surprisingly though, it is not possible to alkalise bodily tissues without optimal hydration. And so we immediately understand that chronic dehydration is the primary cause of chronic and ever increasing tissue acidosis. Therefore we address both simultaneously, and in fact, cannot do otherwise.
Briefly, what is essential to understand is that healthy cells thrive in an alkaline environment, and indeed require an alkaline environment to thrive. Conversely, pathogens such as moulds, yeasts, fungi, viruses and bacteria thrive in acidic environments. Healthy cells thrive in well oxygenated aerobic environments, whereas pathogens thrive in anaerobic environments deprived of oxygen. Since this is so, we can say, crudely speaking, that if the tissues and inner environment of the body—its terrain—is alkaline, then pathogens cannot take hold nor develop nor evolve nor survive in it. On the other hand, if the body’s terrain is acidic, then they thrive, proliferate, and overtake it, sometimes slowly and gradually, but sometimes quickly and suddenly, causing sickness and disease.
Everything that we eat and drink has an effect that is either alkalising, acidifying or neutral. This is after digestion, and has little to do with taste. All sweet-tasting foods or drinks that contain sugars, for instance, are acidifying. I will write quite a lot more about pH and alkalisation in future posts. For now, we are concerned with alkalising through super-hydration, and this involves drinking alkaline water and green drinks. By the end of phase I, drinking your 2 litres of alkaline water and 2 litres of super-alkalising green juice should be as second nature to you as brushing your teeth before bed.
In Why you should start taking magnesium today, I tried to make evident the importance of magnesium for every cell and cellular process in the body, and thus to show that we all need plenty of magnesium daily in order to attain and maintain optimal health. For someone suffering from arthritis, though, it is not just important: it is crucial. The reason is very simple: arthritis is characterised by inflammation, stiffening and calcification. These come together, of course, and it is useless to even wonder whether one comes before another. Regardless, the best, most effective, most proven treatment or antidote for inflammation, stiffening and calcification is magnesium.
Magnesium, injected directly into the bloodstream, can almost miraculously stop spasms and convulsions of muscle fibres, and release, practically instantaneously, even the most extreme muscular contraction associated with shock, heart attack and stroke. This is used routinely and very effectively in birthing wards and surgery rooms. Magnesium is the only ion that can prevent calcium from entering and flooding a cell, thereby causing it to die, and magnesium is the best at dissolving non-ionic calcium—the one that deposits throughout the body in tissues and arteries, and over bone, cartilage, tendons and ligaments—and allowing all this excess calcium to be excreted: precisely what we must do in treating arthritis.
In addition, magnesium is very effective at chelating (pulling out) both toxic heavy metals like mercury and persistent chemicals that bio-accumulate in blood, brain and other tissues. For too many unsuspecting people, heavy metal toxicity is the cause of a plethora of symptoms, wide-ranging in nature, hard to understand or associate with some known and easily identifiable condition, but often causing immense discomfort and even complete disability.
Putting all of this into practice
When you get up in the morning, you go to the bathroom, undress and spray or spread on your legs, arms, chest and belly, neck and shoulders, the 20% magnesium chloride solution (4 teaspoons of nigari in 80 ml of water, for a total of 20 g in 100 ml of solution). You wash your hands and face well, put your PJs back on, and head to the kitchen to prepare your water and green drinks for the day.
Line up three wide-mouth 1 litre Nalgene bottles. In each one put: 5 drops of alkalising and purifying concentrate (e.g. Dr. Young’s puripHy) and 10 drops of concentrated liquid trace minerals (e.g. Concentrace).
In the first bottle, add 50 ml of the 2% solution of magnesium chloride (made with 4 teaspoons of nigari dissolved in 1 litre of water), 50 ml of aloe vera juice, 20 ml of liquid silicic acid, fill it up with high quality filtered water, shake well to mix, and take your first glass with 1 capsule of Mercola’s Complete Probiotics. You should drink this first litre over the course of about 30 minutes, taking the third or fourth glass with an added 1-2 teaspoons of psyllium husks. (The aloe vera and psyllium husks are to help cleanse the intestines over time.)
In the second and third bottles, add a heaping teaspoon of green juice powder (e.g., Vitamineral Green by HealthForce), 1/2 to 1 teaspoon of fine, grey, unrefined sea salt, 1/4 teaspoon of finely ground Ceylon cinnamon, a heaping mini-spoonful of stevia extract powder and a single drop of either orange, lemon or grapefruit high quality, organic, food-grade essential oil. Shake well. One of them you will drink between about 10:00 and 12:00, the other between 15:30 and 17:30. Shake every time you serve yourself a glass or drink directly from the bottle to stir up the solutes in the water. You should take these two bottles with you to work and/or keep them in the fridge until needed: the drink is really nice when it’s cool.
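If you want to double-check the dilution arithmetic behind the two solutions, here is a minimal sketch; the only assumption, taken from the recipes above, is that 4 teaspoons of nigari weigh roughly 20 g:

```python
def wv_percent(grams_solute: float, ml_solution: float) -> float:
    """Weight/volume concentration in percent (grams per 100 ml)."""
    return 100.0 * grams_solute / ml_solution

# Assumption from the recipes above: 4 teaspoons of nigari ~ 20 g.
print(wv_percent(20, 100))   # 20.0 -> the 20% transdermal solution
print(wv_percent(20, 1000))  # 2.0  -> the 2% drinking solution

# 50 ml of the 2% solution therefore delivers 1 g of magnesium chloride:
print(0.02 * 50 * 1000, "mg")  # 1000.0 mg
```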
Now that the magnesium has been absorbed through the skin (this takes around 30 minutes), you can go have a shower to rinse off the slight salty residue, which feels like sea water left to dry on your skin without rinsing. You should wait at least 30 minutes after you have finished your first litre of water before you eat anything.
By about 10 or 10:30, depending on when you finished breakfast, you should start to drink your first litre of green drink and continue until about 12:00 or 12:30. Make sure you finish drinking 30-45 minutes before you eat, and wait at least a couple of hours after eating. Then start drinking the second litre of green drink at about 15:30 or 16:00 and finish by about 17:30 or 18:00. Again, make sure you always stop drinking at least 30 minutes before eating. Depending on when you eat dinner, you should drink a half litre of plain water 30 minutes before the meal. The general rules for drinking you should follow are: 1) always drink at least 500 ml up to 30 minutes before eating, and 2) do not drink during or within 2 hours after the meal.
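To see rules 1) and 2) as a concrete timetable, here is a toy sketch; the meal times are hypothetical and the function name is just for illustration:

```python
from datetime import datetime, timedelta

def no_drinking_window(meal_start: str, meal_end: str) -> tuple[str, str]:
    """The window in which not to drink: from 30 minutes before the
    meal starts (rule 1) until 2 hours after it ends (rule 2)."""
    fmt = "%H:%M"
    start = datetime.strptime(meal_start, fmt) - timedelta(minutes=30)
    end = datetime.strptime(meal_end, fmt) + timedelta(hours=2)
    return start.strftime(fmt), end.strftime(fmt)

# A hypothetical lunch eaten from 13:00 to 13:30:
print(no_drinking_window("13:00", "13:30"))  # ('12:30', '15:30')
```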
Before going to bed, take a small glass of water with 50 ml of the 2% magnesium chloride solution. And that’s it for the day. And tomorrow and the next day and the day after that, keeping to this schedule until it becomes perfectly natural and customary. After four weeks, you should do another blood test and see how the numbers compare to those from before you started. In addition, if you are interested in this from the scientific standpoint, or just curious, or both, you should get Doppler imaging of your coronary and cerebral arteries, as well as an MRI of the joints in your body, including the spine, before you start and at the end of every phase. It will also be extremely informative to test and record the pH of at least your first urine every morning; any additional urine pH readings will be very useful in tracing the gradual de-acidification of your tissues as the days and weeks progress. And finally, the transdermal magnesium therapy (putting the 20% solution on your skin) should last 6-8 weeks. By that time, your intracellular magnesium stores should have been replenished. We continue taking the 2% solution indefinitely, and use transdermal magnesium once in a while (once or twice per week).
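A practical note on the morning pH log suggested above: a trend is easier to read than individual readings, and a few lines of code can smooth them. This is only an illustrative sketch with made-up numbers, not part of the protocol:

```python
from statistics import mean

def rolling_ph(readings: list[float], window: int = 7) -> list[float]:
    """Rolling average of morning urine pH over the last `window` days."""
    return [round(mean(readings[max(0, i - window + 1): i + 1]), 2)
            for i in range(len(readings))]

# Hypothetical readings over two weeks:
mornings = [5.6, 5.8, 5.7, 6.0, 6.1, 5.9, 6.2,
            6.3, 6.2, 6.4, 6.5, 6.4, 6.6, 6.7]
print(rolling_ph(mornings)[-1])  # 6.44 -> the latest 7-day average
```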
The great advantage of transdermal magnesium is that almost all of it is absorbed into your tissues and bloodstream. Oral magnesium is absorbed at a level between 25% and 50%, depending primarily on the amount of magnesium already in the blood when you take it. This is why it is very important to take it first thing in the morning, in the latter half of the afternoon, and before bed: the times when blood concentrations are lowest. You don’t have to worry about too much magnesium, because any excess will be excreted in the urine and faeces.
You should only worry about not getting enough: that’s the real problem. Incidentally, the fact that almost all the magnesium you put on your skin is absorbed underlines the importance of carefully choosing what we put on our skin, because in the same way, anything we put on it will be absorbed into our system. Putting on coconut or almond oil is just as good for our skin and our health as it is bad to put on creams and lotions whose synthetic chemicals and compounds all make their way into our blood. General rule: if you cannot eat it, don’t put it on your skin.
Update: read these Updated recommendations for magnesium supplementation.
That’s it for the first phase: mostly drinking a lot more than you used to, with a few special tweaks to what and when you drink. I haven’t mentioned anything about food, even though, as the rest of the articles on this blog make obvious, it will come in time: in the second phase. We first deal with the first order terms, then the second order terms, and after that with the third and fourth order terms. That is very important to grasp: what has the most impact, and what has the least.
If you enjoyed this article, please Like and share it to help other people. | <urn:uuid:61b80e8e-5ab1-4b57-9303-16ccfde2b394> | CC-MAIN-2021-21 | https://healthfully.net/2012/09/27/treating-arthritis-i-super-hydration-alkalisation-and-magnesium/?replytocom=2585 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989705.28/warc/CC-MAIN-20210512193253-20210512223253-00177.warc.gz | en | 0.94974 | 4,293 | 2.53125 | 3 |
It is one thing writing a descriptive thesis about a single subject, focusing on the ins and outs of the poem, its structure, its language and its style, but it is quite another to write a college essay in which you compare two or more poems.
Fortunately for you, we are here to explain to you how to outline your comparative discussion and how writing about poetic contrast during an exam is often easier than having to come up with numerous statements and ideas to write about just one single text.
For instance, we don't need to point out the fact that having multiple poems means that there is a higher number of points to discuss in your narrative composition. So, the pure fact that you are comparing two texts means that you can dedicate a whole paragraph, if not more, to simply pointing out the clear differences in the structure and flow of two poems!
Furthermore, if you are being asked to compare and contrast texts, it usually means that there are significant differences or at least an alternative point of view to pick up on. Whether these be in reference to the era during which they were written, the writing style that the poet has chosen to use, or the different angles adopted to emphasize the same theme or message, the chances are that you will find loads of avenues to explore in your descriptive essay.
Finally, by focusing your attention on comparison and contrast, you can develop a much better understanding and a deeper appreciation of each citation.
For those on their way to completing their A Level exams and in need of some extra help and reassurance when it comes to their literature assessment, here are some tips on how to write an A-Level poetry essay.
How To Write A Poetry Essay A Level Style
Coming Up With Ideas For Your Research Paper
By the end of your A Level poetry course, you'll likely be familiar with a range of poets, poems and poetry styles, but you may be surprised to know that the final exam often asks you to compare and contrast two or more unseen poems.
Now, this doesn't necessarily mean that you will be faced with poems that you have never laid eyes on before, by poets you've never even heard of. All it means is that neither you or your teacher will know which revised poems might come up in the assessment. It may well be that a poem you did not study crops up, but by an author who you are quite familiar with.
Not having seen a particular poem before, far from what many people think, is not a disadvantage in a timed exam. In fact, some might say that it works in their favour.
Being faced with a whole new set of words and stanzas to analyse is quite refreshing and if you apply all of the things you have learned over your GCSE and A Level course then you should have absolutely no problem finding leads to follow or points to argue.
Remember that, even if you aren't very informed about the poet or the era during which they lived, you can often decipher hidden messages that might indicate when they were writing and what they were writing in response to. For example, if you find that a poem uses lots of words that are linked to battle, this evidence might be used to prove that the poem was written during the period of a war. Even though you may not know exactly which war, this still gives you something analytical to offer the examiner and a subject to use in your persuasive essay. Even if it is wrong, it may be an important element that the poet was trying to put in there.
In order to get this first impression that you can then report on in your text, be sure to read all of the texts thoroughly before starting to plan and write your essay. Your introductory paragraph, or thesis statement, might include a brief summary of each poem and set out a few observations that you'd like to look at in more detail further into your poetic analysis.
Remember that this will be a timed assessment so you only have so many minutes in which to read, plan, and write your essay. As such, don't give yourself too much to cover and find that you have to rush your conclusion to bring the comparison to an end (or worse, that you end up with an unfinished essay). Pick out a few points that are relevant to the question being asked and focus on expanding on them as much as possible during your critique.
Don't forget, if you want the examiner to see that you've noticed other things in the poems, then you can always refer to them briefly whilst backing up one of your other arguments.
Finally, remember to not only focus on the historical context or themes of the poem but to also demonstrate your understanding of intellectual poetry techniques. So, as well as exploring the ideas, attitude, and tone of the poems, be sure to look out for structure, form, and literary techniques used by the poet.
Structuring Your Timed Essay
When it comes to writing a paper, the main thing to remember is that you need to have an introduction, the main body, and a conclusion, just like any other term paper you have written in the past. Yet one thing that may not have crossed your mind as being imperative is to write an equal amount on each of the poems that you are discussing. Ultimately, without dedicating the same amount of time to each text, there is no way you can analyse the poems effectively in the comparative way the examiner wants.
Imagine if you wrote an essay where you discussed one poem for four paragraphs and then referred to the second poem in one single paragraph, the flow of the analysis would be completely off-balance and the examiner would only really be able to mark you on your direct analysis of the one poem that has taken centre-stage.
Ideally, each paragraph of your essay should address one or more specific poetic elements or aspects of the works in question. Furthermore, each paragraph should contain a dissection of both works, rather than expounding on only one poem. You might strive for something along these lines:
Poem XYZ expounds on the narrator's perception of his mother's love, whereas poem ABC describes a mother's unconditional love for her child.
With this opening line, you have pointed at the theme of the poems - parental love. You have also uncovered an important difference between the two: perspective. That opening sentence paints a contrast between the two works which you would explore in depth throughout the paragraph.
Note the use of 'whereas' in this sentence. Used as a conjunction, one of its meanings is, literally, 'while in contrast'. As your assignment is to compare and contrast, using this conjunction is perfectly acceptable.
On the other hand...
The students were eagerly anticipating their marked papers and the teacher did not disappoint. As soon as class started, she handed her students their essays back. One student in particular was dismayed to find that she had scored poorly. Most curiously, her teacher had written, across the top: how many hands do you have?
On the one hand, it is perfectly acceptable to use 'on the other hand' to preface a comparison or contrast. On the other hand, it is not acceptable to use it as the only indication of comparison throughout your entire essay!
In fact, that is what had cost that student points off her grade: every single comparison was introduced with the phrase 'on the other hand', leading the teacher to wonder how many hands that essay writer intended to employ!
While some forms of repetition are considered literary devices - parallel structure being a case in point, using the same transitional phrase throughout your work will surely cost you in points!
It might help you to study alternate phrases and incorporate a few into your personal lexicon. That way, when one is needed, you have an entire arsenal at your disposal!
If you can, jot down a table or checklist of similarities and differences during your planning phase and then roughly set out the essay paragraph by paragraph to ensure that it looks even. Not only will this be a helpful guide as you start writing, it will also keep you on track. You don't necessarily have to keep the analysis paper in chronological order.
Your table might look something like this:

| | Poem XYZ | Poem ABC |
| --- | --- | --- |
| Theme | parental love | parental love |
| Perspective | the child's view of the mother | the mother's view of her child |
| Tone | ... | ... |
| Structure / rhyme scheme | ... | ... |
Once you have developed your ideas in such a brainstorming session, crafting your essay is a piece of cake!
Structuring an essay is actually much easier than people think. What the examiner wants to see is that you can clearly explain a point, justify it and then ask questions about why that is important to the overall text. So, for example, just like the essay as a whole, each point you make should ideally be made up of an introduction, middle section, and a conclusion.
The BBC Bitesize website likens this process to a sandwich, suggesting that the two pieces of bread are the intro and conclusion and the layers of filling are made up of each individual point you make in response to that argument. Others describe the technique as being like a hamburger.
Remember, a plain beef burger with no sauce or fillings makes for quite a dry hamburger, and it's much the same with your essay.
Poem Analysis Essay Tips
Check What The Examiner Expects Of You
Before you start writing any kind of poetry analysis, you should always be certain of what is expected of you. To find out what the English examiner is looking for in a good thesis, visit your exam board's website and look for the Mark Scheme, Examiner's Notes and any other documents you can find and cross-reference these with the specimen question papers to get a good idea on what you should be doing when your final exam comes around.
Although your English Instructor will no doubt offer you guidance and set useful homework and classroom tasks, don't underestimate the benefit of doing past papers. As such, do as many of the available specimen papers that you can and don't just settle for doing the bare minimum! We don't recommend using an essay writing service for your coursework or as an exemplary revision resource because you simply can't guarantee that they are genuine, professional writers nor can you be certain that work hasn't been plagiarized. Be confident and stick to your own work!
Proofreading Is Key
Remember to leave yourself enough time at the end of a timed exam to read through your work, check for any obvious spelling mistakes, and to ensure it is coherent. It can be quite easy to get ahead of yourself when you are faced with a deadline so taking time to check over the wording on your literary essay can actually help you to strengthen your response. Depending on the requirements, you may like to use some of this time adding a bibliography, checking things like capitalization and looking out for repetition.
Practice Putting Poetry Into Your Own Words
Putting a poem into your own words not only shows that you understand what the poem is about, it also helps you to gain a better understanding and a deeper appreciation of the message trying to be conveyed by the poet. Some poems, specifically those written centuries ago, are quite hard to read aloud so why not add an informal annotation underneath each line to make the wording a bit easier to decipher as you go back and forth between the poems during the exam?
Remember To Reference Any Quotations. You may have written down some distinctive quotes on your revision cards, or you may simply want to paraphrase what a poet or critic has said from memory, but either way, it is important that you do so properly. Any words that aren't your own should be referenced using quotation marks (if a direct quote) or by making it clear that a particular sentence is an opinion of another individual.
Brush Up On Your Poetry Terms
You simply can't expect to rack up those top marks if you don't have the knowledge and expertise to back up your ideas. Showing that you know a wide range of literary terms and poetry techniques will help to impress the examiner. That said, it is just as important to understand what the terms mean as it is to know their names. The examiner won't be fooled if you simply reel off a list of terms, saying that they are included in the poem but without explaining where each one crops up and why.
Terms to Use in Poetry Analysis
-Rhyme scheme: the pattern of rhyming words - typically, the last word of every poem line.
Rhyme schemes are generally indicated by a combination of letters, AB, CD and so forth. If you wish to describe a scheme in which alternating lines rhyme, you would use ABAB. However, if the first line rhymes with the last and the middle two lines rhyme, the designation ABBA would be correct.
Note: most Shakespeare quatrains are written in the ABAB scheme.
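To make the labelling rule concrete, here is a toy sketch that assigns scheme letters to line endings. It naively treats two lines as rhyming when their final words share the same last letters, which real rhyme does not reduce to, so treat it purely as an illustration of the ABAB/ABBA notation:

```python
def rhyme_scheme(lines: list[str], suffix_len: int = 2) -> str:
    """Label each line A, B, C...; lines whose last words share a
    spelling suffix get the same letter (a crude stand-in for rhyme)."""
    labels, seen = [], {}
    for line in lines:
        key = line.split()[-1].strip('.,!?;:').lower()[-suffix_len:]
        if key not in seen:
            seen[key] = chr(ord('A') + len(seen))
        labels.append(seen[key])
    return ''.join(labels)

quatrain = [
    "Shall I compare thee to a summer's day?",
    "Thou art more lovely and more temperate:",
    "Rough winds do shake the darling buds of May,",
    "And summer's lease hath all too short a date:",
]
print(rhyme_scheme(quatrain))  # ABAB
```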
-Meter: which syllables in each line of poetry are stressed.
Poetry, by its very nature and definition, is meant to be rhythmic; indeed that very rhythm contributes to the tone and meaning of the work itself.
By alternating stress with lack of stress when reading each line, you may arrive at a different tone for that work altogether!
-Iambic Pentameter: the perfect example of meter in poetry!
The name itself, pentameter, indicates that there will be 5 stressed syllables, each one alternating with an unstressed syllable, so that each line should sound like so: da-DUM da-DUM da-DUM da-DUM da-DUM.
Consider this stanza:
As I was walking down the street one day, / The sun from behind clouds came out to play.
Here, not only do you have five feet - five stressed syllables contrasting with five unstressed, you also have an AA rhyming scheme!
The Bard makes great use of the iambic pentameter when writing his sonnets... and so do select rock songs!
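If it helps to see the alternation spelled out, this tiny sketch checks a ten-syllable stress pattern against the iambic template. The 0/1 stress marks have to be supplied by hand, since detecting syllable stress automatically is well beyond a toy example:

```python
IAMBIC_PENTAMETER = [0, 1] * 5  # unstressed/stressed, five feet

def is_iambic_pentameter(stresses: list[int]) -> bool:
    """True when a hand-marked stress pattern matches da-DUM x 5."""
    return stresses == IAMBIC_PENTAMETER

# "As I was walking down the street one day", marked by hand:
print(is_iambic_pentameter([0, 1, 0, 1, 0, 1, 0, 1, 0, 1]))  # True
```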
-Metaphor: a figure of speech employing a known object or situation to represent something figurative.
A writer's job is to paint pictures with words. S/he does so by using words and phrases that create vivid images in the readers' minds.
Naturally, you are not expected to be an author on par with great historical essay writers such as Ralph Waldo Emerson or Lewis Carroll. However, you can and should make use of metaphors in your analysis, where appropriate. For example:
"The tone of poem XYZ sends the reader's heart soaring into a cloudless, springtime sky. However, poem ABC fills those skies with dark clouds, reflecting the author's own gloom."
As you might have guessed, in neither poem does a sky feature. Nevertheless, using the sky as a metaphor for the tone of the poems is apt in more than one way: the sky lies above us just as a poem's tone 'oversees' the words it comprises.
Examples of well-known metaphors include:
- I am boiling mad!
- It's clear sailing from now on.
- I'll ask my teacher, but it will be an uphill battle to get her to change my marks.
- What pearls of wisdom will he dispense today?
Beware not to misuse a metaphor as a simile!
-Simile: a figure of speech that compares two unlike things (situations, objects, etc.)
Many people confuse similes with metaphors because their use and purpose are quite nearly the same. In each case, the writer is creating a visual for the reader to better get a sense of what s/he intended to convey.
Let us look at two examples of describing a madman:
'He was quite mad.' versus 'he was mad as a hatter!'
The first sentence conveys the impression that that poor soul was to be pitied; after all, he couldn't help being mad, could he? However, the second sentence indicates that not only is this man mad but he must be the very spectacle of madness!
Similes are generally recognised by 'like', 'than' or 'as', preceding the comparison. Here are a few examples of similes:
- Shopping is more fun than a barrel of monkeys!
- Her laughter is like crystal, tinkling in the breeze.
- He's as strong as an ox!
- This exam is so easy, it's like shooting fish in a barrel.
You might use similes in your poetry analysis to distinguish differences in tone (as different as night and day), theme (as grating as nails on a blackboard), structure (constructed like cookie-cutter houses) or content (it's like comparing apples and oranges).
-Setting: either a literal or figurative place where the action or situation occurs
Although it would seem like a minor consideration to the overall work, setting is critical to poetry (or any other type of writing) because it helps the reader develop a connection to the narrative.
It may be used to help identify the characters, create a conflict for the character(s) to resolve or even be an antagonist that the characters must vanquish. It can help set the tone and the mood of the piece; most certainly it would act as a backdrop.
The works' setting may be explicitly described or merely implied. However they are presented, don't neglect to touch upon them in your essay!
In fact, a poem's setting offers a wealth of analysis opportunity.
-Allegory: a literary device, usually in the form of a metaphor, meant to deliver a broader message.
Should you read a story or poem that resonates on a completely different level, meaning you see a parallel between this work of fiction and real-world occurrences, you may have found an allegory!
Beware, however, that this supposed allegory is not a fable.
The difference between those two types of works is slight but profound. A fable is meant to reinforce a truth or precept while an allegory represents abstract principles.
You may be familiar with George Orwell's Animal Farm, a classic example of allegory meant to depict the overthrow of the Russian Tsarist system. However, in spite of the fact that this allegory uses animals - as most fables do, it could in no way be considered a fable.
Should you discover, during your exam, that one of your texts is allegorical and the other more of a fable, you may consider contrasting that aspect of those works in your essay.
-Alliteration: when the same letter or sound starts a series of words
If you entertain your friends with your ability to utter tongue twisters to perfection, you may already be familiar with alliteration.
In fact, the above sentence includes an alliteration!
Here is a stylistic element that, in poetry, could be used to emphasise a line or stanza of particular import, or even to stress an important characteristic - either of the narrative itself or of the poem's theme.
Alliteration is fairly common in poetry so, if the works you are comparing each contain alliterations, you might write a paragraph about the differences between them.
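As a rough illustration of what the eye does when it spots alliteration, the sketch below counts the longest run of consecutive words sharing an initial letter. It works on spelling rather than sound ('ph' and 'f' would not match), so it is only a first approximation:

```python
from itertools import groupby

def longest_alliteration(text: str) -> int:
    """Length of the longest run of consecutive words that start
    with the same letter (spelling-based, not sound-based)."""
    initials = [w[0].lower() for w in text.split() if w[0].isalpha()]
    return max((len(list(g)) for _, g in groupby(initials)), default=0)

print(longest_alliteration("Peter Piper picked a peck of pickled peppers"))  # 3
```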
-Assonance: repetition of a vowel sound or diphthong
Whereas alliteration repeats consonant sounds, assonance does so with vowels. Assonance is quite common in proverbs; the vowel sounds those words have in common help make those phrases memorable:
The squeaky wheel gets the grease.
The second, third and sixth words all share the same long vowel sound, making this sentence a perfect example of assonance.
Beware, though: the words must be noticeably close together; within the same line or sentence. You can't scan the entire work for similar-sounding vowel combinations and call them assonance!
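A comparable spelling-based shortcut can hint at assonance, though English spells the same vowel sound in many ways ('ee' and 'ea' both give the long-e of the proverb above), so a sketch like this only approximates what the ear hears:

```python
def words_with(text: str, spellings: tuple[str, ...]) -> list[str]:
    """Words containing any of the given spellings of a vowel sound."""
    return [w for w in text.lower().split()
            if any(s in w for s in spellings)]

# The long-e sound is usually spelled 'ee' or 'ea':
print(words_with("the squeaky wheel gets the grease", ("ee", "ea")))
# ['squeaky', 'wheel', 'grease']
```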
-Caesura: essentially, a pause
As the old joke goes: "And she talked on, never pausing for breath..."
In fact, we all pause for breath at strategic points in our verbal narrations, but how do we find those breaks in poetry? Fortunately, poets make it easy for us in a variety of ways: punctuation, the natural rhythm of the work, or the '||' symbol.
Caesurae are broadly divided into 'masculine' and 'feminine', with the latter a softer stop than the former.
- Feminine caesurae appear after unstressed or short syllables, such as a past-tense (-ed) or progressive (-ing) ending.
- Masculine caesurae occur after long or accented syllables: those that end in a consonant pair (-rd, -ck, -st...), or plurals.
The use of caesura in poetry can be used to convey tone or mood; thus it makes an excellent instrument for comparing two works!
-Enjambment: no pause at the end of a stanza, line or couplet
In opposition to the caesura comes the enjambment, used to convey heightened emotion or a running thought, although sometimes it is used to trick the reader by presenting a conflicting idea in the very next line:
Among the bracken and thorns / Beautiful red roses bloom.
-Hyperbole: an exaggeration
Does one ever really think of poetry as humorous? It can be and one way that the writer demonstrates that literary tickle is with the use of hyperbole.
Still she, with skirts large as a circus tent, was only on his love intent.
Here, making use of a simile as a hyperbole to bring about a comical image (who could really have such a large skirt?) the writer meant to create a picture of a lovelorn woman pursuing unrequited love.
Naturally, one should not take the hyperbole seriously...
-Satire: a humorous, ironic, exaggerated or ridiculous criticism
Have you ever seen any political cartoons? If so, you have had exposure to satire. Do you know any limericks? If so, you may be familiar with the use of satire in poetry.
Contrary to the overarching belief that all poetry must be beautiful, wistful and romantic, poetry can also be scathing and scorning.
Might you encounter such works in the course of your exam? If so, be sure to highlight the odes' satirical tones!
-Personification: literally render into a person.
Trees, animals... even emotions and situations can be personified in poetry. This is a way for the author to give the work life, motion... maybe even fluidity!
The peaks rear'd up, in a ring, as a band of patriarchs who would surround a newborn heir...
By turning ancient mountains into kindly grandfathers who, most likely, watch over the people living in their shadows, the writer has personified the landscape as a benevolent protector.
Take note of any personification in the works you are assigned to analyse; as personification is a tool widely used in poetry, surely you could find points to compare between personifications!
Final thought: while anyone may notice the larger differences between two poems, recognising the finer points listed above and building your comparative essay around them might earn you higher marks.
Good luck! Let us know how you get on, will you? | <urn:uuid:af7786b7-11fb-42fe-812e-7dd57fa044a6> | CC-MAIN-2021-21 | https://www.superprof.com/blog/how-to-write-a-poetry-essay/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00537.warc.gz | en | 0.959482 | 4,765 | 2.890625 | 3 |
2 One Nation Under God Ten things every Christian should know about the founding of America. An excellent summary of our history in 200 pages.
3 One Nation Under God America is the only nation in the world that is founded on a creed. That creed is set forth with dogmatic and even theological lucidity in the Declaration of Independence. G. K. Chesterton
4 1. Christopher Columbus Christopher Columbus was motivated by his Christian faith to sail to the New World.
5 Christopher Columbus Let Christ rejoice on earth, as he rejoices in heaven, when he foresees coming to salvation so many souls of people hitherto lost. Christopher Columbus Journal (1492)
6 2. Mayflower Compact The Pilgrims clearly stated that they came to the New World to glorify God and to advance the Christian faith.
7 Pilgrims Mayflower Pilgrims were blown off course from Virginia. Instead they landed at Cape Cod and eventually settled on the site of Plymouth Colony. Before they disembarked, they drafted the Mayflower Compact.
8 Mayflower Compact
9 Paul Johnson What was remarkable about this particular contract was that it was not between a servant and a master, or a people and a king, but between a group of likeminded individuals and each other, with God as a witness and symbolic co-signatory.
10 3. The Puritan Covenants Massachusetts Bay Colony The Puritans created Bible-based commonwealths in order to practice a representative government that was modeled on their church covenants.
11 Pilgrims and Puritans Pilgrims felt reforming the Church of England was hopeless; they wanted to separate from the church, and so were called Separatists. Puritans wanted to reform the church and purify it from within; they were called Puritans.
12 Puritan Covenants There had been no written Constitution in England. The British common law was a mostly oral tradition. The Puritans determined to anchor their liberties on the written page, a tradition taken from the Bible.
13 Puritan Covenants The Body of Liberties (1641) contained ninety-eight separate protections of individual rights, including: due process of law, equal protection, trial by a jury of peers, and prohibitions against cruel and unusual punishment.
14 4. Haven for Dissidents This nation was founded as a sanctuary for religious dissidents. Roger Williams 1643 Charter for Rhode Island
15 William Penn Main author of the founding governmental document called The Concessions. By 1680, The Concessions had 150 signers and provided far-reaching liberties never before seen in Anglo-Saxon law.
16 5. Beginning of Wisdom New England Primer The education of the settlers and founders of America was uniquely Christian and Bible-based.
17 New England Primer In Adam's fall, We sinned all. Heaven to find, The Bible Mind. Christ crucified, For sinners died.
18 Education in the Colonies Massachusetts passed a law called the Old Deluder Act because it was intended to defeat Satan, the Old Deluder, who had used illiteracy in the Old World to keep people from reading the Word. The main purpose of schools in Puritan New England was to teach children to read the Bible.
19 Harvard College Let every student be plainly instructed and earnestly pressed to consider well the main end of his life and studies is to know God and Jesus Christ which is eternal life (John 17:3). Laws and Statutes (1643)
20 Yale College All scholars shall live religious, godly, and blameless lives according to the rules of God's Word, diligently reading the Holy Scriptures, the fountain of light and truth; and constantly attend upon all the duties of religion, both in public and secret. Regulations at Yale College (1745)
21 6. The Great Awakening A religious revival was the key factor in uniting the separate pre-revolutionary War colonies. Jonathan Edwards and George Whitefield
22 The Great Awakening Jonathan Edwards's revival began in his church in 1734 like a flash of lightning. George Whitefield made his first continental tour of the colonies in 1740.
23 Paul Johnson The Great Awakening may have touched as many as three out of four American colonists. He points out that this Great Awakening sounded the death-knell of British colonialism.
24 7. The Black Regiment Many of the clergy in the American colonies, members of the Black Regiment, preached liberty. Since the clergy wore black robes, they were called the Black Regiment.
25 John Adams The Revolution was effected before the War commenced. The Revolution was in the minds and hearts of the people: a change in their religious sentiments of their duties and obligations.
26 Patriot Preachers John Adams wrote The Meaning of the American Revolution in 1818. He listed the men responsible for the revival of American principles that led to the American Revolution. Two of the men he mentioned were Dr. Mayhew and Dr. Cooper.
27 Patriot Preachers Rev. Jonathan Mayhew, minister of West Church (Boston); Dr. Samuel Cooper, minister of the Brattle Street Church (Boston); Rev. John Peter Gabriel Muhlenberg (Woodstock, Virginia)
28 John Peter Muhlenberg Lutheran pastor who served in the House of Burgesses. Preached on Eccl. 3, then left his church with 300 men from his congregation. Became a major general in the Continental Army. Elected to Congress with his brother Frederick.
29 Patriot Preachers John Adams: The Philadelphia ministers thunder and lighten every Sabbath against George III's despotism. Thomas Jefferson: pulpit oratory ran like a shock of electricity through the whole colony.
30 8. Christian Patriots Biblical Christianity was the driving force behind the key leaders of the American Revolution.
31 Samuel Adams He had been telling his countrymen for years that America had to take her stand against tyranny. He regarded individual freedom as the law of the Creator and a Christian right documented in the New Testament.
32 Signing of Declaration We have this day restored the Sovereign to Whom all men ought to be obedient. He reigns in heaven and from the rising to the setting of the sun, let His kingdom come. Samuel Adams
33 9. Declaration & the Bible Christianity played a significant role in the development of our nation's birth certificate, the Declaration of Independence.
34 Declaration & Constitution The Declaration of Independence is the why of American government. The Constitution is the how of American government.
35 Declaration of Independence "laws of nature and nature's God"; "they are endowed by their Creator"; "appealing to the Supreme Judge of the World"; "protection of divine Providence"
36 John Locke John Locke explained that the law of nature is God's general revelation, which He writes on our hearts. He also spoke of the law of God as God's eternal moral law, revealed and published in Scripture.
37 George Mason That all men are by nature equally free and independent and have certain inherent rights... namely, the enjoyment of life and liberty... and pursuing and obtaining happiness and safety. Virginia Declaration of Rights
38 Mecklenburg Declaration Presbyterian Elders of North Carolina drafted resolutions in May 1775. When Jefferson corrected his first draft of the Declaration, he erased the original words and inserted those first found in the Mecklenburg Declaration. He must have had the resolutions before him as he was drafting the Declaration.
39 Paul Johnson There is no question that the Declaration of Independence was, to those who signed it, a religious as well as secular act, and that the Revolutionary War had the approbation of divine providence.
40 10. Constitution & the Bible The Biblical understanding of the sinfulness of man was the guiding principle behind the United States Constitution.
41 Source of Political Ideas Constitutional scholars assembled 15,000 writings from the Founding Era and counted 3,154 citations in these writings. The Bible was quoted 34 percent of the time.
42 Source of Political Ideas Writers from this era quoted the Bible 34 percent of the time, and about three-fourths of all references to the Bible came from sermons of that era.
43 Signing of Constitution M. E. Bradford, A Worthy Company: 50 of the 55 delegates to the Constitutional Convention were church members who endorsed the Christian faith.
44 Conclusion Christianity was important in the founding of this country and the framing of its government. If Christianity was so important in the founding of this republic, why do so many think it is irrelevant in the maintenance of this republic?
45 Over 1300 articles, commentaries, and answers. Over 35 PowerPoint presentations. Over 250,000 visitors from 140 countries. Over 135 Probe articles in Spanish.
seeking religious freedom Color in the location of Massachusetts Pilgrims were also called. They wanted to go to Virginia so they, unlike the Church of England. Puritans didn t want to create a new church,
Protestant Reformation and the rise of Puritanism 1517, Martin Luther begins break from Catholic church; Protestantism Luther declared the bible alone was the source of God s word Faith alone would determine
New England Colonies 2 3 New England Economy n Not much commercial farming rocky New England soil n New England harbors n Fishing/Whaling n Whale Oil n Shipping/Trade n Heavily Forested n Lumber n Manufacturing
The English Colonies in North America I N T E R A C T I V E S T U D E N T N O T E B O O K What were the similarities and differences among the colonies in North America? P R E V I E W Examine the map of
Colonial Literature The Puritan Period How did religion shape the literature of the Puritan period? We will look into themes, formats, and purposes of the Puritan writers to answer this question. Important
Life in the Colonies Immigration was important to the growth of the colonies. Between 1607 and 1775, an estimated 690,000 Europeans came to the colonies. During this time, traders also brought in 278,000
Religious Reformation and New England Martin Luther began the Protestant Reformation in 1517. Hatred of Indulgences and Catholic corruption Translated Bible into German so common people can read it. Reformation
Puritanism Puritanism- first successful NE settlers Puritans: Want to totally reform [purify] the Church of England. Grew impatient with the slow process of Protestant Reformation back in England. Separatists:
New England: The Pilgrims Land at Plymouth Depicting the Pilgrims as they leave Holland for new shores, "The Embarkation of the Pilgrims" can be found on the reverse of a $10,000 bill. Too bad the bill
America: The Story of US Chapter 3: sections 1-4 In this Chapter What will we see? Setting: Time & Place Time: 1588 Place: Europe: England & Spain How it all started. Spain and England always fought against
THE CALL TO PRAY FOR THE LAST AWAKENING GLORIA COPELAND AND BILLYE BRIM DAY 1 PRAYING FOR THE LAST AWAKENING TO GOD God has blessed us with the revelation of walking in His love, walking in faith principles,
Document One A Description of New England John Smith from the Jamestown colony in Virginia explored the coast of what is now Massachusetts. In 1616 Smith published a book A Description of New England in
Session 3: Exploration and Colonization The New England Colonies Class Objectives Locate and Identify the 4 New England colonies and the 2 original settlements of the Pilgrims and Puritans. Explain the
Settling the Northern Colonies, 1619-1700 Chapter 3 New England Colonies, 1650 Protestant Reformation Produces Puritanism Luther Bible is source of God s word Calvin Predestination King Henry VIII Wants
Lockean Liberalism and the American Revolution By Isaac Kramnick, The Gilder Lehrman Institute of American History, adapted by Newsela staff on 04.27.17 Word Count 988 Level 1020L English philosopher John
Intermediate World History B Unit 7: Changing Empires, Changing Ideas Lesson 1: Elizabethan England and North American Initiatives Pg. 273-289 Lesson 2: England: Civil War and Empire Pg. 291-307 Lesson
Terms and People public schools schools supported by taxes dame schools schools that women opened in their homes to teach girls and boys to read and write Anne Bradstreet the first colonial poet Phillis
Original American Settlers Roanoke, Jamestown, Pilgrims, and Puritans 7th Grade Social Studies Roanoke Colony Roanoke Island (Lost Colony) Sir Walter Raleigh asked Queen Elizabeth if he could lead a group
TEACHING AMERICAN HISTORY PROJECT Lesson Title - Mayflower Compact, a Closer Look By Jessica Cooley Grade Fifth Grade Length of class period 1 Hour Inquiry (What essential question are students answering,
Bell Ringer: The Declaration of Independence states people have the right to Life, Liberty, and the Pursuit of Happiness. What does this mean to you? Declaring Independence Road to Revolution One American
A Chronology of Events Affecting the Church of Christ from the First Century to the Restoration These notes draw dates and events from timelines of www.wikipedia.com. The interpretation of events and the
Moving Toward Independence Chapter 5, Section 4 **Have you ever read the Declaration of Independence? We hold these Truths to be self-evident, that all Men are created equal, that they are endowed by their
Advanced Placement United States History Summer Assignment Due date: First day of class, August 2017 Welcome to Advanced Placement United States History for Fall-Spring 2017-18 at Fayetteville High School.
frontmatter 1/30/03 9:15 AM Page 1 Introduction American independence from Great Britain was achieved on the battlefield, but the establishment of a new republic, conceived in liberty, was as much a product
Easy Classical Press Early Modern History Copybook GDI Basic Edition Grades K-3 Easy Classical Writing Early Modern History Copybook GDI Basic Edition Grades K-3 By Julie Shields Easy Classical Writing
Colonies Take Root 1587-1752 Essential Question: How did the English start colonies with distinct qualities in North America? Formed by the Virginia Company in search of gold Many original settlers were
Early Colonies & Geography Sept 9/Sept 12 Warm Up Continue working on your vocab terms - Use notes that we ve completed in class Use a textbook or internet to help if you want Pick up a Colonial Region
Click on the link below to listen to sermon A New Order of the Ages (2 Chronicles 7:14) Dr. Sidney Yuan (email@example.com) Preached at Hillside Community Church of the Nazarene 2804 S. Fullerton Rd, Rowland
AP U.S. History Mr. Mercado Name Chapter 3 Settling the Northern Colonies, 1619-1700 A. True or False Where the statement is true, mark T. Where it is false, mark F, and correct it in the space immediately
The English literature of colonization 2. The Puritans The Puritans They were radical Calvinist who believed that the Church of England had betrayed the spirit of the Reformation http://www.historyguide.org/earlymod/lectur
Colonization 1 st English Colony in North America: Roanoke Mystery of Roanoke..only clue of the lost colony was a tree with the word Croatoan carved on it. Based on Limited clues what theories of the lost
Chapter 4, Section 4 How ideas about religion and government influenced colonial life. The Great Awakening, one of the first national movements in the colonies, reinforced democratic ideas. The Enlightenment
Religion and Representative Government in the American Colonies Puritan Beliefs 101 Puritans believed in: Reform Congregational Control (no bishops or popes!) Salvation by Grace Alone The sovereignty of
Chapter 3 APUSH Mr. Muller Aim: How are the New England colonies different from the Middle and southern Colonies? Do Now: Read the Colombian Exchange passage and answer the 3 questions that follow. You
Benjamin Franklin and The Great Awakening The Great Awakening, also known as the Age of Reason, was a religious movement, creating many religious groups and education opportunities to train ministers (a
The Great Awakening was... the first truly national event in American history. Thirteen once-isolated colonies, expanding... north and south as well as westward, were merging. Historian John Garraty THREE
Thanksgiving Reflections on Gratitude Historical Reflections The Mayflower sailed from Plymouth on September 16, 1620, with 101 people plus officers and crew 35 were from Leyden, 66 from Southampton and
SSUSH2 The student will trace the ways that the economy and society of British North America developed. a. Explain the development of mercantilism and the trans-atlantic trade. b. Describe the Middle Passage,
Topic Page: Pilgrims (New Plymouth Colony) Definition: Pilgrims from Philip's Encyclopedia (Pilgrim Fathers) Group of English Puritans who emigrated to North America in 1620. After fleeing to Leiden, Netherlands,
American Revolution Study Guide ESSAYS four of the five essays on this review sheet will be on your test. The material from the essay not on the test may appear in another section of the test. You will
PowerPoint Questions (1630-1750) 1. Where did the colonists settle in 1630? (Slide 3) 2. Who were the Puritans? (Slide 4) 3. Who was elected the first governor of the colony of Massachusetts? (Slide 4)
#11. (152014) 3B ISN 5 22 23 Colonial Society Class Like today, class differences existed Gentry (top of society)- wealthy planters, merchants, ministers, successful lawyers, and royal officials. Middle
Faith in America Mitt Romney December 6, 2007 George Bush Presidential Library in College Station, Texas The following is a transcript (as prepared for delivery) of former Massachusetts Gov. Mitt Romney's
AP and Honors Summer Work Responsibilities for Rio Americano HS AP United States History Dear AP US History student Congratulations and welcome to AP U.S. History for the 2018-2019 school year! Attached
European Land Holdings on the Eve of the French and Indian War (1754-1763) PERIOD 2: 1607-1754 The British are Coming: Jamestown and Puritan New England DEFEAT OF SPANISH ARMADA Spain overextends itself; | <urn:uuid:0d98551c-8374-4c58-a14a-c0bd0ef80014> | CC-MAIN-2021-21 | https://religiondocbox.com/Christianity/66699001-One-nation-under-god.html | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.48/warc/CC-MAIN-20210515070344-20210515100344-00177.warc.gz | en | 0.924285 | 5,518 | 3.265625 | 3 |
Mataquescuintla seen from Miramundo and Pino Dulce
|• Body||Mataquescuintla municipal council|
|• Mayor of Mataquescuintla||Hugo Manfredo Loy|
|• Total||262 km2 (101 sq mi)|
|Elevation||1,727 m (5,000 ft)|
|• Density||160/km2 (410/sq mi)|
|Time zone||UTC-6 (Central America)|
Mataquescuintla played a significant role during the first half of the nineteenth century, when it was the center of operations of conservative general Rafael Carrera, who led a Catholic peasant revolution against the liberal government of Mariano Gálvez in 1838, and then ruled Guatemala from 1840 until his death in 1865.
Se divide por 6 zonas
The toponym "Mataquescuintla" comes from Nahuatl, and is composed of the words "matatl" (meaning "net bag"), "Itzcuintli" (meaning "dog") and "tlan" (meaning: "abundance"), and means "net to catch dogs".
After Central American independenceEdit
In the 1825 Constitution of Guatemala, Mataquescuintla was established as part of Cuilapa, in District 3; also in Cuilapa are Los Esclavos, Oratorio, Concepción, La Vega, El Pino, Los Verdes, Los Arcos, Corral de Piedra, San Juan de Arana, El Zapote, Santa Rosa, Jumay, Las Casillas, and Epaminondas.
Overthrow of Mariano GálvezEdit
In 1837, an armed struggle began against the regime of Francisco Morazán, president of the Federal Republic of Central America, a political entity that included Guatemala, Comayagua (later named Honduras), El Salvador, Nicaragua and Costa Rica. The rebellion also fought against those who governed the State of Guatemala, like Chief of State Mariano Gálvez. The leader of the insurgency was Rafael Carrera; among its forces were numerous natives, since on 9 June 1837, the State of Guatemala had reintroduced indigenous populations that had been suppressed since colonial times by the Cortes of Cádiz. The insurgents began hostilities by means of a guerrilla war: attacking populations without giving them an opportunity to have meetings with government troops. At the same time, Gálvez's clerical enemies spread ideas, accusing him of poisoning river water to spread cholera morbus, which hadn't happened even with the large population growth and the poor health structure in the region. The accusation, however, was beneficial to Carrera, putting a large part of the population against Mariano Gálvez and liberals in general.
Standing out among the battles of Carrera: in the barracks at Mataquescuintla; at Ambelis in Santa Rosa, defeating the army commanded by Teodoro Mejía; on 7 December 1837 in the plaza at Jalapa where he was defeated; and on 13 January 1838 where the Garrison of Guatemala was attacked. Some of these military events were accompanied by robberies, robberies, searches and murders of defenseless people. In particular, the Gálvez government, upon learning that Carrera was the leader of the revolt, invaded Mataquescuintla and captured his wife, Petrona Álvarez, whom the soldiers seized by force. When Carrera heard of this, he vowed to avenge his wife, and newly accompanied by her, restarted the fight with new vigor. Petrona Álvarez, inflamed with the desire for revenge, committed numerous atrocities against the liberal troops, to the point that many of Carrera's coreligionists feared her more than the caudillo himself, although by that time Carrera had already showed his military leadership and expertise that would come to later characterize him.
The fight had taken on the form of holy war, for it was the parish priests of the secular clergy who argued for the peasants to defend religious rights and to fight against the liberal atheists; Carrera had been educated by the parish priest of Mataquescuintla who taught Catholicism and started to worry about the liberals' power. Another factor that influenced the revolt were the concessions given by the liberal government of Francisco Morazán to the English—whom they called "heretics" because they were Protestants. In Guatemala, they had been given Belize and San Jerónimo in Salamá—which was an expensive and profitable property that the liberals had seized from the Dominicans in 1829. The contraband English items from Belize had impoverished the artisan Guatemalans, who joined Carrera's revolt. The priests announced to the natives that Carrera was their protector angel, who had descended from the heavens to take revenge on heretics, liberals, and aliens, and to restore their ancient dominion. They devised various tricks to make the natives believe this, which were announced as miracles. Among them, a letter was thrown from the roof of one of the churches, in the middle of a vast congregation of natives. This letter supposedly came from the Virgin Mary, who commissioned Carrera to lead a revolt against the government.
To counteract the violent attacks made by peasant guerrillas, Gálvez approved and then praised the use of a scorched earth policy against the uprising peoples. Several of his supporters advised him to desist from this tactic, because it would only contribute to increasing hostility. In early 1838, José Francisco Barrundia, the liberal leader of Guatemala, disillusioned with Galvez's management, managed to bring Guatemala City under Carrera's command, and fought the head of state. Later that year, the situation in Guatemala became unsustainable: the economy was paralyzed by the lack of security and roads, and the liberals negotiated with Carrera to end the warring. Gálvez left power 31 January 1838, before an "Army of the People", giving control to Rafael Carrera that initiated the battle in Guatemala City with an army of between ten thousand and twelve thousand men, after the agreement left Carrera against Barrundia.
Carrera's troops victorious, they shouted "Long live religion!" and "Away with foreign heretics!" Consisting mainly of poorly armed peasants, they took Guatemala City by force pillaged and destroyed the liberal government buildings, including the Archbishop's Palace, where Gálvez had resided, and the house of the English presenter William Hall. On 2 March 1838, Gálvez's absence was unanimously accepted in Congress, and after a period of uncertainty, Rafael Carrera came to power, although first would suffer some defeats.
Creation of Santa Rosa departmentEdit
The Republic of Guatemala began under President General Rafael Carrera on 21 March 1847 so that the former State of Guatemala could freely trade with foreign nations. On 25 February 1848, the Mita region was separated from the department of Chiquimula, into its own department, and divided into three districts: Jutiapa, Santa Rosa and Jalapa. The Santa Rosa department included Santa Rosa as the capital, and Cuajiniquilapa, Chiquimulilla, Guazacapán, Taxisco, Pasaco, Nancinta, Tecuaco, Sinacantán, Isguatán, Sacualpa, La Leona, Jumay and Mataquescuintla.
After the Liberal RevolutionEdit
After the Liberal Revolution of 1871, liberals began to negatively recount the Carrera regime. The role of Mataquescuintla in the formation of the Republic of Guatemala was set aside by liberal historians, such as José María Bonilla, Ramón Rosa, Lorenzo Montúfar y Rivera and Ramón A. Salazar.
In 1889, Mataquescuintla was scene of an uprising led by colonel Hipólito Ruano against the government of general Manuel Lisandro Barillas Bercián. Opposing policies set up Barillas, Ruano and other retired soldiers rose up in arms, and quickly stopped by the government. Ruano was captured and shot in Mataquescuintla Square.
The municipalities are regulated by various laws of the Republic, which establish their form of organization, administrative bodies, and their taxes. Although they are autonomous entities, they are subject to national legislation and the main laws that govern them since 1985 are:
|1||Constitution of Guatemala||Contains specific legal regulations for municipalities in articles 253 to 262.|
|2||Electoral Law of Policital Parties||Constitutional law applicable to municipalities via their elected officials.|
|3||Municipal Code||Decree 12-2002 of the Congress of Guatemala. It is ordinary law and is applicable to all municipalities, and contains legislation regarding the creation of municipalities.|
|4||Municipal Service Law||Decree 1-87 of the Congress of Guatemala. Regulates the relations between the municipality and public servants in labor matters. It has its constitutional basis in article 262, that orders an issuance of the same.|
|5||General Decentralization Law||Decree 14-2002 of the Congress of Guatemala. It regulates the constitutional authority of the State, and therefore of the municipality, to promote and apply decentralization, both economic and administrative.|
The municipal government is in charge of a Municipal Council while the municipal code—an ordinary law containing provisions that apply to all municipalities—establishes that "the municipal council is the highest collegiate body for deliberation and decision of the municipalities ... and has its seat in the district of the principal municipality"; article 33 of the aforementioned code establishes that "it is the exclusive responsibility of the municipal council to exercise the government of the municipality."
The municipal council works with the mayor, the trustees, and councilors, and is elected directly for a period of four years, and can be re-elected. There are also Auxiliary Community Development Committees (COCODE), Municipal Development Committee (COMUDE), as well as cultural associations and work commissions. Auxiliary mayors are elected by the communities according to their own set of principles and traditions, and meet with the municipal mayor on the first Sunday of each month, while the Community Development Committees and the Municipal Development Committee organize and facilitate the participation of they community's prioritizing needs and problems. The mayor from 2012 to 2016 was Hugo Manfredo Loy.
|Climate data for Mataquescuintla|
|Average high °C (°F)||22.9
|Daily mean °C (°F)||17.7
|Average low °C (°F)||12.5
|Average precipitation mm (inches)||8
It is located north of San Rafael Las Flores, Casillas, Santa Rosa de Lima and Nueva Santa Rosa in Santa Rosa, east of San José Pinula in Guatemala Department, west of San Carlos Alzatate in Jalapa, and south of Sansare in El Progreso and Palencia in Guatemala Department. It is very near Ayarza Lagoon and an abandoned bismuth mine.
- Citypopulation.de Population of departments and municipalities in Guatemala
- Escalante Herrera 2007.
- Cuyán Tahuite 2005.
- Woodward 1993.
- Rivera Natareno 2005, p. 2.
- Pineda de Mont 1869, p. 467.
- Hernández de León 1930, p. 63.
- González Davison 2008, p. 48.
- González Davison 2008, p. 42.
- González Davison 2008, p. 52.
- Squier 1852, p. 429-430.
- González Davison 2008, p. 51.
- Woodward 2002.
- González Davison 2008.
- Pineda de Mont 1869, pp. 73-76.
- Pineda de Mont 1869, p. 477.
- González Davison 2008, p. 426 "Lo que los liberales consideraban como desarrollo, consistía en la expropiación de las tierras de indios y de la Iglesia -que Carrera protegió por sobre todo durante su gobierno- y el uso de los campesinos como mano de obra gratuita para ser utilizadas para el cultivo de café a gran escala -lo que fue legalizado por el liberal Justo Rufino Barrios con su reglamento de jornaleros"
- Rosa 1974.
- Montúfar & Salazar 1892.
- Revista Militar 1899, pp. 189-190.
- Revista Militar 1899, p. 190.
- García Orellana 2005, p. 2.
- Asamblea Constituyente 1985.
- Congreso de Guatemala 2012.
- Prensa Libre 2011.
- "Climate: Mataquescuintla". Climate-Data.org. Retrieved 11 February 2017. CS1 maint: discouraged parameter (link)
- "Monografía del municipio de Mataquescuintla" (PDF). Municipalidad de Mataquescuintla (in Spanish). Guatemala. Archived from the original (PDF) on 2016-03-04. CS1 maint: discouraged parameter (link)
- Asamblea Constituyente (1985). Constitución Política de la República de Guatemala (PDF) (in Spanish). Guatemala: Gobierno de Guatemala. Archived from the original on 2016-02-02. CS1 maint: discouraged parameter (link) CS1 maint: bot: original URL status unknown (link)
- Congreso de Guatemala (2012). Código Municipal de Guatemala (PDF) (in Spanish). Guatemala: Gobierno de Guatemala. Archived from the original on 2015-08-07. CS1 maint: discouraged parameter (link) CS1 maint: bot: original URL status unknown (link)
- Cuyán Tahuite, Maritza Nohemí (2005). Diagnóstico socieconómico, potencialidades productivas y propuestas de inversión. Municipio de Mataquescuintla, departamento de Jalapa (PDF) (in Spanish). Guatemala: Facultad de Ciencias Económicas de la Universidad de San Carlos de Guatemala. Comercialización (Panadería). Archived from the original on 2016-03-04 – via Ejercicio profesional supervisado. CS1 maint: discouraged parameter (link) CS1 maint: bot: original URL status unknown (link)
- Escalante Herrera, Marco Antonio (2007). "Breve información sobre Mataquescuintla". Pbase.com (in Spanish). Guatemala. Archived from the original on 2010-02-28. CS1 maint: discouraged parameter (link) CS1 maint: bot: original URL status unknown (link)
- Fuentes y Guzmán, Francisco Antonio de (1883) . Zaragoza, Justo; Navarro, Luis (eds.). Recordación Florida. Discurso historial y demostración natural, material, militar y política del Reyno de Guatemala (in Spanish). II. Madrid, España: Central. CS1 maint: discouraged parameter (link)
- García Orellana, Maria (2005). "Diagnóstico socieconómico, potencialidades productivas y propuestas de inversión. Municipio de Mataquescuintla, departamento de Jalapa" (PDF). Ejercicio profesional supervisado (in Spanish). 3. Guatemala: Facultad de Ciencias Económicas de la Universidad de San Carlos de Guatemala. Archived from the original on 2015-01-01. CS1 maint: discouraged parameter (link) CS1 maint: bot: original URL status unknown (link)
- González Davison, Fernando (2008). La montaña infinita; Carrera, caudillo de Guatemala (in Spanish). Guatemala: Artemis y Edinter. ISBN 978-84-89452-81-7.
- Hernández de León, Federico (29 January 1959). "El capítulo de las efemérides: Reconquista del Estado de los Altos". Diario la Hora (in Spanish). CS1 maint: discouraged parameter (link)
- Hernández de León, Federico (27 February 1959). "El capítulo de las efemérides: Caída del régimen liberal de Mariano Gálvez". Diario la Hora (in Spanish). Guatemala.
- Hernández de León, Federico (16 March 1959). "El capítulo de las efemérides: Segunda invasión de Morazán". Diario la Hora (in Spanish). Guatemala.
- Hernández de León, Federico (20 April 1959). "El capítulo de las efemérides: Golpe de Estado de 1839". Diario la Hora (in Spanish). Guatemala.
- Hernández de León, Federico (21 April 1959). "El capítulo de las efemérides: Muerte de Carrera". Diario la Hora (in Spanish). Guatemala.
- Hernández de León, Federico (1930). El libro de las efemérides (in Spanish). III. Guatemala: Tipografía Sánchez y de Guise.
- Instituto Nacional de Estadística (INE) (2002). "XI Censo Nacional de Poblacion y VI de Habitación (Censo 2002)". Ine.gob.gt (in Spanish). Guatemala. Archived from the original on 24 August 2008. CS1 maint: discouraged parameter (link)
- Montúfar, Lorenzo; Salazar, Ramón A. (1892). El centenario del general Francisco Morazán (in Spanish). Guatemala: Tipografía Nacional. CS1 maint: discouraged parameter (link)
- Pineda de Mont, Manuel (1869). Recopilación de las leyes de Guatemala, 1821-1869 (in Spanish). I. Guatemala: Imprenta de la Paz en el Palacio.
- Prensa Libre (2011). "Ganadores del poder local en las elecciones de Guatemala 2011" (PDF) (in Spanish). Archived from the original (PDF) on 1 December 2011. Retrieved 13 September 2011. CS1 maint: discouraged parameter (link)
- Revista Militar (15 May 1899). "El coronel don Hipólito Ruano". Revista Militar: órgano de los intereses del Ejército (in Spanish). Guatemala. I (12).
- Rivera Natareno, Claudia Virginia (2005). "Diagnóstico socieconómico, potencialidades productivas y propuestas de inversión. Municipio de Mataquescuintla, departamento de Jalapa" (PDF). Ejercicio profesional supervisado (in Spanish). 1. Guatemala: Facultad de Ciencias Económicas de la Universidad de San Carlos de Guatemala. Archived from the original on 2016-03-04. CS1 maint: discouraged parameter (link) CS1 maint: bot: original URL status unknown (link)
- Rosa, Ramón (1974). Historia del Benemérito Gral. Don Francisco Morazán, ex Presidente de la República de Centroamérica (in Spanish). Tegucigalpa: Ministerio de Educación Pública, Ediciones Técnicas Centroamericana. CS1 maint: discouraged parameter (link)
- Squier, Ephraim George (1852). Nicaragua, its people, scenery, monuments and the proposed Interoceanic Canal. New York: D. Appleton and Co. CS1 maint: discouraged parameter (link)
- Stephens, John Lloyd; Catherwood, Frederick (1854). Incidents of travel in Central America, Chiapas, and Yucatan. London: Arthur Hall, Virtue and Co.
- Taracena, Arturo (1999). Invención criolla, sueño ladino, pesadilla indigena, Los Altos de Guatemala: de región a Estado, 1740-1871 (in Spanish). Guatemala: CIRMA. Archived from the original on 2016-01-09. Retrieved 2017-08-17.
- Woodward, Ralph Lee, Jr. (2002). Rafael Carrera y la creación de la República de Guatemala, 1821–1871. Serie monográfica (in Spanish). CIRMA y Plumsock Mesoamerican Studies. ISBN 0-910443-19-X. Archived from the original on 2019-03-01. Retrieved 2017-08-17.
- Woodward, Ralph Lee (1993). Rafael Carrera and the Emergence of the Republic of Guatemala, 1821-1871. Athens, GA: University of Georgia Press. | <urn:uuid:a81eb93b-a5d1-4b57-9959-b3d8b742b297> | CC-MAIN-2021-21 | https://en.m.wikipedia.org/wiki/Mataquescuintla | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991659.54/warc/CC-MAIN-20210516013713-20210516043713-00216.warc.gz | en | 0.712315 | 4,606 | 2.90625 | 3 |
Nosocomial infections are infections that are a result of treatment in a hospital or a healthcare unit. These infections are identified at least forty-eight to seventy-two hours following admission, so infections incubating, but not clinically apparent, at admission are excluded. It may also be within 30 days after discharge. With recent changes in health care delivery, the concept of nosocomial infections has sometimes been expanded to include other health care associated infections (Weinstein, 1991). These infections are also called hospital-acquired infection. Studies in the passed have reported that during hospitalization, at lest five percent of patients become infected. Similarly, a study carried out by the Centers for Disease Control and Prevention in the United States estimates that roughly 1.7 million hospital-associated infections, from all types of bacteria combined, cause or contribute to 99,000 deaths each year. In Europe the deaths estimated are 25000 each year. However, the case is more seen in the category of Gram-negative infections, which accounts for an estimated two thirds of the total cases reported.
If you need assistance with writing your nursing literature review, our professional nursing literature review writing service is here to help!Find out more
Nosocomial infections are commonly transmitted as a result of negligence of hygiene by some hospital personnel. Medical officials move from one patient to another. Thus in a situation where they do not maintain high hygiene standards, the officials themselves serve as means for spreading dangerous pathogens. Moreover, body’s natural protective barriers of the patients are bypassed by some medical procedures such as surgeries and injections. Hence with such hygienic negligence in our hospitals and other healthcare units, nosocomial infections become the order of the day and my cause severe cases of pneumonia and infections of the urinary tract, bloodstream or other parts of the body.
Causes of nosocomial infections
Nosocomial infections are caused by various factors. Some of the common ones include improper hygiene. Patients can get infections of diseases such methicillin resistant staphylococcus aureus (MRSA), respiratory illnesses and pneumonia from hospital staff and their visitors (Webster, 1998). Also doctors and nurses who do not practice basic hygienic measures such as washing hands before attending to patients may spread MRSA among them. Other infections are due to injections. There are cases where some hospital staffs do not give injections properly. Infections like HIV and hepatitis B can be as a result of contaminated blood due to sharing syringes and needles between patients when injecting medication into their intravenous lines. Nosocomial infections may also be as a result of torn or improperly bandaged incisions during surgeries. These incisions get contaminated with bacteria from the skin or the surrounding environment. Similarly, bacteria can be introduced into the patient’s body by contaminated surgical equipment. Also breathing machines such as ventilators can spread infections like pneumonia among patients using them. Staffs that do not use the proper infection control measures tend to contaminate these machines with germs. There are also cases where people on breathing machines are unable to cough and expel germs from their lungs. This can be another cause. In addition, urinary track infections can be due to faulty removal of urine from patients who are not able to use the toilet. In most cases catheters are the common cause for such cases. These catheters cause these infections when they become contaminated with bacteria by medical staff during insertion or are not properly maintained while in use (Webster, 1998). Another cause of nosocomial infections is the organ transplant. Illnesses such hepatitis B, hepatitis C, HIV and syphilis can be spread through bone and tissue grafts that may result from blood transfusions, skin and organ transplants. However such cases have become less common today due to factors such as improved technology. Many protective measures have been put in place to cut on these risks.
Prevention of nosocomial infections
Several measures can be put in place to prevent the spread of nosocomial infections. The most important measure to reduce the risk of transmitting skin microorganisms from one patient to another is hand washing. Medical staff washing hands as thoroughly and promptly as possible after attending to one patient where they may have come into contact with body fluids, excretions and blood, or equipment with these fluids, is a very important measure of nosocomial infection control. Even though it appears as a simple process, it is mostly overlooked or done incorrectly (Hiramatsu, Aritaka, Hanaki, Kawasaki, Hosoda & Hori, 1997). As a result practitioners and visitors should be continuously reminded on the advantages of proper washing of hands. This can be achieved through use of signals on responsible hand washing. In addition to hand washing, gloves are very important since they prevent gross contamination of the hands when touching blood, body fluids, secretions, excretions, and mucous membranes. They offer a protective barrier, in cases of exposure to blood borne pathogens. Similarly there is emphasis on surface sanitation. In health care environments, this is a critical component of breaking the cycle of infections. In cases concerning influenza, gastro enteritis and MRSA modern methods such as NAV-CO2 have been effective. Alcohol has been shown to be ineffective in endospore-forming bacteria such as Clostridium difficile and thus hydrogen peroxide is appropriate in this case. In addition, use of hydrogen peroxide vapor reduces infection rates and risks of acquisition. Some causes of infections are agent and host factors that are hard to control. In such cases isolation precautions can be designed to prevent transmission in common routes in health centers. For example a patient suffering from an air borne disease can be put in a separate room so as to control the spread of the disease. Another prevention measure is putting on protective clothing. An apron reduces the risk of infection as it covers most parts of the body. However with all this said, strategically implementing QA/QC measures in health care sectors and evidence-based management are the most effective technique of controlling nosocomial infections. For example, in cases of diseases such as ventilator-associated pneumonia and hospital-acquired pneumonia, the management of the health center should pay more emphasis on the control and monitoring of the quality of the hospital’s indoor air (Hiramatsu, Aritaka, Hanaki, Kawasaki, Hosoda, & Hori, 1997).
A Review of the Literature
Robert A Weinstein
(Cook County Hospital & Rush Medical College, Chicago, Illinois, USA)
In his research paper Robert Weinstein begins by a comparison of the cases of nosocomial infections now and in the past. Even though he agrees that there has been a reduction in number of cases, he goes a head to state that the numbers of death are still high. According to him, a study carried out in the United States estimated that in 1995, nosocomial infections cost $4.5 billion and contributed to more than 88,000 deaths (one death in every six minutes). I concur with these findings. Poor hygiene standards in most health centers have contributed to these high figures. There have been cases of medical practitioners who overlook basic hygienic measures such as a proper hand washing when attending to patients. There are cases where some medical services like injections are not administered in a proper manner. This is due to unqualified medical expertise especially in small health care centers. I think the research’s large numbers of deaths from nosocomial infections is due to such factors. I also agree with Weinstein that there is an approximately one third reduction in rate of infections in hospitals with the four basic infection control components (one infection control practitioner for every 250 beds, an effective hospital epidemiologist, ongoing control efforts and an active surveillance mechanism). As a result I think these infections can be controlled to a higher percentage if all hospitals and health centers could employ these basic components.
Robert A Weinstein also states that there has been an increase in viral infections. Most nosocomial infections in Semmelweis’s era were due to group A streptococci. In 1990 to 1996, 34% of nosocomial infections were due to the three most common gram-positive pathogens-S. aureus, enterococci and coagulase-negative staphylococci while the four most common gram-negative pathogens-Escherichia coli, P. aeruginosa, Enterobacter spp., and Klebsiella pneumoniae, accounted for 32%. With this trend I agree with Weinstein report. There has also been an increase in the blood transmitted infections hence increase in the cases of herpes viruses HIV-infections.
On the other hand Weinstein’s reveals that there is a higher rate of infection among the intensive care unit (ICU) patients. This is evident in our hospitals today. I think the increasingly aggressive medical and therapeutic interventions, including modern medical advancements like organ transplantations, implanted foreign bodies and xenotransplantations, have created a cohort of particularly vulnerable persons (Fridkin, Welbel & Weinstein, 1997). In most cases, patients affected by nosocomial infections are those immunocompromised by underlying diseases, age or medical/surgical treatments. More cases of bloodstream infections coagulase-negative staphylococci occur in the ICU because it is in these areas that patients with invasive vascular catheters and monitoring devices could come into contact with these bloodstream infections. Due to these factors, I concur with Weinstein’s research findings that infection rates in adult and pediatric ICUs are approximately three times higher than elsewhere in hospitals.
In conclusion, Robert A Weinstein’s research paper portrays a comprehensive research. It addresses changes in the medical fraternity that have affected nosocomial infections in one way or another. It also shows the significant impact of advancement in technology in medical and health care in relation to nosocomial infections.
Jessica Lietz presents her research on nosocomial infections putting more emphasis on the causes and prevention measures of the infections. She introduces her research stating that there are higher rates of infections in public hospitals as compared to private health centers. I concur with her findings on the basis of the difference in management in the two setups. Private centers tend to be managed in a better manner than public centers. This is because private hospitals are business oriented and the management is always doing all it can to better the institution so as to cope with the high market competition. As a result of this emphasis on good management, medical staff tends to adhere to rules and regulations. Hence the hygiene standards of these institutions are always high. Similarly there is close supervision of staff, another factor that advantages private hospitals over public ones. For the public medical institutions, the case is not the same. In most centers hygiene is not to standard. This may be due to several reasons. There is no close supervision of staff and same take this advantage of lack of a questioning authority to bypass basic hygiene measures. Similarly, public setups are prone to the effects of political differences between the staffs. Cases of corruption tend to take root in such centers and as a result, unqualified medical personnel find themselves in these institutions.
In her take on the causes of nosocomial infections, she states lack of adequate public education on the infections as a key factor in their spread. I think the point holds water since there are same cases of transmission of these infections due to ignorance. For instance one may visit a patient suffering from an air borne disease and contact the disease without knowing. Similarly patients may share personal items such as towels, not knowing that they are subjecting themselves to harmful infections. I think enlightening the public in general on the dangers of these infections and the basic control measures like maintaining a high personal hygiene can go a greater mile in trying to control these infections. It is therefore important to create a society that empresses these basic measures. This can be achieved through airing nosocomial infection related articles in the media, organized open air lessons in villages and also be taught in learning institutions.
Jessica Lietz on the other hand, argues out that just as hand washing is important as a measure of control; more emphasis should also be put on wearing of gloves. She states that gloves can also be used in the same context as hand washing as long as one glove is used on only one patient. I seem to disagree with this since there are challenges that come with it. Even though gloves offer a protective barrier, there are cases where these gloves tear. Moreover in instances where the gloves are not properly worn both the expertise and the patient may be a risk of infections. I strongly believe that a high standard of hygiene is the most appropriate way of fighting infections. As such, a basic, prompt and thorough hand wash is always the better option due its advantages. However, this does not rule out the use of gloves as they are equally important.
In conclusion, this research article gives a general view of nosocomial infections. It does not reflect a deep research into the subject. Jessica gives more emphasis on general arguments. There are some issues concerning these infections that have not been covered or have been covered shallowly. Jessica does not explain in length how nosocomial infections have been affected by technology. Advancement in technology has revolutionalized the medical fraternity and has come with its own advantages and disadvantage. Therefore one can not make a general decision from this article as it is shallow and needs further research.
National Center for Infectious Diseases
This is an article on the research carried out on the nosocomial infections by the National Center for Infectious Diseases in the United States. It points out young children, the elderly and persons with compromised immune systems as people who are more prone to these infections. Long hospital stays, failure of healthcare workers to wash hands, use of indwelling catheters and overuse of antibiotics have also been highlighted to cause some cases of the infection (Fridkin, Welbel, & Weinstein, 1997). Moreover the research acknowledges the effects of the diversification of technology on the spread and control of the infections highlighting organ transplant, catheters, xenotransplantations among others, as examples.
Invasive procedures expose patient to the possibility of infection. The research highlights the percentages below.
Causes of Urinary Tract Infections in Hospital Patients:
Escherichia coli: 40%
Proteus mirabilis: 11%
‘Other’ Gram-negative bacteria: 25%
Coagulase-negative staphylococci: 3%
‘Other’ Gram-positive bacteria: 16%
Candida albicans: 5%
Causes of Urinary Tract Infections that are Community-acquired:
Escherichia coli: 80%
Coagulase-negative staphylococci: 7%
Proteus mirabilis: 6%
‘Other’ Gram-negative bacteria: 4%
‘Other’ Gram-positive bacteria: 3%
This is a comprehensive research that has covered nosocomial infections at length. It discusses key components of the infections giving considerations to both past and today world. Moreover, it compares the rate of the infections both in the urban and rural setting. Hence it is an article that tries to solve nosocomial infection dilemma.
Toni Rizzo presents his research on the common types of infections in our hospital. He highlights respiratory procedures, intravenous (IV) procedures, surgery and wound and urinary bladder catheterization as the common types of infections. He states that most hospital-acquired UTIs happen after urinary catheterization. A healthy urinary bladder does not have bacteria or microorganisms (it is sterile). A catheter picks up bacteria that may be in or around the urethra and take them up into the bladder hence infecting it.
Our nursing and healthcare experts are ready and waiting to assist with any writing project you may have, from simple essay plans, through to full nursing dissertations.View our services
This is a standard research as it touches on almost key issues in the subject matter. I agree with the findings. Fungus infections from Candida are prone to affect patients who are taking antibiotics or that have a poorly functioning immune system. Hence bacteria from the intestinal track are the most types of UTIs. Similarly respiratory procedures done in our hospitals today are the common causes of bacteria getting into the throat. Pneumonia thus becomes another common type of hospital-acquired infections. Once the throat is colonized, it is easy for a patient to inhale the microorganisms into the lungs. Moreover, patients who are unable to cough or gag very well are most likely to inhale colonized bacteria and microorganisms into their lungs.
In general Toni Rizzo tries to address affects in medicine today. Infections due to modern advancements like organ transplant among others have been effectively discussed. Thus this is a comprehensive research.
Emmanuelle Girou and Francois Stephan
(Case-control Study of ICU Patients)
This is an article on a study done in the ICU patients. Generally ICU patients are at a high risk of acquiring nosocomial infections and in same cases some die from these infections. There is a need for therapy whether infections in the ICU occur or not. The objectives of the study was to define the interrelationships between underlying disease, severity of illness, therapeutic activity and nosocomial infections in ICU patients, and their influence on these patients’ out come. The study was conducted in a 10-bed medical ICU. Initial severity of illness was matched, with daily monitoring of severity of illness and therapeutic activity scores, and with analysis of the contribution of nosocomial infections to patients’ outcomes. The study ran for one year and data carefully taken.
Global incidence rate of 14.6 infections per 100 admissions was estimated as forty one out of the 281 studied patients developed at least one nosocomial infection. During their ICU stay, the 41 case-patients developed 98 nosocomial infections (2.4 episodes per patient): 15 pneumonias, 35 bacteremias, 33 urinary-tract infections, 12 central-venous-catheter-related infections, two sinusitides, and one surgical wound infection. Of the 35 episodes of bacteremia, only four were primary; the other 31 complicated the following nosocomial infections: 14 urinary tract infections, eight catheter-related infections, eight instances of pneumonia, and one surgical-site infected. The characteristics of patients in both groups were compared through use of the Mann-Whitney nonparametric test for continuous variables and the chi-square test for categorical variables. Wilcoxon’s test was used to compare two continuous variables within one group. To identify risk factors independently associated with nosocomial infection, variables found to be significantly different between cases and controls in the univariate analysis were entered into a forward stepwise logistic-regression model (Statistica 4.5; Statsoft, Inc., Tulsa, OK). When patients developed multiple nosocomial infections during their hospitalization, only the first episode was used in the risk factor analysis. A value of p < 0.05 constituted a significant difference.
This is a very detailed and comprehensive case study. It clearly explains why the rate of infection is high in the ICU. This high rate is attributed to various factors. The immune system of most patients in the ICU is always low. Similarly these patients are subjected to taking more antibiotics. Long hospital stays is also another factor. Also it is in the ICU that most medical procedures like organ transplant, catheter, xenotransplantations among others, take place. The research also accounts for the effects of technology and other factors that affect these infections. It accounts for the findings given reasons based on concrete facts. As result, it’s a dependable research that can be used to study nosocomial infections especially in the ICU.
In conclusion, all the articles above points out improved hygiene especially hand washing and immunization have resulted to the overall advances in control of infectious diseases. Negligence of hygiene is also portrayed as a major challenge to the efforts of control of nosocomial infections. I think for us to significantly control the infections, we must join forces and work together with medical personnel on implementation of existing infection control technologies. We should empress positive changes towards the control of nosocomial infection and observe high standards of hygiene so that we do not rely solely on technologic advances.
Cite This Work
To export a reference to this article please select a referencing stye below:
Related ServicesView all
Related ContentAll Tags
Content relating to: "infection"
Infection occurs when an infectious agent multiplies within the body tissues causing adverse affects. When an individual has an infection, micro-organisms enter the body through a susceptible host, meaning that the infection will manifest within the body.
Principles of Infection Control in the Operating Department
Infection control is a vital part of everyday life in Operating theatre departments across the world. It is used to ensure patient and staff safety throughout surgical procedures and patients stay in...
Mechanical barrier against infection
Take Home Midterm 1.) One example of a mechanical barrier against infection would be the surface layer of our skin. The surface layer of human skin is acidic and very dry, thus making it difficult for...
DMCA / Removal Request
If you are the original writer of this literature review and no longer wish to have your work published on the NursingAnswers.net website then please: | <urn:uuid:ae5cd5a0-7f5c-4116-843d-6634209e6b31> | CC-MAIN-2021-21 | https://nursinganswers.net/litreviews/nosocomial-infections-review-of-literature-health-and-social-care-essay.php | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988837.67/warc/CC-MAIN-20210508031423-20210508061423-00537.warc.gz | en | 0.944623 | 4,426 | 3.421875 | 3 |
The focus of this glossary is the open source ILS Evergreen, which is being used in a growing number of libraries throughout the world. Open source software is developed by a different method from the one used for traditional ("proprietary") library software. Evergreen is being deployed rapidly as many libraries convert from traditional ILSs. The traditional vendors have made, and continue to make, contributions to libraries; were it not for them, it can be speculated, libraries relying solely on their own in-house technical expertise would still be using card catalogs. But a new wind is blowing: for a growing number of libraries, open source software has proven economical and has given librarians more flexibility and control over their libraries' environments. For these reasons and others, many librarians are looking for an introduction to open source applications and the principles underlying their development.
Linked terms in boldface have entries in the glossary.
|Apache||Apache is an open source Web server project that develops and maintains the Web server software used by Evergreen. It is the most popular Web server software and is used throughout the Web. The project began in 1994. The Apache Software Foundation's Web site is http://httpd.apache.org/.|
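Apache is configured through plain-text directive files. As a minimal sketch, a name-based virtual host of the general shape that might sit in front of a library catalog looks like the following (the server name and paths are illustrative placeholders, not Evergreen's shipped configuration):

    <VirtualHost *:80>
        # Public address the catalog answers to
        ServerName catalog.example.org
        # Directory from which static files are served
        DocumentRoot /var/www/html
        # Where problems are recorded
        ErrorLog /var/log/apache2/error.log
    </VirtualHost>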
|Brick||In an Evergreen installation, a "brick" is a unit consisting of one or more servers. It refers to a set of servers with ejabberd, Apache, and all applicable Evergreen services. It is possible to run all the software on a single server, creating a "single server brick." Typically, larger installations will have more than one such brick and will, hence, be more robust.
Bricks vary according to local requirements, but a small brick might have one "head" (running ejabberd, Apache, and open-ils.settings) and one or more "drones," which handle general application processing.|
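Which services run on which machine in a brick is a matter of configuration. As a rough, hypothetical sketch only (the host names are placeholders, and the exact element and service names should be checked against the opensrf.xml file shipped with your Evergreen version), the <hosts> section of the OpenSRF configuration assigns services to the machines of a brick along these lines:

    <hosts>
      <!-- the "head" of the brick: also runs ejabberd and Apache -->
      <brick1-head>
        <activeapps>
          <appname>opensrf.settings</appname>
        </activeapps>
      </brick1-head>
      <!-- a "drone": general application processing -->
      <brick1-drone1>
        <activeapps>
          <appname>open-ils.circ</appname>
          <appname>open-ils.cat</appname>
        </activeapps>
      </brick1-drone1>
    </hosts>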
|The Cathedral & the Bazaar||A book written by Eric S. Raymond (O'Reilly, 1999). A must-read if you are new to open source.
Librarians who read it and hear the poetry behind the details will realize that open source communities work very much the way we do.
Wikipedia has a short discussion of the major principles of open source development in its entry on this book.
The essay is also available on the Web at Raymond's Web site, although the printed version is easier to read.|
|Cloud computing||“…Web-based processing, whereby shared resources, software, and information are provided to computers and other devices (such as smartphones) on demand over the Internet.” http://en.wikipedia.org/wiki/Cloud_computing.
ILS vendors offer hosting in which they manage the servers used by the online catalogs of libraries and provide access to those catalogs via the Internet. Terms such as Software as a Service (SaaS) refer to these kinds of services. In this sense, all such hosted solutions used by libraries are cloud computing, and Evergreen was doing cloud computing before the term became popular. As with many new things in the IT world, the definition of what constitutes cloud computing is a bit fuzzy and covers a multitude of computing arrangements.|
|commit||To make proposed changes in software code permanent. In open source software development, the ability to commit is usually limited to a core group of experienced, skilled developers.|
|community||In the open source world, there is much talk about the “community” of users and developers of the software. In open source development, the term “community” usually refers to users and developers who work in concert to develop open source software, communicating openly and collaborating on the direction of the project. Effectively, these software projects and their development courses are directed by the community through email lists, IRC channels, and an array of other communications mechanisms. As a result, anyone in the community can develop improvements to the software; improvements do not come only from a relatively small number of developers at a company that owns the software.
A robust community surrounding an open source project creates an environment similar to the one a proprietary vendor enjoys with a large customer base. Robust communities create a larger pool of resources to sponsor and contribute to projects. Such communities also mitigate risk for users of the software.
|compiled||Describes a program once it has been translated into computer language. It is usually not readable by people. Proprietary software in the library world will usually only be available in a compiled format. In contrast, open source software is normally available as source code which is downloaded and compiled by its users and modified if required. Hence, the underlying code can be read by the users of open source software and modified to suit their needs, if desired.|
|Consortial Library System||An ILS designed to run consortia. Evergreen was the first and is currently the only such system of library software. A 2008 blog post first described this type of system: Consortial Library Systems.|
|Debian||Debian is a computer operating system using the Linux kernel and tools from the GNU project. It is one of the preferred operating systems for Evergreen.|
|DIY||Do It Yourself. One of open source's advantages is, of course, that the software is free and readily available for download by anyone. As a result, Evergreen has been adopted by some libraries without paying for the software or for support; instead, library staff learn the system and install it themselves.
It does happen, but considerable technical knowledge is required to carry it off, and some of the DIY implementations have used paid support options à la carte—picking and choosing the support they need. The technical knowledge is held by people, and the people who install these systems are generally highly paid, so DIY can be a “pay me now or pay me later” proposition. However, work is going on in both the Evergreen and Koha communities to improve documentation and make the DIY process easier.
Evergreen has a complex infrastructure that sets a high bar for the knowledge required to install and support it, but it, too, has DIY implementations and is in the process of being adopted by several very large consortia whose capable IT staffs are doing DIY implementations.
|ejabberd||“ejabberd is a high performance instant messaging server. It enables users to communicate in real time and allows status and presence information to be securely and rapidly transferred between servers.” ejabberd is free software distributed under the GNU GPL. It is an application server for the eXtensible Messaging and Presence Protocol (XMPP). ejabberd is used to exchange data in Evergreen.|
|Equinox Software, Inc.||The company founded by the developers of Evergreen. Its Website is at: http://www.esilibrary.com/|
|Evergreen||The first ILS designed to handle the processing of geographically dispersed, resource-sharing library networks. It is the first Consortial Library System. Evergreen is the open source software that runs a growing number of libraries and consortia. The Wikipedia entry gives more information: http://en.wikipedia.org/wiki/Evergreen_(software). It first went live in 2006 in the Georgia PINES consortium.|
The Evergreen community Website is at: http://www.evergreen-ils.org/.
The community communicates via mailing lists, listed on the community Web site, and via Internet Relay Chat (IRC) channels; more information on both can be found there.
Evergreen blog aggregator is at Planet Evergreen: http://planet.evergreen-ils.org/
Sources of information on Evergreen history and background include numerous blog posts; Planet Evergreen is probably the best source for these posts.
|Evergreen Superconsortium||In a June 2010 blog post, Bob Molyneux and Mike Rylander discussed something each had independently noticed and, curiously, each had invented the same new word to describe: the “Superconsortium.” Briefly, they noted that consortia in the Evergreen community were banding together to do common development on Evergreen. Since then, this method of development has grown and has provided another economical means to improve Evergreen.
The original post is here: http://evergreen-ils.org/blog/?p=339.
|Evergreen Support options||Evergreen, being open source, does not have one support company which handles all support, so users are not subject to vendor lockin. A list of firms supporting Evergreen is kept on the Evergreen wiki.
Free support is also available via the community resources listed with the entry on Evergreen.
|Fork||In this context, a software development fork results when developers take a copy of the project software and begin a separate development path. Software forks are largely an aspect of software developed using open source principles and, in effect, it is what Wikipedia refers to as a “schism.” Usually a group of the developers decides to take the open code and create something new based on it. Forks can be friendly or unfriendly and can be seen as a source of strength or weakness in open source development. Schisms are seen in other aspects of human activity, such as religion.
The Jargon File argues that forks are rare and, hence, the “individual instances look large in hacker folklore.”
|FOSS or FLOSS||Free (Libre) Open Source Software. Also called “OSS”.|
|Free software||Free software is not to be confused with open source software; although the two often have similar objectives, they do not always coincide.
See the Free Software Definition at http://www.gnu.org/philosophy/free-sw.html for a discussion of the philosophy behind free software.
For a discussion of the differences in philosophy between Free software and open source software, see Why Open Source misses the point of Free Software at http://www.gnu.org/philosophy/open-source-misses-the-point.html. The Wikipedia entry for the Free software movement is at: http://en.wikipedia.org/wiki/Free_software_movement.
|FUD||Fear, Uncertainty, Doubt. Do you want to trust your library’s functions to open source software written by a bunch of tattooed, dope-smoking hippies with orange hair?
FUD, as the example above might demonstrate, has been a useful marketing tool for sellers of proprietary software but with the growth of the Evergreen and other open source communities in the library world, it has a waning influence. By instilling FUD in prospective users about the viability, robustness, and support of open source competitors, proprietary vendors have attempted to make open-source projects with hundreds of developers and excellent support seem like a casual basement project for a few hobbyist programmers.
|FulfILLment ™||FulfILLment is being developed by Equinox Software, Inc. under contract with OHIONET. It is an open source project designed to link library catalogs. When completed in about a year, it will provide library users seamless access to materials owned by libraries using FulfILLment—no matter which integrated library system his or her library uses. The project’s Website is: http://fulfillment-ill.org/.|
|Git||A distributed software revision control system now used by Evergreen. It was developed by Linus Torvalds for Linux kernel development and is free software distributed under the terms of the GNU GPL. For more information see the Wikipedia entry.|
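By way of illustration, the basic Git cycle—clone, create a branch, stage a change, commit—can be scripted. The sketch below (mine, not Evergreen documentation) drives the git command line from Python; the repository URL and file name are hypothetical.

```python
import subprocess

def git(*args):
    # Run a git command, raising CalledProcessError if it fails.
    subprocess.run(["git", *args], check=True)

git("clone", "git://example.org/evergreen-demo.git")        # hypothetical URL
git("-C", "evergreen-demo", "checkout", "-b", "fix-typo")   # new topic branch
git("-C", "evergreen-demo", "add", "README")                # stage a change
git("-C", "evergreen-demo", "commit", "-m", "Fix a typo in the README")
```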
|GNU||GNU is free software that is a “Unix-like” operating system. The GNU Project was launched in 1984. The project’s Web site is http://www.gnu.org/. More background is found here: http://www.gnu.org/gnu/gnu.html. Wikipedia’s entry: http://en.wikipedia.org/wiki/GNU.|
|GPL||The GNU General Public License is an open source license that is used by Evergreen as well as most open source applications. There are various versions of the GPL and other kinds of open source licenses. The GNU Project’s Website: http://www.gnu.org/. The discussion of its licenses: http://www.gnu.org/licenses/licenses.html. Wikipedia’s article: http://en.wikipedia.org/wiki/GPL.|
|GPLS||Georgia Public Library Service, the state library of Georgia. GPLS administers the PINES network and is where Evergreen was originally developed. Its Website is at: http://www.georgialibraries.org/.|
|ILS||Integrated Library System. Also known as a Library Management System (LMS). Wikipedia’s entry: http://en.wikipedia.org/wiki/Integrated_library_system.|
|IndexData||This firm has been active for 15 years in developing open source software to aid in indexing and searching. Evergreen uses components developed by IndexData including ZOOM, among others. Its Website is at: http://www.indexdata.com/.|
|Koha||Koha (http://www.koha-community.org/.) is another open source Integrated Library System (ILS). Koha 1.01 was released in 2000 by Horowhenua Library Trust and Katipo Communications, Ltd. Today, thousands of libraries worldwide use Koha. Unlike Evergreen which has a staff client, Koha is completely web-based.|
|LMS||Library Management System. Also known as an Integrated Library System (ILS).|
|Linux||An open source operating system used by Evergreen.
The Linux Foundation (http://www.linuxfoundation.org/) maintains Linux.com (http://www.linux.com/.) The Linux Wikipedia entry is at: http://en.wikipedia.org/wiki/Linux and that of the Linux Foundation is: http://en.wikipedia.org/wiki/Linux_Foundation.
|Load Balancing||An integral part of a robust server setup, and one used in many Evergreen installations, where an array of redundant commodity servers runs best if the workload is distributed across them.
“In networking, load balancing is a technique to distribute workload evenly across two or more computers, network links, CPUs, hard drives, or other resources, in order to get optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing—instead of a single component—may increase reliability through redundancy. The load balancing service is usually provided by a dedicated program or hardware device (such as a multilayer switch or a DNS server).”
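To make the idea concrete, here is a toy round-robin dispatcher in Python—my sketch of the general technique, not anything from Evergreen itself:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand out back-end servers in strict rotation."""

    def __init__(self, servers):
        self._rotation = cycle(servers)

    def next_server(self):
        return next(self._rotation)

balancer = RoundRobinBalancer(["app1:80", "app2:80", "app3:80"])
for request_id in range(6):
    # Each incoming request goes to the next server in the rotation.
    print(f"request {request_id} -> {balancer.next_server()}")
```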
|Memcached||“Free & open source, high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.” from: http://memcached.org/.
One factor which can greatly speed up Evergreen searches is loading the entire database into memory, so searches can be done much faster than they could be if the database had to be searched on the hard drives. Memcached is the means by which this mirroring of the database in memory is done.
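The general pattern—check the fast in-memory cache first, fall back to the slower store—looks like this in Python with the pymemcache client. This is a generic cache-aside sketch under my own naming, not Evergreen's actual caching code; note that pymemcache stores bytes/strings unless a serializer is configured.

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # memcached's default port

def fetch_record(record_id, load_from_database):
    """Cache-aside lookup: memory first, database second."""
    key = f"record:{record_id}"
    value = cache.get(key)
    if value is None:
        value = load_from_database(record_id)  # the slow path
        cache.set(key, value, expire=300)      # keep it for five minutes
    return value
```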
|migration||If you change ILS vendors, your library's data will have to be moved from one vendor's database structures to another's. Patron, transaction, and bibliographic records will have to be moved. This is normally not a process undertaken lightly. If the data are in a proprietary database, do you own your data so you can migrate them?|
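Library data usually travels as MARC records, so the first step of a migration is often just reading a vendor's export. A minimal sketch using the pymarc library (the file name is hypothetical):

```python
from pymarc import MARCReader

# Read a binary MARC export and print each record's title statement (tag 245).
with open("vendor_export.mrc", "rb") as handle:
    for record in MARCReader(handle):
        fields = record.get_fields("245")
        title = fields[0].value() if fields else "(no 245 field)"
        print(title)
```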
|OpenSRF||Open Service Request Framework (pronounced “open surf”). This is the software architecture at the core of Evergreen and the FulfILLment consortial borrowing platform. Invented by the developers of Evergreen, OpenSRF provides transparent load balancing, high-availability and abstraction features to applications, allowing developers to focus on functionality instead of infrastructure.|
|open source||Open source is a number of things: a class of licenses, a culture, a community, and a way of producing and sharing software. It is not to be confused with free software, although the two movements share many objectives.
In these senses, it is normally distinguished from proprietary licenses or software. Software produced by this method is released under an open source license like the GPL, and the source code is freely available. There are a number of open source licenses. Generally, these licenses permit users to adapt, make changes, and improve software. The GPL, used by Evergreen, is a bit stricter than some other open source licenses and, among other things, requires that adapted software also be released under a GPL license.
Open source is relatively new to the library world. One normally speaks of the alternative proprietary vendors as “legacy” or “traditional” vendors.
Additional sources of information on open source software in libraries:
Engard, Nicole, Practical Open Source Software for Libraries, (Chandos Publishing, 2010.)
|open source software advantages:||
|open source software disadvantages:||
|OSS||open source software|
|OSS4lib||Website that maintains a listing of free software and systems designed for libraries but is broader than the ILS/LMS focus of this glossary. It was started in 1999. Its Website is at: http://www.oss4lib.org/|
|Perl||“Perl is a highly capable, feature-rich programming language with over 22 years of development.” (http://www.perl.org/.) Perl is used extensively in the Evergreen community.
From Wikipedia’s entry:
“Perl is a general-purpose programming language originally developed for text manipulation, but as of 2010 used for a wide range of tasks including system administration, web development, network programming, games, bioinformatics, and GUI development.”
|PINES||The Georgia statewide public library resource sharing network. It currently has 51 systems and 280 library outlets. It was the first system to use Evergreen. The PINES catalog searches the largest installation of Evergreen with a 10 million item collection. In 2010, PINES circulated 19 million items.
The PINES web home is at http://pines.georgialibraries.org/.
Sources of information on PINES's history and background: the short history 10 Years of PINES provides a retrospective on PINES.
|PostgreSQL||PostgreSQL, commonly shortened to “Postgres,” is a powerful, open source relational database system that is used in Evergreen. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. To learn more about PostgreSQL visit http://www.postgresql.org/.
Wikipedia’s entry: http://en.wikipedia.org/wiki/PostgreSQL
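Because Postgres speaks standard SQL, any client library can query the database directly; here is a sketch using Python's psycopg2 driver. The connection parameters and table name are illustrative only, not Evergreen's actual schema.

```python
import psycopg2

conn = psycopg2.connect(dbname="evergreen", user="evergreen", host="localhost")
try:
    with conn.cursor() as cur:
        # Parameterized query: the driver handles quoting safely.
        cur.execute(
            "SELECT id, barcode FROM copies WHERE barcode = %s",
            ("31234000123456",),
        )
        for copy_id, barcode in cur.fetchall():
            print(copy_id, barcode)
finally:
    conn.close()
```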
|proprietary software||A method for producing software that is normally distinguished from open source software. Proprietary software is not normally distributed as source code but as compiled programs, so that one cannot see what the code does. It is normally supported only by the company that produced it, which can lead to vendor lockin. Since users cannot see the code, they cannot easily make permanent improvements or changes to it and have to wait for the next release.|
|proprietary software advantages:||
|proprietary software disadvantages:||
|Red Hat||Red Hat is a company, founded in 1993, which supports a major Linux distribution. Red Hat's Web site is at http://www.redhat.com/ and its Wikipedia entry is:
http://en.wikipedia.org/wiki/Red_Hat. Evergreen has been supported on the Red Hat distribution in the past, but as of May 2012 we need contributors to test and support installation and configuration of current versions of Evergreen on current versions of Red Hat.
|Repository||An online archive for open source software where current and past versions of the software can be found. Popular repositories include SourceForge and Freshmeat.|
|Service Oriented Architecture||A software architecture based on a collection of loosely-coupled, distributed services which communicate and interoperate via agreed standards. OpenSRF is an example of Service Oriented Architecture.|
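The core idea—services located by name and addressed only through an agreed message format—can be sketched in a few lines of Python. This is a concept demonstration of loose coupling, not the OpenSRF API:

```python
import json

SERVICES = {}  # a registry of loosely coupled services, reachable by name

def service(name):
    def register(handler):
        SERVICES[name] = handler
        return handler
    return register

@service("search")
def search(params):
    titles = ["Moby Dick", "Dubliners", "Middlemarch"]
    return {"hits": [t for t in titles if params["term"].lower() in t.lower()]}

def call(name, payload):
    # Caller and service share only the JSON message format, not code.
    response = SERVICES[name](json.loads(payload))
    return json.dumps(response)

print(call("search", json.dumps({"term": "moby"})))  # {"hits": ["Moby Dick"]}
```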
|source code||“...text written in a computer programming language.”
Source code is written in a human-readable language by software programmers or developers. Before it can be run on computers, it must be compiled into language that these computers can read. Wikipedia has more (http://en.wikipedia.org/wiki/Source_code) including related links.
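A tiny, runnable illustration of the source/compiled distinction, using Python's own byte-compiler; the file name is arbitrary:

```python
import pathlib
import py_compile

# Write some human-readable source code ...
pathlib.Path("hello.py").write_text('print("Hello from source code")\n')

# ... and compile it to a bytecode file that the interpreter runs
# but people are not meant to read.
bytecode_path = py_compile.compile("hello.py")
print("source:", "hello.py")
print("compiled:", bytecode_path)
```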
|Staff Client||The term "staff client" refers to the relationship of two pieces of software in a client-server architecture. The "staff client" software communicates with the server software; the client software is generally the software loaded on the end user's machine in order to run the program. With Evergreen, only the staff interface requires a "staff client." The public interface for Evergreen is commonly referred to as the OPAC and is completely web-based.|
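Since the OPAC is plain web, any HTTP client can fetch it. For example, with Python's standard library (the host and path below are made up, not a real catalog):

```python
import urllib.request

url = "http://opac.example.org/eg/opac/results?query=darwin"
with urllib.request.urlopen(url) as response:
    page = response.read().decode("utf-8", errors="replace")
print(page[:200])  # the first few hundred characters of the HTML
```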
|Turnkey||Of software, an application or suite of applications that a vendor sets up and all you have to do is turn the key and you are in business.|
|Ubuntu||Ubuntu is a Linux distribution originally begun by a team of developers who worked on the Debian project. The two projects are related. There are Evergreen installations running Ubuntu in production.|
|Vaporware||Software that does not exist…but has been promised.|
|Vendor lockin||If you buy from a proprietary vendor, it is protected from competition for your business by, among other things, the cost and difficulty of migrating away from its system (see migration).|
Thanks to Anoop Atre, Galen Charlton, Jason Etheridge, Nicole Engard, Rogan Hamby, and Glen Holt for many helpful suggestions. They are not, of course, responsible for any errors. | <urn:uuid:70a5f179-93fa-463a-88d9-4413092c81b1> | CC-MAIN-2021-21 | https://wiki.evergreen-ils.org/doku.php?id=zzz:evergreen_and_open_source_glossary | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989819.92/warc/CC-MAIN-20210518094809-20210518124809-00336.warc.gz | en | 0.911975 | 4,638 | 3.1875 | 3 |
|Born||Saul David Alinsky, January 30, 1909, Chicago, Illinois, U.S.|
|Died||June 12, 1972, Carmel-by-the-Sea, California, U.S.|
|Cause of death||Heart attack|
|Education||University of Chicago, Ph.B. 1930; U. of Chicago Graduate School, criminology, 1930–1932|
|Occupation||Community organizer, writer, political activist|
|Known for||Political activism, writing, community organization|
|Notable work||Reveille for Radicals (1946); Rules for Radicals (1971)|
|Spouse(s)||Helene Simon of Philadelphia (m. June 9, 1932 – her death); Jean Graham (May 15, 1952 – 1970; divorced); Irene McInnis Alinsky (m. May 1971)|
|Children||Katherine and David (by Helene)|
|Awards||Pacem in Terris Award, 1969|
Saul David Alinsky (January 30, 1909 – June 12, 1972) was a Jewish American community organizer and writer. He is generally considered to be the founder of modern community organizing. He is often noted for his book Rules for Radicals.
In the course of nearly four decades of political organizing, Alinsky received much criticism, but also gained praise from many public figures. His organizing skills were focused on improving the living conditions of poor communities across North America. In the 1950s, he began turning his attention to improving conditions in the African-American ghettos, beginning with Chicago's and later traveling to other ghettos in California, Michigan, New York City, and a dozen other "trouble spots".
His ideas were adapted in the 1960s by some U.S. college students and other young counterculture-era organizers, who used them as part of their strategies for organizing on campus and beyond. Time magazine once wrote that "American democracy is being altered by Alinsky's ideas," and conservative author William F. Buckley said he was "very close to being an organizational genius".
Saul David Alinsky was born in Chicago, Illinois in 1909 to Russian Jewish immigrant parents, the only surviving son of Benjamin Alinsky's marriage to his second wife, Sarah Tannenbaum Alinsky. Alinsky stated during an interview that his parents never became involved in the "new socialist movement." He added that they were "strict Orthodox, their whole life revolved around work and synagogue ... I remember as a kid being told how important it was to study."
Because of his strict Jewish upbringing, he was asked whether he ever encountered antisemitism while growing up in Chicago. He replied, "it was so pervasive you didn't really even think about it; you just accepted it as a fact of life." He considered himself to be a devout Jew until the age of 12, after which time he began to fear that his parents would force him to become a rabbi.
I went through some pretty rapid withdrawal symptoms and kicked the habit ... But I'll tell you one thing about religious identity...Whenever anyone asks me my religion, I always say—and always will say—Jewish.
He worked his way through the University of Chicago, where he majored in archaeology, a subject that fascinated him. His plans to become a professional archaeologist were changed due to the ongoing economic Depression. He later stated, "Archaeologists were in about as much demand as horses and buggies. All the guys who funded the field trips were being scraped off Wall Street sidewalks."
After attending two years of graduate school, he accepted work for the state of Illinois as a criminologist. On a part-time basis, he also began working as an organizer with the Congress of Industrial Organizations (CIO). By 1939, he became less active in the labor movement and became more active in general community organizing, starting with the Back of the Yards and other poor areas on the South Side of Chicago. His early efforts to "turn scattered, voiceless discontent into a united protest" earned the admiration of Illinois governor Adlai Stevenson, who said Alinsky's aims "most faithfully reflect our ideals of brotherhood, tolerance, charity and dignity of the individual."
As a result of his efforts and success at helping slum communities, Alinsky spent the next 10 years repeating his organization work across the nation, "from Kansas City and Detroit to the barrios of Southern California." By 1950 he turned his attention to the black ghettos of Chicago. His actions aroused the ire of Mayor Richard J. Daley, who also acknowledged that "Alinsky loves Chicago the same as I do." He traveled to California at the request of the San Francisco Bay Area Presbyterian Churches to help organize the black ghetto in Oakland. Hearing of his plans, "the panic-stricken Oakland City Council promptly introduced a resolution banning him from the city."
Community organizing and politics
In the 1930s, Alinsky organized the Back of the Yards neighborhood in Chicago (made infamous by Upton Sinclair's 1906 novel, The Jungle, which described the horrific working conditions in the Union Stock Yards). He went on to found the Industrial Areas Foundation while organizing the Woodlawn neighborhood; IAF trained organizers and assisted in the founding of community organizations around the country.
In Rules for Radicals (his final work, published in 1971 one year before his death), Alinsky addressed the 1960s generation of radicals, outlining his views on organizing for mass power. In the opening paragraph Alinsky writes,
"What follows is for those who want to change the world from what it is to what they believe it should be. The Prince was written by Machiavelli for the Haves on how to hold power. Rules for Radicals is written for the Have-Nots on how to take it away."
Alinsky did not join political parties. When asked during an interview whether he ever considered becoming a Communist party member, he replied:
Not at any time. I've never joined any organization—not even the ones I've organized myself. I prize my own independence too much. And philosophically, I could never accept any rigid dogma or ideology, whether it's Christianity or Marxism. One of the most important things in life is what Judge Learned Hand described as 'that ever-gnawing inner doubt as to whether you're right.' If you don't have that, if you think you've got an inside track to absolute truth, you become doctrinaire, humorless and intellectually constipated. The greatest crimes in history have been perpetrated by such religious and political and racial fanatics, from the persecutions of the Inquisition on down to Communist purges and Nazi genocide.
He did not have much respect for mainstream political leaders who tried to interfere with growing black–white unity during the difficult years of the Great Depression. In Alinsky's view, new voices and new values were being heard in the U.S., and "people began citing John Donne's 'No man is an island.'" He observed that the hardship affecting all classes of the population was causing them to start "banding together to improve their lives," and discovering how much in common they really had with their fellow man.
Alinsky once explained that his reasons for organizing in black communities included:
- Negroes were being lynched regularly in the South as the first stirrings of black opposition began to be felt, and many of the white civil rights organizers and labor agitators who had started to work with them were tarred and feathered, castrated—or killed. Most Southern politicians were members of the Ku Klux Klan and had no compunction about boasting of it.
Alinsky's tactics were often unorthodox. In Rules for Radicals he wrote,
"[t]he job of the organizer is to maneuver and bait the establishment so that it will publicly attack him as a 'dangerous enemy.'" According to Alinsky, "the hysterical instant reaction of the establishment [will] not only validate [the organizer's] credentials of competency but also ensure automatic popular invitation."
As an example, after organizing FIGHT (an acronym for Freedom, Independence [subsequently Integration], God, Honor, Today) in Rochester, New York, Alinsky once threatened to stage a "fart in" to disrupt the sensibilities of the city's establishment at a Rochester Philharmonic concert. FIGHT members were to consume large quantities of baked beans after which, according to author Nicholas von Hoffman, "FIGHT's increasingly gaseous music-loving members would tie themselves to the concert hall where they would sit expelling gaseous vapors with such noisy velocity as to compete with the woodwinds." Satisfied with his threat yielding action, Alinsky later threatened a "piss in" at Chicago O'Hare Airport. Alinsky planned to arrange for large numbers of well-dressed African Americans to occupy the urinals and toilets at O'Hare for as long as it took to bring the city to the bargaining table. According to Alinsky, once again the threat alone was sufficient to produce results. In Rules for Radicals, he notes that this tactic fell under two of his rules: Rule #3: Wherever possible, go outside the experience of the enemy; and Rule #4: Ridicule is man's most potent weapon.
Alinsky described his plans for 1972 to begin to organize the white middle class across the United States, and the necessity of that project. He believed that what President Richard Nixon and Vice-President Spiro Agnew then called "The Silent Majority" was living in frustration and despair, worried about their future, and ripe for a turn to radical social change, to become politically active citizens. He feared the middle class could be driven to a right-wing viewpoint, "making them ripe for the plucking by some guy on horseback promising a return to the vanished verities of yesterday." His stated motive: "I love this goddamn country, and we're going to take it back."
Legacy and honors
The documentary, The Democratic Promise: Saul Alinsky and His Legacy, states that "Alinsky championed new ways to organize the poor and powerless that created a backyard revolution in cities across America." Based on his organizing in Chicago, Alinsky formed the Industrial Areas Foundation (IAF) in 1940. After he died, Edward T. Chambers became its Executive Director. Hundreds of professional community and labor organizers, and thousands of community and labor leaders, have been trained at its workshops. Fred Ross, who worked for Alinsky, was the principal mentor for Cesar Chavez and Dolores Huerta. Other organizations following in the tradition of the Congregation-based Community Organizing pioneered by IAF include the PICO National Network, the Gamaliel Foundation, Brooklyn Ecumenical Cooperatives (founded by former IAF trainer Richard Harmon), and the Direct Action and Research Training Center (DART).
Several prominent American leaders have been influenced by Alinsky's teachings, including Ed Chambers, Tom Gaudette, Ernesto Cortes, Michael Gecan, Wade Rathke, and Patrick Crowley. Alinsky is often credited with laying the foundation for the grassroots political organizing that dominated the 1960s. Jack Newfield, writing in New York magazine, included Alinsky among "the purest Avatars of the populist movement," along with Ralph Nader, Cesar Chavez, and Jesse Jackson.
In 1969, while a political science major at Wellesley College, Hillary Rodham Clinton chose to write her senior thesis on Alinsky's work, with Alinsky himself contributing his own time to help her. During her time as first lady, the thesis was not made publicly available by the school. Although Clinton defended Alinsky's intentions in her thesis, she was critical of his methods and dogmatism.
According to biographer Sanford Horwitt, U.S. President Barack Obama was influenced by Alinsky and followed in his footsteps as a Chicago-based community organizer. Horwitt asserted that Barack Obama's 2008 presidential campaign was influenced by Alinsky's teachings. Alinsky's influence on Obama has been heavily emphasized by some of his detractors, such as Rush Limbaugh and Glenn Beck. Thomas Sugrue of Salon.com writes, "as with all conspiracy theories, the Alinsky-Obama link rests on a kernel of truth". For three years in the mid-1980s, Obama worked for the Developing Communities Project, which was influenced by Alinsky's work, and he wrote an essay that was collected in a book memorializing Alinsky. Newt Gingrich repeatedly stated his opinion that Alinsky was a major influence on Obama during his 2012 presidential campaign, equating Alinsky with "European Socialism", although Alinsky was U.S.-born and was not a Socialist. Gingrich's campaign itself used tactics described in Alinsky's writing.
Adam Brandon, a spokesman for the conservative non-profit organization FreedomWorks, one of several groups involved in organizing Tea Party protests, says the group gives Alinsky's Rules for Radicals to its top leadership members. A shortened guide called Rules for Patriots is distributed to its entire network. In a January 2012 story that appeared in The Wall Street Journal, citing the organization's tactic of sending activists to town-hall meetings, Brandon explained, "his [Alinsky's] tactics when it comes to grass-roots organizing are incredibly effective." Former Republican House Majority Leader Dick Armey also gives copies of Alinsky's book Rules for Radicals to Tea Party leaders.
Alinsky died at the age of 63 of a sudden, massive heart attack in 1972, on a street corner in Carmel, California. Two months previously, he had discussed life after death in his interview with Playboy:
- ALINSKY: ... if there is an afterlife, and I have anything to say about it, I will unreservedly choose to go to hell.
- PLAYBOY: Why?
- ALINSKY: Hell would be heaven for me. All my life I've been with the have-nots. Over here, if you're a have-not, you're short of dough. If you're a have-not in hell, you're short of virtue. Once I get into hell, I'll start organizing the have-nots over there.
- PLAYBOY: Why them?
- ALINSKY: They're my kind of people.
- Community organizing
- Community development
- Community education
- Community practice
- Community psychology
- Critical Psychology
- Grassroots organizing
- Organization Workshop
- Reveille for Radicals, Chicago: University of Chicago Press, 1946.
- John L. Lewis: An Unauthorized Biography. New York: Putnam, 1949.
- Rules for Radicals: A Pragmatic Primer for Realistic Radicals. New York: Random House, 1971.
- The Philosopher and the Provocateur: The Correspondence of Jacques Maritain and Saul Alinsky. Bernard E Doering (ed.). Notre Dame, IN: University of Notre Dame Press, 1994.
- "Saul David Alinsky". Dictionary of American Biography. New York: Charles Scribner's Sons. 1994. Gale Document Number: BT2310018941. Retrieved September 7, 2011 – via Fairfax County Public Library.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>(subscription required) Gale Biography in Context.
- "Saul David Alinsky Collection". Hartford, Connecticut: The Watkinson Library, Trinity College. Retrieved September 7, 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Brooks, David (March 4, 2010). "The Wal-Mart Hippies". New York Times. Retrieved September 8, 2010.
Dick Armey, one of the spokesmen for the Tea Party movement, recently praised the methods of Saul Alinsky, the leading tactician of the New Left.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Playboy Interview: Saul Alinsky". Playboy Magazine. March 1972.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Alinsky, Saul David (Fee). New Catholic Encyclopedia (2nd ed.). The Catholic University of America via Gale. 2003.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles> 15 vols.
- Horwitt, Sanford D. (1989). Let them call me rebel: Saul Alinsky, his life and legacy. New York: Alfred A. Knopf. pp. 3–9. ISBN 0-394-57243-2.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Nicholas Von Hoffman (2010). Radical: A Portrait of Saul Alinsky. Nation Books. pp. 108–109. ISBN 9781568586250.
He passed the word in the Back of the Yards that this Jewish agnostic was okay, which at least ensured that he would not be kicked out the door.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Charles E. Curran (2011). The Social Mission of the U.S. Catholic Church: A Theological Perspective. Georgetown University Press. p. 32. ISBN 9781589017436.
Saul D. Alinsky, an agnostic Jew, organized the Back of the Yards neighborhood in Chicago in the late 1930s and started the Industrial Areas Foundation in 1940 to promote community organizations and to train community organizers.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Deal Wyatt Hudson (1987). Deal Wyatt Hudson, Matthew J. Mancini (ed.). Understanding Maritain: Philosopher and Friend. Mercer University Press. p. 40. ISBN 9780865542792.
Saul Alinsky was an agnostic Jew for whom religion of any kind held very little importance and just as little relation to the focus of his life's work: the struggle for economic and social justice, for human dignity and human rights, and for the alleviation of the sufferings of the poor and downtrodden.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Alinsky, Saul. Rules for Radicals.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Philip Klein (25 January 2012), "A Saul Alinsky Republican?", Washington Examiner
- Hill, Laura Warren. "Rochester Black Freedom Struggle Online Project: Oral Histories". University of Rochester Libraries.
- Nicholas von Hoffman, Radical: A Portrait of Saul Alinsky Nation Books, 2010 p. 83-4
- "The Democratic Promise: Saul Alinsky and His Legacy". Itvs.org. July 14, 1939. Retrieved February 26, 2009.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Dick Meister, "A Trailblazing Organizer's Organizer"
- Slevin, Peter (March 25, 2007). "For Clinton and Obama, a Common Ideological Touchstone". The Washington Post.
- Siegel, Robert; Horwitt, Sanford (May 21, 2007). "NPR Democrats and the Legacy of Activist Saul Alinsky". All Things Considered. Npr.org. Retrieved September 8, 2011. "Robert Siegel talks to author Sanford Horwitt, who wrote a biography of Saul Alinsky called Let Them Call Me 'Rebel'. The book traces Alinsky's early activism in Chicago's meatpacking neighborhood."
- Flora, Cornelia Butler; Flora, Jan L.; Fey, Susan. Rural Communities. Westview Press. p. 335. Retrieved February 26, 2009.
- Jerzyk, Matt (February 21, 2009). "Rhode Island's Future". Rifuture.org. Retrieved February 26, 2009.
- Newfield, Jack (July 19, 1971). "A Populist Manifesto: The Making of a New Majority". books.google.com. New York Magazine. p. 46.
- Sugrue, Thomas (January 30, 2009). "Saul Alinsky: The activist who terrifies the right". Salon. Retrieved February 7, 2012.
- Cockburn, Alexander; St. Clair, Jeffrey (April 13, 2015). "The Making of Hillary Clinton". CounterPunch. Retrieved April 12, 2015.
- Levenson, Michael (March 4, 2007). "A student's words, a candidate's struggle: In 1969 thesis, Clinton tackled radicalism tag". Boston Globe. Retrieved April 14, 2015.
- Cohen, Alex; Horwitt, Sanford (January 30, 2009). "Saul Alinsky, The Man Who Inspired Obama". Day to Day. NPR. Retrieved April 17, 2011. About his book Let Them Call Me Rebel: Saul Alinsky: His Life and Legacy.
- Obama, Barack (1988). "Problems and promise in the inner city". Illinois Issues. Retrieved April 16, 2015.
- Moyers, Bill; Winship, Michael (February 6, 2012). "The truth about Newt's favorite punching bag". Salon. Retrieved April 16, 2015.
- Knickerbocker, Brad (January 28, 2012). "Who is Saul Alinsky, and why is Newt Gingrich so obsessed with him?". Christian Science Monitor. Retrieved April 16, 2015.
- Williamson, Elizabeth (January 23, 2012). "Two Ways to Play the 'Alinsky' Card". The Wall Street Journal. Retrieved January 26, 2011.
- Kazin, Michael (January 25, 2012). "Saul Alinsky Wasn't Who Newt Gingrich Thinks He Was". New Republic. Retrieved April 16, 2015.
- P. David Finks, The Radical Vision of Saul Alinsky. New York : Paulist Press, 1984.
- Sanford D. Horwitt, Let Them Call Me Rebel: Saul Alinsky: His Life and Legacy. New York: Alfred A. Knopf, 1989.
- Frank Riessman, "The Myth of Saul Alinsky," Dissent, vol. 14, no. 4, whole no. 59 (July–Aug. 1967), pp. 469–478.
- Marion K. Sanders, The Professional Radical: Conversations with Saul Alinsky. New York: Harper & Row, 1970.
- Herb Schapiro, The Love Song of Saul Alinsky. New York: Samuel French, 2007. —Play.
- Aaron Schutz and Mike Miller, eds., People Power: The Saul Alinsky Tradition of Community Organizing. (Nashville: Vanderbilt University Press, 2015). ISBN 978-0-8265-2041-8
- Nicholas von Hoffman, Radical: A Portrait of Saul Alinsky. New York: Nation Books, 2010.
- Bruce Orenstein (co-producer), The Democratic Promise: Saul Alinsky and His Legacy, Chicago Video Project, 1999.
|Wikiquote has quotations related to: Saul Alinsky|
- Works by or about Saul Alinsky in libraries (WorldCat catalog)
- Saul Alinsky collected news and commentary at The Wall Street Journal
- Democratic Promise, a documentary about Alinsky and his legacy
- Encounter with Saul Alinsky, National Film Board of Canada documentary
- Saul Alinsky, The qualities of an organizer (1971)
- Santow, Mark Edward (January 1, 2000). Saul Alinsky and the dilemmas of race in the post-war city (dissertation abstract).
- Behrent, Michael C. (June 10, 2008). "Saul Alinsky, la campagne présidentielle et l'histoire de la gauche américaine" [Saul Alinsky, the presidential campaign, and the history of the American left] (in French). La Vie des Idées. Retrieved September 8, 2011.
- Saul Alinsky's FBI files, hosted at the Internet Archive: part 1, part 2 | <urn:uuid:961fe6e3-b072-4e8c-9f98-48465b4e9c53> | CC-MAIN-2021-21 | https://infogalactic.com/info/Saul_Alinsky | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00532.warc.gz | en | 0.91686 | 5,410 | 2.71875 | 3 |
This is the Bibliography "I" page for authors' surnames beginning with "I" which I may refer to in my book outline, "Problems of Evolution."
[Left: William Irvine's, "Apes, Angels & Victorians: The Story of Darwin, Huxley, and Evolution" (1955).]
© Stephen E. Jones, BSc. (Biology)
Ingram, J., 2001, "The Barmaid's Brain: And Other Strange Tales from Science," Aurum Press: London, Reprinted, 2005.
Inwood, B. & Gerson, L.P., eds, 1994, "The Epicurus Reader," Hackett Publishing Co: Indianapolis IN.
Irvine, W., 1955, "Apes, Angels and Victorians: The Story of Darwin, Huxley, and Evolution," McGraw-Hill: New York NY.
Isaac, B., ed., 1990, "The Archaeology of Human Origins: Papers by Glynn Isaac," Cambridge University Press: Cambridge UK.
Isaacs, A., Daintith, J. & Martin, E., eds, 1991, "Concise Science Dictionary," Oxford University Press: Oxford, Second edition.
PS: See `tagline' quotes below, from these above books. I do not necessarily agree with everything that I emphasise (bold).
"Scientists spend a lot of time pointing out the similarities between us and our closest relatives, the chimpanzees. There are two species of chimp, the common chimp that Jane Goodall has studied for decades and the lesser known bonobo, or pygmy chimp. Both are genetically very similar to humans (the common chimp closer), so much so that some scientists think of humans as just another chimp (see Jared Diamond's The Third Chimpanzee). But let's face it: there is world of difference between a human and a chimp. The most obvious is mental, notwithstanding the linguistic achievements of chimpanzees like Kanzi, the chimp trained by Sue Savage-Rumbaugh who apparently understands complicated English sentences (there are even structures in the chimp brain that hint at some sort of organization for language) or the reasoning exhibited by chimps who are smart enough to pile up boxes to reach bananas suspended from the ceiling. They're smart, but they're not Homo sapiens smart. And the difference between us and the chimps is more than just mental: physically and developmentally we're completely different animals. And yet there's that genetic similarity-the genes of the two species are more than 98 per cent identical." (Ingram, J., "The Barmaid's Brain: And Other Strange Tales from Science, London : Aurum, 2001, pp.105-106. Emphasis original).
"My favourite inhabitant of that [ultramicroscopic] world is a virus, but not one that preys on human beings. They too are marvellous, but the virus that first captured my imagination-and still holds it-was something called a bacteriophage, a `bacteria-eater.' Now simply called phage ... the existence of these specialized parasites was first deduced early in the twentieth century. They were not even seen; their presence was inferred. ... Felix d'Herelle, the pioneer of the research, could only have seen a bacteriophage near the end of his life-the first photos using the electron microscope were taken in the early 1940s. ... seeing a phage is a revelatory experience, not only confirming the portrait painted by the biochemistry (the criminal suspect turns out to look just like the artist's composite drawing) but also reinforcing the idea that nature is endlessly inventive-and savage." (Ingram, 2001, pp.201-202).
"There are many bacteriophages, one or more for every kind of bacterium. They have been studied, not so much because they are interesting in and of themselves, but because they are relatively simple objects that can shed light on how genes work. The one that is probably the most intensively studied in a virus called T4 that parasitizes E. coli, the bacterium with the misfortune of being known mostly for its association with human feces-water quality tests search for the presence of coliform bacteria as an index of exposure of that water to human waste." (Ingram, 2001, p.202)
"Under ideal laboratory conditions an E. coli cell can divide every twenty minutes. Obviously, as has been pointed out many times before, that can't possibly be happening in the natural habitat (your intestine) or the earth would be swamped by these bacteria in a couple of days. Nonetheless coliform bacteria represent a highly evolved, incredibly efficient life form; thus any organism that would target it must be highly evolved as well, and the T4 bacteriophage fits the bill. In fact it is speculated that T4 probably appeared on the planet shortly after its bacterial hosts, which puts its arrival at something like three and a half billion years ago. Its modus operandi substantiates the view that it is anything but primitive." (Ingram, 2001, pp.202-203)
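A quick back-of-the-envelope check of that claim (my arithmetic, not Ingram's), assuming one division every twenty minutes and a cell mass of roughly $10^{-12}$ g:

```latex
% 72 doublings per day at 20 minutes each
N_{24\,\mathrm{h}} = 2^{72} \approx 4.7\times10^{21}\ \text{cells}
  \;\Rightarrow\; \approx 4.7\times10^{9}\ \mathrm{g}
% after two days
N_{48\,\mathrm{h}} = 2^{144} \approx 2.2\times10^{43}\ \text{cells}
  \;\Rightarrow\; \approx 2.2\times10^{31}\ \mathrm{g}
  \;>\; M_{\oplus} \approx 6\times10^{27}\ \mathrm{g}
```

So unchecked twenty-minute doubling would indeed out-mass the Earth within about two days, as Ingram says.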
"A T4 phage looks a little like the Apollo lunar lander.
It has a geometric head, a tail, and a set of tail fibres that spread out and attach to the surface of the bacterium. In function, however, it is more like a completely self-contained robotic spacecraft - fully preprogrammed.
The manufactured appearance-the unlifelike symmetry-is surprising at first look, probably because we think of microscopic infectors as tiny worms, or even miasmic gases, concepts left over from centuries ago. But the forces that dominate this world (where objects are millionths of a metre in size or less) are powerful short-range chemical bonds, and structures are nakedly molecular. For instance, a molecule will attract or repel others depending on the haze of electric charge surrounding its projections or the shape and orientation of tiny crevasses on its surface. A second molecule might fit like a hand in a glove or it might never make contact. This isn't to say that life in our world isn't dictated by the same kind of chemistry-it is. But other forces, especially gravity, play a dominant role. In the world of the phage, chemistry is it." (Ingram, 2001, p.204)
"In the absence of prey, the T4 phage simply drifts with the tide-it is not capable of seeking out E. coli. In drift mode the tail fibres are stowed, pinned up alongside the tail. However, when the virus comes into contact with the surface of the bacterial cell, the tail fibres immediately swing down and spread out, and are the first parts of T4 actually to touch the E. coli cell. They will attach wherever they contact a specific receptor molecule that's part of the external coat of the bacterium. However, the bond between one tail fibre and its receptor is weak, too weak to anchor the virus. There are six such fibres and at least three must make contact before capture is complete. That doesn't happen immediately because the receptors are distributed across the surface of the bacterium like occasional repeating tiles in a mosaic. This is the first step of what phage scientists call the `phage mating dance.' T4 walks across the surface of its intended victim, tail fibres attaching, then detaching, until finally it makes sufficient, and permanent, contact." (Ingram, 2001, pp.204-205)
"Once anchored, a remarkable series of events ensues. The virus adjusts its position so that the tail is positioned over a thin portion of the surface of the bacterium. Tail fibres attached to the flat base plate of the tail extend and pin the virus down (no escape now) and suddenly the base plate itself mysteriously changes shape from hexagonal to star-shaped. This triggers a rearrangement of the molecules of the outer sheath of the tail; the sheath contracts, the tail fibres bend and the virus is pulled down closer to the cell surface. The core of the tail actually penetrates partway through the multilayered outer envelope of the bacterium, an event likely made easier by enzymes in the base plate that chop up some of the surface molecules in that envelope." (Ingram, 2001, p.205)
"Now the head of the virus sits just above the cell. The head is a rigid hollow case in the shape of an icosahedron, a regular twenty-sided geometric figure. It contains the genes of the phage, more than one hundred and fifty of them, all linked together in one long thread of DNA. Long of course is a relative term, but the phage DNA, stretched out, would measure several hundred times the dimensions of the head. No one is yet sure exactly how that much DNA is packed into that tiny space, a space made tinier by the fact that special packing molecules are stuffed in there as well. But at this point in the phage mating dance, the DNA isn't going to be locked inside the head much longer." (Ingram, 2001, pp.205-206)
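Ingram's "several hundred times" figure checks out against standard textbook values (my estimate, not the book's): the T4 genome is about $1.7\times10^{5}$ base pairs at 0.34 nm per pair, and the head is on the order of 100 nm across:

```latex
L_{\mathrm{DNA}} \approx 1.7\times10^{5}\ \mathrm{bp}\times 0.34\ \mathrm{nm/bp}
  \approx 5.8\times10^{4}\ \mathrm{nm} \approx 58\ \mu\mathrm{m}
\qquad
\frac{L_{\mathrm{DNA}}}{d_{\mathrm{head}}}
  \approx \frac{5.8\times10^{4}\ \mathrm{nm}}{10^{2}\ \mathrm{nm}} \approx 600
```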
"When the hexagonal base plate changed its shape, it opened up a channel wide enough for a single DNA double helix to pass through. Now the huge string of phage DNA, its entire genome, snakes its way through the tail, through the bacterial surface envelopes, the rigid cell wall and into the interior. It's all over in less than a minute, this process that some researchers have likened to throwing a potful of spaghetti-one enormous strand-into a colander and having the end of that strand find its way through a hole and then feed itself through completely. The energy to do that has to come from somewhere, but it's not yet clear where. One thing is certain: once the phage DNA has entered the E. coli cell the poor bacterium is not long for this world. And it is about to suffer the indignity of contributing through its death to the multiplication of the phage." (Ingram, 2001, p.206)
"It's simple really. Among the hundred-and-fifty-plus genes in the phage DNA are those that direct (through the molecules they make) the shutdown of almost all E. coli activities. However, the cellular machinery formerly used to make E. coli membranes, enzymes, structural protein molecules-the machinery that maintained the bacterium's pulse of life-remains unscathed and is instantly converted to creating new phages. The now commandeered bacterial cell becomes a factory floor for phage parts. As the minutes tick by scaffolds for building new heads appear here, tail fibres there, baseplates over here. It might appear simple, but in fact some of these parts are composed of several different kinds of molecules. David Coombs, a phage biologist at the University of New Brunswick, has called the base plate alone `one of the most challenging biological structures ever studied in molecular detail.' Some phage parts spontaneously self-assemble from their components, but others must be engineered together under the guidance of yet more molecules." (Ingram, 2001, pp.206-207)
"A hint of the subtlety of engineering involved can be seen in the manufacture of new phage DNA. Naturally it's assembled using the machinery that E. coli used to make its own DNA. But what is it made out of? Pieces of E. coli DNA that were disorganized, then dismembered, mere minutes after the phage gained access to the interior of the cell. The phage manages to scavenge about twenty viruses' worth of DNA from host DNA. But the phage DNA is different in one important respect: one of the four DNA subunits is decorated with small molecules that identify it as uniquely phage. It's suspected this protects the phage DNA from enzymes inside the cell that normally attack and destroy any pieces of foreign DNA that they happen upon. It may even protect the intruder's DNA from its own DNA-destroying chemicals. Because such recognition is a molecular touch-and-feel sort of process, DNA with these unusual decorations escapes." (Ingram, 2001, p.207)
"Assembly continues in an ordered but rapid fashion. Fully mature heads are built around head scaffolds (which are then discarded), then stuffed with a complete set of genes. Tail fibres bond to base plates, tail cores to sheaths, base plates to tails, and before the half hour is out hundreds of new phages are ready to be released. One final enzyme is manufactured which chews away the bacterial envelope from the inside and the progeny viruses escape to begin the routine all over again." (Ingram, 2001, p.207)
"How do any E. coli survive in the face of such diabolical evolutionary design? They might come up with alterations to the receptors that the tail fibres recognize, which would literally make them `invisible' to the phage, but there's good evidence that the phages can simply respond by altering their tail fibres to make them visible again. E. coli also makes a variety of defensive DNA-destroying enzymes, but T4 can evade many of those by decorating its own DNA, although there's likely an ongoing battle here, with E. coli cells swapping defence genes among themselves." (Ingram, 2001, pp.207-208)
"Perhaps the most effective defences are what are called `guests' hiding in the E. coli DNA. These are genes left behind in the E. coli chromosome by other phages or in some case by some unknown visitor. These alien genes will not permit the T4 to reproduce inside the E. coli cell, but this act of defiance is a noble one for the bacterium, because the bacterium dies in the process, reminiscent of the infamous phrase from the Vietnam War, `We had to destroy the village to save it.' In this molecular version, however, death of the bacterium does insure that no new viruses will be produced from it." (Ingram, 2001, p.208)
"Do you want to be happy? Of course you do! Then what's standing in your way? Your happiness is entirely up to you. This has been revealed to us by a man of divine serenity and wisdom who spent his life among us, and showed us, by his personal example and by his teaching, the path to redemption from unhappiness. His name was Epicurus. ... The fundamental obstacle to happiness, says Epicurus, is anxiety. No matter how rich or famous you are, you won't be happy if you're anxious to be richer or more famous. No matter how good your health is, you won't be happy if you're anxious about getting sick. You can't be happy in this life if you're worried about the next life. You can't be happy as a human being if you're worried about being punished or victimized by powerful divine beings. But you can be happy if you believe in the four basic truths of Epicureanism: there are no divine beings which threaten us; there is no next life; what we actually need is easy to get; what makes us suffer is easy to put up with. This is the so-called 'four-part cure', the Epicurean remedy for the epidemic sickness of human anxiety; as a later Epicurean puts it, `Don't fear god, don't worry about death; what's good is easy to get, and what's terrible is easy to endure.'" (Hutchinson, D.S., "Introduction," in Inwood, B. & Gerson, L.P., eds, 1994, "The Epicurus Reader," Hackett Publishing Co: Indianapolis IN, p.vii. Emphasis original).
"`Don't worry about death.' While you are alive, you don't have to deal with being dead, but when you are dead you don't have to deal with it either, because you aren't there to deal with it. `Death is nothing to us,' as Epicurus puts it, for `when we exist, death is not yet present, and when death is present, then we do not exist.' [Epicurus, Letter to Menoeceus, text 4, section 125] Death is always irrelevant to us, even though it causes considerable anxiety to many people for much of their lives. Worrying about death casts a general pall over the experience of living, either because people expect to exist after their deaths and are humbled and terrified into ingratiating themselves with the gods, who might well punish them for their misdeeds, or else because they are saddened and terrified by the prospect of not existing after their deaths. But there are no gods which threaten us, and, even if there were, we would not be there to be punished. Our souls are flimsy things which are dissipated when we die, and even if the stuff of which they were made were to survive intact, that would be nothing to us, because what matters to us is the continuity of our experience, which is severed by the parting of body and soul. It is not sensible to be afraid of ceasing to exist, since you already know what it is like not to exist; consider any time before your birth-was it disagreeable not to exist? And if there is nothing bad about not existing, then there is nothing bad for your friend when he ceases to exist, nor is there anything bad for you about being fated to cease to exist. It is a confusion to be worried by your mortality, and it is an ingratitude to resent the limitations of life, like some greedy dinner guest who expects an indefinite number of courses and refuses to leave the table." (Hutchinson, 1994, p.viii-ix).
"`Don't fear god.' The gods are happy and immortal, as the very concept of `god' indicates. But in Epicurus' view, most people were in a state of confusion about the gods, believing them to be intensely concerned about what human beings were up to and exerting tremendous effort to favour their worshippers and punish their mortal enemies. No; it is incompatible with the concept of divinity to suppose that the gods exert themselves or that they have any concerns at all. The most accurate, as well as the most agreeable, conception of the gods is to think of them, as the Greeks often did, in a state of bliss, unconcerned about anything, without needs, invulnerable to any harm, and generally living an enviable life. So conceived, they are role models for Epicureans, who emulate the happiness of the gods, within the limits imposed by human nature. `Epicurus said that he was prepared to compete with Zeus in happiness, as long as he had a barley cake and some water.' If, however, the gods are as independent as this conception indicates, then they will not observe the sacrifices we make to them, and Epicurus was indeed widely regarded as undermining the foundations of traditional religion. Furthermore, how can Epicurus explain the visions that we receive of the gods, if the gods don't deliberately send them to us? These visions, replies Epicurus, are material images travelling through the world, like everything else that we see or imagine, and are therefore something real; they travel through the world because of the general laws of atomic motion, not because god sends them. But then what sort of bodies must the gods have, if these images are always streaming off them, and yet they remain strong and invulnerable? Their bodies, replies Epicurus, are continually replenished by images streaming towards them; indeed the `body' of a god may be nothing more than a focus to which the images travel, the images that later travel to us and make up our conception of its nature." (Hutchinson, 1994, pp.ix-x).
"If the gods do not exert themselves for our benefit, how is it that the world around us is suitable for our habitation? It happened by accident, said Epicurus, an answer that gave ancient critics ample opportunity for ridicule, and yet it makes him a thinker of a very modern sort, well ahead of his time. Epicurus believed that the universe is a material system governed by the laws of matter. The fundamental elements of matter are atoms, which move, collide, and form larger structures according to physical laws. These larger structures can sometimes develop into yet larger structures by the addition of more matter, and sometimes whole worlds will develop. These worlds are extremely numerous and variable; some will be unstable, but others will be stable. The stable ones will persist and give the appearance of being designed to be stable, like our world, and living structures will sometimes develop out of the elements of these worlds. This theory is no longer as unbelievable as it was to the non-Epicurean scientists and philosophers of the ancient world, and its broad outlines may well be true." (Hutchinson, 1994, pp.ix-x).
"We happen to have a great deal of evidence about the Epicurean philosophy of nature, which served as a philosophical foundation for the rest of the system. But many Epicureans would have had little interest in this subject, nor did they need to, if their curiosity or scepticism did not drive them to ask fundamental questions. What was most important in Epicurus' philosophy of nature was the overall conviction that our life on this earth comes with no strings attached; that there is no Maker whose puppets we are; that there is no script for us to follow and be constrained by; that it is up to us to discover the real constraints which our own nature imposes on us. When we do this, we find something very delightful: life is free, life is good, happiness is possible, and we can enjoy the bliss of the gods, rather than abasing ourselves to our misconceptions of them." (Hutchinson, 1994, p.x).
"Like nearly everything else, evolution was invented, or almost invented, by the Greeks. From Heraclitus and Anaximander came the suggestion that animal species are mutable; from Aristotle, the idea of a graded series of organisms, the idea of continuity in nature or the shading of one class into another, and a model of evolutionary process in the development of the germ into the plant. From both the Stoics and the Epicureans, and particularly from Lucretius, came the doctrine that man is a part of nature and that his origins are animal and savage rather than godlike and idyllic." (Irvine, W., 1955, "Apes, Angels and Victorians: The Story of Darwin, Huxley, and Evolution," McGraw-Hill: New York NY, pp.84-85).
"Already in The Origin of Species Darwin is haunted by the mystery of genetics. If variations cause evolution, what causes variations? He attacks the problem in the first and second chapters, and finally at length in the fifth. The discussion is cautious and sensible but also vague and occasionally confused. He sometimes talks as though natural selection not only sifts variations but causes them. Later, when taken to task for these lapses by Lyell and Wallace, he rectified many passages but allowed a few to remain, even in the last edition of his book. In general, he holds that variations arise through unknown hereditary factors within the organism, through use and disuse, the correlation of parts, and changes in environment. Domestic animals are extremely variable because man has introduced them into many and diverse regions. The domestic duck cannot rise from the ground because it has long ceased to need or use its wings. Significantly, its young can still fly. In short, he is often, so to speak, a Buffonian or a Lamarckian on the genetic level. At his best, he simply acknowledges a complete ignorance of the whole subject." (Irvine, 1955, p.92).
"You could not see natural selection at work. Therefore it was a mere empty speculation. But in a more particular sense the sore point was natural selection itself. It seemed to substitute accident-or, as some felt, mechanism-for intelligent purpose in the natural order. ... Natural selection was an ingenious hypothesis but of course it could not be taken seriously. It omitted its own ultimate and governing factor. The American Asa Gray, a warm and sincere Darwinian, held that, so far from representing chance, natural selection embodied a blind necessity totally incompatible with theism, unless the stream of variations themselves could be conceived as guided by design. [Gray, A. "Design versus Necessity," in "Darwiniana," D. Appleton & Co: New York NY, 1876, pp.75-76] ... When Asa Gray pleaded that variations might be divinely guided, Darwin ... felt that the more divine guidance in variations, the less reality in natural selection." (Irvine, 1955, p.108).
"At the end of his life, he [Darwin] spoke out frankly in the `Autobiography:' As usual, he explained himself with a history. His religion had wasted away before his science in a war of attrition so gradual that, in his own words, he `felt no distress' and hardly realized that a shot had been fired. Soon after his return to England, while yet hesitating between an evolutionary and a theological biology, he had discovered -no doubt with astonishment-that he had become a complete skeptic about Revelation. His ideas of progress and evolution-secondarily, his humanitarianism-had been decisive. He saw that scriptures and mythology were part of the evolution of every people. `The Old Testament was no more to be trusted than the sacred books of the Hindoos,' [Darwin, C.R. in Barlow, N., ed., "The Autobiography of Charles Darwin," W.W. Norton & Co: New York, 1958, p.85] not only because of `its manifestly false history of the world' but because of `its attributing to God the feelings of a revengeful tyrant.' [Ibid, p.85] He rejected Christian miracles because they were similar to those in other mythologies, because they rested on dubious and conflicting testimony, and because they contradicted the uniformitarianism he had learned from Lyell. He also rejected the divinity of Jesus and doubted the supremacy of Christian ethics. `Beautiful as is the morality of the New Testament, it can hardly be denied that its perfection depends in part on the interpretation we now put on metaphors and allegories:' [Ibid, p.86]" (Irvine, 1955, p.109).
"Darwin's matter was as English as his method. Terrestrial history turned out to be strangely like Victorian history writ large. Bertrand Russell. and others have remarked that Darwin's theory was mainly `an extension to the animal and vegetable world of laissez faire economics' [Russell, B., "Religion and Science," Home University Library: London, 1935, pp.72-73] As a matter of fact, the economic conceptions of utility, pressure of population, marginal fertility, barriers in restraint of trade, the division of labor, progress and adjustment by competition, and the spread of technological improvements can all be paralleled in The Origin of Species. But so, also, can some of the doctrines of English political conservatism. In revealing the importance of time and the hereditary past, in emphasizing the persistence of vestigial structures, the minuteness of variations and the slowness of evolution, Darwin was adding Hooker and Burke to Bentham and Adam Smith. The constitution of the universe exhibited many of the virtues of the English Constitution." (Irvine, 1955, p.98).
"Understanding the literature on human evolution calls for the recognition of special problems that confront scientists who report on this topic. Regardless. of how the scientists present them, accounts of human origins are read as replacement material for genesis [sic]. They fulfil needs that are reflected in the fact that all societies have in their culture some form of origin beliefs, that is, some narrative or configurational notion of how the world and humanity began. Usually, these beliefs do more than cope with curiosity, they have allegorical content, and they convey values, ethics and attitudes. The Adam and Eve creation story of the Bible is simply one of a wide variety of such poetic formulations." (Isaac, G., in Isaac, B., ed., 1990, "The Archaeology of Human Origins: Papers by Glynn Isaac," Cambridge University Press: Cambridge UK, p.96).
"We are conscious of a great change in all this, starting in the eighteenth and nineteenth centuries, The scientific movement which culminated in Darwin's compelling formulation of evolution as a mode of origin seemed to sweep away earlier beliefs and relegate them to the realm of myth and legend. Following on from this, it is often supposed that the myths have been replaced by something quite different. which we call `science'. However. this is only partly true: scientific theories and information about human origins have been slotted into the same old places in our minds and our cultures that used to be occupied by the myths, the information component has then inevitably been expanded to fill the same needs. Our new origin beliefs are in fact surrogate myths, that are themselves part science, part myths." (Isaac, 1990, p.96).
"abiogenesis The origin of living from nonliving matter, as by *biopoiesis. See also spontaneous generation." (Isaacs, A., Daintith, J. & Martin, E., eds., "Concise Science Dictionary," , Oxford University Press: Oxford UK, Second Edition, 1991, p.1. Emphasis original).
"biogenesis The principle that a living organism can only arise from other living organisms similar to itself (i.e. that like gives rise to like) and can never originate from nonliving material. Compare spontaneous generation." (Isaacs, et al., 1991, p.74. Emphasis original).
"biopoiesis The development of living matter from complex organic molecules that are themselves nonliving but self-replicating. It is the process by which life is assumed to have begun. See origin of life." (Isaacs, et al., 1991, p.74. Emphasis original).
"Darwinism The theory of *evolution proposed by Charles Darwin (1809-82) in On the Origin of Species (1859), which postulated that present-day species have evolved from simpler ancestral types by the process of *natural selection acting on the variability found within populations. On the Origin of Species caused a furore when it was first published because it suggested that species are not immutable nor were they specially created - a view directly opposed to the doctrine of *special creation. However the wealth of evidence presented by Darwin gradually convinced most people and the only major unresolved problem was to explain how the variations in populations arose and were maintained from one generation to the next. This became clear with the rediscovery of Mendel's work on classical genetics in the 1900s and led to the present theory known as neo-Darwinism." (Isaacs, et al., 1991, p.183. Emphasis original).
"Evolution The gradual process by which the present diversity of plant and animal life arose from the earliest and most primitive organisms, which is believed to have been continuing for at least the past 3000 million years. Until the middle of the 18th century it was generally believed that each species was divinely created and fixed in its form throughout its existence (see special creation). Lamarck was the first biologist to publish a theory to explain how one species could have evolved into another (see Lamarckism), but it was not until the publication of Darwin's On the Origin of Species in 1859 that special creation was seriously challenged. Unlike Lamarck, Darwin proposed a feasible mechanism for evolution and backed it up with evidence from the fossil record and studies of comparative anatomy and embryology (see Darwinism; natural selection). The modern version of Darwinism, which incorporates discoveries in genetics made since Darwin's time, probably remains the most acceptable theory of species evolution. More controversial, however, and still to be firmly clarified, are the relationships and evolution of groups above the species level." (Isaacs, et al., 1991, pp.251-252. Emphasis original).
"mutation A sudden random change in the genetic material of a cell that may cause it and all cells derived from it to differ in appearance or behaviour from the normal type. An organism affected by a mutation (especially one with visible effects) is described as a mutant. Somatic mutations affect the nonreproductive cells and are therefore restricted to the tissues of a single organism but germline mutations, which occur in the reproductive cells or their precursors, may be transmitted to the organism's descendants and cause abnormal development. Mutations occur naturally at a low rate but this may be increased by radiation and by some chemicals (see mutagen). Most (the gene mutations) consist of invisible changes in the DNA of the chromosomes, but some (the chromosome mutations) affect the appearance or the number of the chromosomes. An example of a chromosome mutation is that giving rise to *Down's syndrome. The majority of mutations are harmful, but a very small proportion may increase an organism's *fitness; these spread through the population over successive generations by natural selection. Mutation is therefore essential for evolution, being the ultimate source of genetic variation." (Isaacs, 1991, p.455. Emphasis original).
"natural selection The process that, according to *Darwinism, brings about the evolution of new species of animals and plants. Darwin noted that the size of any population tends to remain constant despite the fact that more offspring are produced than are needed to maintain it. He also saw that variations existed between individuals of the population and concluded that disease, competition, and other forces acting on the population eliminated those individuals less well adapted to their environment. The survivors would pass on any inheritable advantageous characteristics (i.e. characteristics with survival value) to their offspring and in time the composition of the population would change in adaptation to a changing environment. Over a long period of time this process could give rise to organisms so different from the original population that new species are formed. *See also* adaptive radiation. *Compare* punctuated equilibrium." (Isaacs, 1991, p.458. Emphasis original).
"neo-Darwinism (modern synthesis) The current theory of the process of *evolution, formulated between about 1920 and 1950, that combines evidence from classical genetics with the Darwinian theory of evolution by *natural selection (see Darwinism)*. It makes use of modern knowledge of genes and chromosomes to explain the source of the genetic variation upon which selection works. This aspect was unexplained by traditional Darwinism." (Isaacs, et al., 1991, pp.459-460. Emphasis original).
"origin of life The process by which living organisms developed from inanimate matter, which is generally thought to have occurred on earth between 3500 and 4000 million years ago. It is supposed that the primordial atmosphere was like a chemical soup containing all the basic constituents of organic matter: ammonia, methane, hydrogen, and water vapour. These underwent a process of chemical evolution using energy from the sun and electric storms to combine into ever more complex molecules, such as amino acids, proteins, and vitamins. Eventually self-replicating nucleic acids, the basis of all life, could have developed. The very first organisms may have consisted of such molecules bounded by a simple membrane. " (Isaacs, et al., 1991, p.491. Emphasis original).
"Special Creation. The belief, in accordance with the Book of Genesis, that every species was individually created by God in the form in which it exists today and is not capable of undergoing any change. It was the generally accepted explanation of the origin of life until the advent of *Darwinism. The idea has recently enjoyed a revival, especially among members of the fundamentalist movement in the USA, partly because there still remain problems that cannot be explained entirely by Darwinian theory. However, special creation is contradicted by fossil evidence and genetic studies, and the pseudoscientific arguments of creation science cannot stand up to logical examination." (Isaacs, et al., 1991, pp.646-647. Emphasis original).
"spontaneous generation The discredited belief that living organism can somehow be produced by nonliving matter. For example, it was once thought that microorganisms arose by the process of decay and even that vermin spontaneously developed from household rubbish. Controlled experiments using sterilized media by Pasteur and others finally disproved these notions. Compare biogenesis. See also* biopoiesis" (Isaacs, et al., 1991, pp.652-653. Emphasis original). | <urn:uuid:c5f1e880-ce17-44e3-9d6d-9cb544c20683> | CC-MAIN-2021-21 | https://creationevolutiondesign.blogspot.com/2007/11/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989749.3/warc/CC-MAIN-20210510204511-20210510234511-00537.warc.gz | en | 0.957831 | 7,623 | 2.515625 | 3 |
formative assessment for kindergarten
Instructional Framework Introduction. From notetaking on how knowledge is forming as students are learning to the importance of understanding a child’s learning path, Becky relates this topic to all ages as we discuss the best practices in formative assessment. They happen while you teach, and they provide insight into what students understand before you give or grade a single test. Formative assessments are a classroom staple, but how can you continue this process when learning is digital? Formative Assessment vs. Summative Assessment. (Grading quizzes but assigning low point values is a great way to make sure students really try: The quizzes matter, but an individual low score can’t kill a student’s grade.) Becky Holden talks about formative assessment in kindergarten. Formative Assessment Writing Activities and Research Activities Formative Assessment Activities Definition Classroom Options Writing Break Stop in the middle of class and give students two minutes to write about the lesson or topic. Feedback and assessment methods included peer … Formative assessment happens naturally as we walk around the room and listen in on student conversations or examine their classwork after the bell rings. Just as a doctor assesses your health status and makes recommendations to improve your well-being, teachers use assessments to detect students’ strengths and weaknesses so they can help them improve and direct their learning. While the common goal is to establish the development, strengths and weaknesses of each student, each assessment type provides different insights and actions for educators. For more introverted students—or for more private assessments—use Flipgrid, Explain Everything, or Seesaw to have students record their answers to prompts and demonstrate what they can do. When meeting with one student at a time, teachers get to focus on the individual needs and ability of that particular student. Nov 27, 2019 - Explore Jennifer Hoffpauir's board "formative assessment", followed by 765 people on Pinterest. One last piece of advice is to choose one formative assessment and make it your own. Polling tools Or think beyond the visual and have kids act out their understanding of the content. Here are some examples of preschool formative assessments. FAQ 5. Formative Assessments Formative assessments are included at the end of each lesson. More specifi cally, formative assessments target areas that need additional practice, and help the teacher recognize areas in which children are A formative assessment is an evaluation of student comprehension and needs that occurs in the midst of a lesson, unit or course. Low-stakes quizzes and polls: If you want to find out whether your students really know as much as you think they know, polls and quizzes created with Socrative or Quizlet or in-class games and tools like Quizalize, Kahoot, FlipQuiz, Gimkit, Plickers, and Flippity can help you get a better sense of how much they really understand. Search by Date Search by Date. Formative Assessment is part of the instructional process. Please check back after the institute to view recordings of our sessions! Taking quick notes on a tablet or smartphone, or using a copy of your roster, is one approach. K-5 Unit Assessments and Data. MTSS. Writing Assessment in Kindergarten. 4 comments: Selz June 5, 2017 at 4:10 AM. Home Page. If a tool is too complicated, is not reliable or accessible, or takes up a disproportionate amount of time, it’s OK to put it aside and try something different. 1. 7. 
Methods that incorporate art: Consider using visual art or photography or videography as an assessment tool. Posted by Teacher Toni at 11:16 AM. Join us to discuss and discover formative assessment in pre-k and kindergarten. The Objectives for the lesson are: Students will be able to count from 0 to 100 by 1's Students will be able to count on the decades (10, 20, 30..) from 0 to 100. These samples were collected during our kindergarten writers workshop time. Designing just the right assessment can feel high stakes—for teachers, not students—because we’re using it to figure out what comes next. This part of the assessment is a performance task. 2020 National Early Childhood Inclusion Institute, Formative Assessment in Pre-K & Kindergarten for ALL Children. To provide you with a comprehensive repertoire, I have labeled each assessment as Individual, Partner, Small Group, or Whole Class. All of these strategies give teachers an unobtrusive way to see what students are thinking. A formative assessment or assignment is a tool teachers use to give feedback to students and/or guide their instruction. In this sense, formative assessment informs both teachers and students about student understanding at a point when timely adjustments can be made. Becky Holden talks about formative assessment in kindergarten. Formative assessments may decrease a student's test anxiety that usually comes at the end of a lesson. This might sound familiar to you…. Home. Have student The five domains of learning will be explored followed by the sharing of documentation ideas appropriate for both pre-k and k-3 assessment. Because you can design the questions yourself, you determine the level of complexity. • Formative assessment is a process that provides a critical link between standards, curriculum, and instruction. Fast and fun formative assessment tools are perfect for checking in along those learning journeys. Effective assessment of transitional kindergarten-aged students can be ... How to Use Formative Assessment in the TK Classroom. Start the class off with a quick question about the previous day’s work while students are getting settled—you can ask differentiated questions written out on chart paper or projected on the board, for example. Formative assessments are the educational equivalent of a medical checkup. The beauty of formative assessment is that it's done while the students are still learning. Interview assessments: If you want to dig a little deeper into students’ understanding of content, try discussion-based assessment methods. Listening in on student partners or small-group conversations allows you to quickly identify problems or misconceptions, which you can address immediately. FAQ 3. May 25, 2015 - This board includes resources for both formative and summative assessment with young children. A formative assessment is an evaluation of student comprehension and needs that occurs in the midst of a lesson, unit or course. May 25, 2015 - This board includes resources for both formative and summative assessment with young children. During formative assessment periods in your class, invite a student to go to the reading corner, pick out a book, and read for a while. All about kindergarten assessments! Here is an extensive list of 75 digital tools, apps, and platforms that can help you and your students use formative assessment to elicit evidence of learning. Formative assessment differs from summative assessment in that it is generally low-stakes and is used to monitor student learning. 
Students can instead use six hand gestures to silently signal that they agree, disagree, have something to add, and more. What are three things you learned, two things you’re still curious about, and one thing you don’t understand? You will also need 4 tube socks. Ask them to pick their own trouble spot from three or four areas where you think the class as a whole needs work, and write those areas in separate columns on a whiteboard. That’s why it’s important to keep it simple: Formative assessments generally just need to be checked, not graded, as the point is to get a basic read on the progress of individuals, or the class as a whole. Some Parting Words on Formative Assessment Tools. It is not included in a student grade, nor should it be used to judge a teacher's performance. ARTS ACHIEVE students and teachers worked together to develop clear checklists and rubrics, against which students could measure their own progress. See more ideas about kindergarten assessment, kindergarten, formative and summative assessment. Using innovative formative assessment strategies consistently and effectively removes the surprises from getting final grades. When incorporated into classroom practice, it provides the information needed to adjust teaching and learning while they are happening. Curriculum Mapping. District Professional Development. Fast and fun formative assessment tools are perfect for checking in along those learning journeys. FAQ 5. The purpose of a formative assessment is to help students learn and to improve the learning process itself. Formative assessment—discovering what students know while they’re still in the process of learning it—can be tricky. Ask students to write for one minute on the most meaningful thing they learned. In my kindergarten class’ upcoming science unit, students will look more closely at plant and animal survival needs. A teaching team demonstrates how they use formative assessment with students in the classroom to make adjustments and respond to student learning and understanding. Several self-assessments let the teacher see what every kid thinks very quickly. 2 Decomposing Numbers (Operations and Algebraic Thinking) Kindergarten This Formative Assessment Lesson is designed to be be implemented approximately two Mathematical goals This lesson is intended to help you assess how well students are able to decompose numbers less than or equal to 10 into … The purpose of a formative assessment is to help students learn and to improve the learning process itself. Exit slips can take lots of forms beyond the old-school pencil and scrap paper. There is no shortage of strategies, techniques, and tools available to teachers who use formative instructional practice in their classrooms. Can your kindergarteners correctly identify and name shapes regardless of their orientation or overall size? However, formative assessment of all standards IS required. Ask students to explain the “muddiest point” in the lesson—the place where things got confusing or particularly difficult or where they still lack clarity. Click on the Cluster in the table below to be taken to the resource page for lessons, tasks, and additional resources for teaching the NC Mathematics Standard Course of Study. Email This BlogThis! If you choos… The lesson for this unit was based on Common Core Standard 2 for Kindergarten (K.CC.2). Curriculum Materials. This blog post compares October kindergarten writing samples with December kindergarten writing samples. 
Early childhood educators are very familiar with the requirement to observe and understand a child’s learning and development but it seems many become confused and overwhelmed when we begin talking about ‘assessment of learning’ or writing summative assessments.. Formative assessment can be challenging especially for the younger grades. See more ideas about kindergarten assessment, kindergarten, formative and summative assessment. They can create a dance to model cell mitosis or act out stories like Ernest Hemingway’s “Hills Like White Elephants” to explore the subtext. 1. A quick way to see the big picture if you use paper exit tickets is to sort the papers into three piles: Students got the point; they sort of got it; and they didn’t get it. When incorporated into classroom practice, it provides the information needed to adjust teaching and learning while they are happening. Often you can give your rubric to your students and have them spot their strengths and weaknesses. In fact, Black and Wiliam (1998) argue that by using formative assessments, teachers can help students learn at about twice the rate. Resources on this page are organized by the Instructional Framework. […] Ask questions at the bottom of Bloom’s taxonomy and you’ll get insight into what facts, vocabulary terms, or processes kids remember. Educators use formative assessments to monitor and update classroom instruction, and these types of assessments are not used in the grade point average of the student. To prepare for this assessment, you will need copies of the Five Senses Task Assessment recording sheet included as a PDF with this lesson. Teacher-Student Conferences. Assessment for Learning is also known as formative assessments. Based on insights collected through formative assessment, teachers get to adjust their instructional strategies to attend to students’ emerging learning needs better plan for future teaching/learning opportunities. • Reliable assessment and … Formative assessments are NOT limited to those listed below. No matter the tool, the key to keeping students engaged in the process of just-walked-in or almost-out-the-door formative assessment is the questions. One of my favorite math teacher books has 75 formative assessments. SEARCH BY KEYWORD. The teacher cannot expect the students to come to school reading and writing, while some might be able to, most students cannot. The purpose of this document is to connect and sequence mathematical … In this post, I will describe three formative assessments I will use in my kindergarten class to gauge if students are meeting the following science objective of the plant and animal unit: Objective: 85% of students… Join us to discuss and discover formative assessment in pre-k and kindergarten. I teach kindergarten in a rural, high-poverty school in North Carolina that has a diverse student population, including children who speak a language other than English at home and whose parents (and often other family members) are migrant workers, often coming from Mexico. So, now make a plan for how you are going to incorporate formative assessment into you class and what actions you will take as a result. FAQ 2 . 
These informal assessment resources are: Compact and fit in a zippered pencil pouch for a binder; Easily accessible and portable; Quiet and not distracting to other students; Appropriate for individual interactions; Easily obtained and inexpensive (dollar stores); and, Based on formative assessment results, they can fill in learning gaps as students prepare for kindergarten. Most of my DLL students live in homes where Spanish is the primary language; some children are learning both Spanish and a dialect of Mixtec at home, but these students communicate in Spanish (and in their emergent English) in the classroom. From notetaking on how knowledge is forming as students are learning to the importance of understanding a child’s learning path, Becky relates this topic to all ages as we discuss the best practices in formative assessment. Use these quick, formative assessments to identify what standards students have mastered and where they may need additional support. This Kindergarten: Geometry (K.G.2) Formative Assessment Task Lesson Plan is suitable for Kindergarten. In other words, feedback is used to improve learning. In this post, I will describe three formative assessments I will use in my kindergarten class to gauge if students are meeting the following science objective of the plant and animal unit: Objective: 85% of students… FAQ 4. The National Forum on Assessment (1995) suggests that assessment systems include opportunities for both individual and group work. The goal of the formative assessment is to monitor children’s learning and to provide ongoing feedback. Formative Assessment Guide for Kindergarten English and Language Arts Common Core Standards 1 Listed below are suggested ways to formatively assess ELA Common Core Standards. The Inclusion Institute is full. • Formative assessment data are used to plan effective and differentiated instruction and intervention for young children. As such, here are 7 ideas that kindergarten teachers can use to gather data and learn about the students’ progress. Entry and exit slips: Those marginal minutes at the beginning and end of class can provide some great opportunities to find out what kids remember. B ecky Holden talks about formative assessment in kindergarten. Are we ready to move on? The teacher engages in formative discussion, requiring students to explain their thinking and challenge each other's assertions as they work to solve the second problem. Whether you’re assessing at the bottom of Bloom’s taxonomy or the top, you can use tools like Padlet or Poll Everywhere, or measure progress toward attainment or retention of essential content or standards with tools like Google Classroom’s Question tool, Google Forms with Flubaroo, and Edulastic, all of which make seeing what students know a snap. Keep in mind that this is a short list of ideas for formative assessments. Kindergarten Formative Assessment Lesson Problem Solving Formative Assessment Lesson. Kids in many classes are always logged in to these tools, so formative assessments can be done very quickly. Preschool teachers use formative assessments to check the cognitive, social-emotional, and physical development of their students throughout the school year. This Kindergarten: Geometry (K.G.2) Formative Assessment Task Lesson Plan is suitable for Kindergarten. 3. A few years ago, I came across “10 assessments you can perform in 90 seconds” by TeachThought and really enjoyed the formative assessment strategies they outlined. It takes formative assessment to accomplish this. 
Nov 27, 2019 - Explore Jennifer Hoffpauir's board "formative assessment", followed by 765 people on Pinterest. Alogrithms. Formative and summative assessments both check for understanding. See more ideas about formative assessment, teaching, teaching classroom. W.K.1 Note student’s ability to recognize their own opinions. The following are common types of formative assessment. Specific strategies will be discussed for use in staying true to best practice, meeting challenging mandates, using assessment to guide instruction, and recognizing the connections and alignment between pre-k and kindergarten. Dipsticks: So-called alternative formative assessments are meant to be as easy and quick as checking the oil in your car, so they’re sometimes referred to as dipsticks. Browse kindergarten formative assessment resources on Teachers Pay Teachers, a marketplace trusted by millions of teachers for original educational resources. Or do a misconception check: Present students with a common misunderstanding and ask them to apply previous knowledge to correct the mistake, or ask them to decide if a statement contains any mistakes at all, and then discuss their answers. Students created work, obtained feedback, and revised, all according to established criteria. There are several ways to carry on formative assessment in your class. It is not included in a student grade, nor should it be used to judge a teacher's performance. Get this page as a PDF: Guidance on Diagnostic and Formative Assessments (PDF) Return to Stronger Together: A Guidebook for the Safe Reopening of California's Public Schools . I'll be in 2nd Grade this year, but I'll still keep all of you crazy Kindergarten people in my prayers as you take on a new group of angels. The goal of the formative assessment is to monitor children’s learning and to provide ongoing feedback. Participants may choose to further enhance knowledge gained in this session by scheduling a demonstration classroom guided observation to see how formative assessment is used to guide instruction. Mighty Math is a set of weekly formative assessments that can be used to evaluate progress towards the Kindergarten math Common Core State Standards. Welcome, Kindergarten Math Teachers. Formative Assessments Formative assessments are included at the end of each lesson. All nine of the kindergarten CCSS are assessed each week: K.CC.A: Know number names and the count sequences. A focused observation form is more formal and can help you narrow your note-taking focus as you watch students work. Search by Topic. 7 Approaches to Formative Assessment 1. K-5 Homework. Student writing samples used to assess students writing progress across the writing continuum. With this formative assessment strategy, you’ll ask one student a question and then ask another student if that answer seems reasonable or correct. These can be things like asking students to: Your own observations of students at work in class can provide valuable data as well, but they can be tricky to keep track of. 6. Ask more complicated questions (“What advice do you think Katniss Everdeen would offer Scout Finch if the two of them were talking at the end of chapter 3?”), and you’ll get more sophisticated insights. Formative assessments are conducted within the context of classroom activities, and help you determine your students’ academic and social-emotional development on an ongoing basis. 
Because formative assessment is an essential part of the learning process, it’s vital that teachers have everything they need to make learning interesting, engaging, fun and successful. You can also shift some of this work to students using a peer-feedback process called TAG feedback (Tell your peer something they did well, Ask a thoughtful question, Give a positive suggestion). Essential Learning Outcomes. What I found interesting about this work was... write a letter explaining a key idea to a friend, draw a sketch to visually represent new knowledge, or. Students can discuss in pairs, share and write comments, and/or read a few aloud. How would you have done things differently today, if you had the choice? Misconceptions and errors: Sometimes it’s helpful to see if students understand why something is incorrect or why a concept is hard. January 4, 2016 January 4, 2016 meganlee15 . You can try prompts like: Or skip the words completely and have students draw or circle emojis to represent their assessment of their understanding. Both of these would be considered summative assessments. 5. This assessment mega pack includes 140 exit slips covering all the kindergarten math common core standards! Formative Assessment Guide for Kindergarten English and Language Arts Common Core Standards 4 (Standard W.K.1 – 8) Condition: Large group instruction Observation: Before writing a summative writing piece, write a piece with the same purpose as a class. As educators, we work to figure out who understands the teaching point of a lesson, who has mastered a new concept, who needs extra help. Formative assessment can be used alongside data walls10 in that it provides teachers with evidence of student learning. Unlike summative assessments, which occur at the end of a unit, formative assessments are simply check-ins to gauge students’ understanding. Five minutes per student would take quite a bit of time, but you don’t have to talk to every student about every project or lesson. Including technology, planning for assessments, how to help your students navigate in the ins and outs of testing, helping them with the results, and how to interact with parents to allow them to also help in the process! More specifi cally, formative assessments target areas that need additional practice, and help the teacher recognize areas in which children are struggling, and address those areas immediately. The purpose of formative assessment is to check students understanding during the learning process. Mighty Math is a set of weekly formative assessments that can be used to evaluate progress towards the Kindergarten math Common Core State Standards. Then, ask a third student for an explanation of why there is an agreement or not. Both of these would be considered summative assessments. Kindergarten is usually the first time kids are in school and they need to be assessed appropriately. Share to Twitter Share to Facebook Share to Pinterest. FAQ. In this sense, formative assessment informs both teachers and students about student understanding at a point when timely adjustments can be made. Formative assessment can be used alongside data walls10 in that it provides teachers with evidence of student learning. Using AfL is not easy, but when used effectively can help students learn better. … A single data point—no matter how well designed the quiz, presentation, or problem behind it—isn’t enough information to help us plan the next step in our instruction. The following are common types of formative assessment. 
Entry and exit slips: Those marginal minutes at the beginning and end of class can provide some great opportunities to find out what kids remember. Search by Topic. Information will be shared about the NC Office of Early Learning’s Pre-K/K Demonstration Program, where visits to inclusive preschool and kindergarten classrooms are available to observe evidence based practices. It allows the students to ue their five senses to figure out the identity of mystery items. Formative Assessment is part of the instructional process. The lesson for this unit was based on Common Core Standard 2 for Kindergarten (K.CC.2). When we use formative assessment strategies, we’re on a fact-finding mission. Happy Kindergarten Assessment to you! Here are seven awesome websites to help you collect formative assessment data during home learning. 2. Formative Assessment in Kindergarten Becky Holden in episode 398 of the 10-Minute Teacher Podcast. Formative assessment and summative assessment are two overlapping, complementary ways of assessing pupil progress in schools. Some Parting Words on Formative Assessment Tools. A formative assessment or assignment is a tool teachers use to give feedback to students and/or guide their instruction. All nine of the kindergarten CCSS are assessed each week: K.CC.A: Know number names and the count sequences. Kindergarten assessments. You can use sticky notes to get a quick insight into what areas your kids think they need to work on. Unpacking. Formative assessment forms and informs learning for the student and teacher. For example, you can use colored stacking cups that allow kids to flag that they’re all set (green cup), working through some confusion (yellow), or really confused and in need of help (red).
Visakhapatnam District Mandals And Villages List, Marketing Salaries 2019, Vermont Cheese Types, Sistema Educativo En Guatemala, Factors Affecting Management Of Resources In Schools, Vans Authentic Black Men's, Sakrete Play Sand, Chicken Liver Curry Recipe Kerala Style, Cheap Butterfly Release, Chalet Rental French Alps, Brownstone For Rent Brooklyn, Stir-fry Cauliflower With Pork, | <urn:uuid:9bbdc680-cb8e-42da-ac54-0387729097c6> | CC-MAIN-2021-21 | https://lawnenforcementct.com/qk4z384/gm7eq86.php?tag=formative-assessment-for-kindergarten-f25679 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991557.62/warc/CC-MAIN-20210517023244-20210517053244-00055.warc.gz | en | 0.940414 | 5,249 | 4.25 | 4 |
An 1878 brochure from the New England Telephone company, "How to Make a Telephone Call," explains, with illustrations, the use of its new instrument. One of the drawings "represents a person calling attention by pressing the knob at the end of Bell Box, and turning the crank, causing the Bell at the other station to ring. When the person at the other end hears the call, he will call back; then both will turn the switches to button marked T." As the instructions continue, "The Telephone can then be used." Of course, these early adopters began "using" the telephone the minute they pressed that knob and turned that crank. But the writer of this pamphlet understood that he was selling a revolutionary new experience, not the intricacies of a complicated machine. He needed to bracket off the preliminary manipulation of buttons and cranks as something different from this new and seemingly magical phenomenon of talking to another human over a great distance.
And yet those pesky controls required attention. This was a machine that demanded some expertise. As the pamphlet emphatically notes, "When you have finished talking, BE SURE AND TURN THE SWITCH TO BUTTON MARKED B." Oh, and watch out if the sky clouds up: "If a thunderstorm threatens, insert the plug that is supplied with the Bell, into the hole marked A." This presumably breaks the circuit between the two telephones and prevents electricity from a lightning strike from traveling along the wires that connect them. But remember to remove the plug the next time you want to make a call!
It is easy to laugh at how difficult previous generations of technology were to use, especially once they have matured and become everyday consumer items. Many improvements arise from better materials, manufacturing processes, and engineering: The physical design of a machine has to account for how the human body is built, what kinds of controls best match the capabilities of the human hand, ear, foot, etc. These early telephones required two hands and a great deal of manual dexterity to press knobs, turn cranks, hold ear- (and sometimes mouth-) pieces. But as readers of interactions know, advancements in industrial design must be informed by principles of human cognition that make using such devices something that we don't have to think about very much. Central to this process is an understanding of the role of mental and conceptual models in interaction design.
Donald Norman began this discussion for the current generation of interaction designers with the simple axiom, "A good conceptual model allows us to predict the effects of our actions." In Norman's view, systems should present accurate images of how they work, whether through the controls in the interface, or through accompanying marketing material, instruction manuals, diagrams, support sites, etc. Problems occur when the conceptual model is faulty, when the system's image of itself is inaccurate. This often results when marketers or technical writers attempt to simplify for consumers the intricate details of a design. When the system breaks down or doesn't perform the way a user expects, the inaccurate system image reveals a fissure between the user's understanding of how a system works and how it actually does work. In Norman's classic example, he has trouble adjusting the temperature of his refrigerator compartments: The system image indicates that the freezer and cooler compartments can be controlled independently, when in fact there is only one cooling unit, with controls that direct more or less cool air to one section or the other. He finds it impossible to alter the temperature in one compartment without affecting the other compartment as well, and this contradicts the conceptual model presented by the refrigerator's system image: the diagram on the refrigerator itself.
What must be emphasized in any discussion of conceptual models is the knowledge that users bring with them from previous experiences with mechanical devices and information systems. When humans engage in learning, they attempt to assimilate the new experience into previously formed mental representations of reality. If the new experience matches those representations, learning occurs more easily. If there is a mismatch, real learning can occur only when the person alters his or her mental representations of reality in some way to accommodate the new experience. One can see how damaging a false system image can be to this process: Rather than strengthening a person's understanding of the world, the new learning experience corrupts it. We must always consider the previously formed mental representations of the world that people bring with them to new experiences with mechanical devices and information systems, for these inform their perception of the system's image and therefore the conceptual model they perceive as part of their experience.
Telegraph operators of the 19th century would have understood Bell's box telephone immediately. Because of their prior experiences, they had already built mental models that could assimilate the idea of electrical current traveling along a wire; the need for a closed circuit between two devices on the wire and the need to interrupt that circuit under certain circumstances; and the ability to convert electrical pulses into something else at the ends of the wire, such as the swings of a needle on a galvanometer, the series of dots and dashes in Morse code, or sound waves in the telephone. Telegraph operators also would have understood the knob and crank of the Bell Box as a way of generating current along the wire and ringing a bell to indicate at the other site that a response is requested. The plugs to open and close the circuit would likewise fit well with their mental picture of such instruments. For those who had prior experiences with it, the telegraph served as a valuable antecedent in the new experience of using the early box telephones.
But to people of the 1870s who did not have prior intimate experience with the telegraph, the new telephone was magical and frightening. Merritt Ierley notes, "the reaction was confusion or disbelief. Many people were apprehensive confronting a telephone for the first time. The disembodied sound of a human voice coming out of a box was too eerie, too supernatural, for many to accept." Only when the telephone became understood as a "speaking telegraph" did the masses become more comfortable with it. But this is merely another way of saying that people learned to assimilate the telephone into their established mental picture of how such devices work. And this could happen only when the everyday operation of the telephone became more like everyday uses of the telegraph. Ordinary people did not operate a telegraph machine. They handed their messages to the clerk, and he or she sent them along the wires. Likewise, the maturation of the telephone (the improvements in design that made it useful as an everyday appliance) took much of the operation of the apparatus out of the hands of the user, making it much easier for ordinary people to learn how to make a telephone call. Even the character of the first telephone conversations was determined by users' prior experience with the telegraph. Explaining their brevity, Ierley writes: "the telegraph was understood to be a medium for short, to-the-point, business-like messages. So too, it seemed, the telephone."
Our goals have not changed in the past hundred years and more. We want to write, to communicate, to buy and sell goods and services, to move from one place to another, to understand a problem, to make good decisions. But the technologies that help us achieve these goals do change, sometimes quite rapidly. They are "contingent" technologies in at least two important and connected ways. First, our ability to learn and use new technologies is contingent upon our experience with prior technologies. On a computer, for instance, each time we learn a new interaction idiom such as drag-and-drop, or double clicking, or scrolling, we adopt new ways of understanding how software applications and hardware devices work. We compare these new experiences to past ones; we recognize solutions to problems that previously puzzled us; we assimilate the new experience and store our new understandings so that they will serve as helpful antecedents in future encounters with unfamiliar technologies. We create new mental models that we carry with us, to help us predict how other software and devices will work when we encounter something slightly different or new. Our ability to learn and use something new is then (again) contingent upon our ability to apply antecedent experiences in productive ways. We "run the model," and we hope that the new technology operates according to the same rules that controlled the previous experience. In this fundamental way, our ability to use new technologies is contingent upon our prior experiences with other technologies.
But new technologies don't always follow the same principles and patterns as previous ones, and this is the other way in which they are contingent. They change. They take one form today, another tomorrow. They use language in unexpected ways and deploy idioms and metaphors inconsistently and confusingly. It becomes difficult to refer to antecedent experiences in order to learn how to use something new. There is too much incompatibility between our previously formed mental models and the new system before us. An online game that appears to allow direct manipulation in fact requires a memorized sequence of keystrokes. A line of text that looks like a hyperlink actually requires a double-click to activate. A custom-designed scrolling widget uses unconventional horizontal arrows in addition to vertical ones, thus making it difficult to access much of the content in an application. This kind of contingency in the technologies we use does not further our understanding. Rather, it makes it more difficult to learn new technologies because it introduces inconsistency and randomness, making it impossible to predict the effects of our actions. It hinders our ability to develop reliable mental models about the workings of the digital world in which we live and work.
A common example of the problem of contingency can be found at the checkout aisle of any grocery store or discount superstore. For several years now, people who use credit cards for purchases in such places have encountered a confusing interaction: After swiping our cards, we are prompted for our Personal Identification Number (PIN) to authorize the transaction. Observing our bewilderment, helpful clerks will ask the now familiar question, "credit or debit?" Answering "credit," we are then instructed to press the cancel button and proceed as usual. Sure enough, pressing cancel sends the process request to the bank and moves the transaction along, culminating in a request for an ink or digital signature. Sometimes pressing cancel leads to an intervening screen that again offers the choice of credit or debit. Today "Press Cancel to Proceed" is everywhere. Those of us who habitually pay with credit cards have accommodated this new aspect of reality to our mental representations of how the systems work. We now do it automatically, without bothering the cashier. But this is not legitimate new knowledge that will help users of information systems with other interactions. In every other situation, cancel means abort the procedure, stop, don't do anything.
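The mismatch is easy to state in code. The sketch below is a hypothetical illustration in Python (no real terminal's software is being quoted) contrasting what a user's prior experience predicts the cancel key will do with what these terminals actually do.

```python
# A minimal sketch of the mental-model mismatch described above.
# The class, states, and method names are hypothetical illustrations,
# not any real payment terminal's software.

class CheckoutTerminal:
    def __init__(self):
        self.state = "AWAITING_PIN"
        self.authorization = None

    def press_cancel_as_users_expect(self):
        """Everywhere else the user has ever pressed it, cancel aborts."""
        self.state = "ABORTED"

    def press_cancel_as_implemented(self):
        """On these terminals, cancel skips PIN entry and moves the
        credit transaction forward to signature capture."""
        self.authorization = "signature"
        self.state = "AWAITING_SIGNATURE"


terminal = CheckoutTerminal()
terminal.press_cancel_as_implemented()
print(terminal.state)  # AWAITING_SIGNATURE: the sale proceeds
```

The same button name maps to opposite transitions depending on which machine you are standing in front of, which is exactly the randomness that prevents antecedent experience from doing its work.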
Banks and stores have many reasons to favor (and default to) one form of payment or another. But the inconsistency in this interaction from store to store makes it clear that customer confidence and understanding about the transaction process is not high among them. The fact that "credit or debit" often really means "signature or PIN authorization," but the consumer rarely is informed of the implications of each, is further evidence that these systems are not designed to meet the user's needs . To further complicate matters, some banks encourage customers to choose credit at checkout, even when using a debit card. That is, they encourage signature-based authorizations over PIN-based ones, because signature-based authorizations are more profitable for the bank issuing the card. But they often disguise this fact with arguments about security and purchase protections.
One credit union, for instance, offers this advice on its Visa/ATM FAQ page: "Take advantage of all of the benefits of your VISA Check Card by always selecting 'credit.' The funds will still be withdrawn directly from your checking, and you will receive the purchase protection of VISA, a service that does not apply if you choose 'debit' ." In other words, choose credit when you want debit. For those who use a debit card instead of a credit card as a way to control spending, it is probably clear enough that their debit card will subtract funds from the checking account regardless of the authorization method. But the fact that the choice of authorization method is disguised as a choice of payment account (credit or checking) is damaging to the customer's efforts to build an understanding of the way their everyday transactions work. The misleading system image confounds learning and understanding.
If you have made the necessary mental accommodation and you understand why you should select credit when you want debit, you might think you understand what happens when you enter a PIN to authorize a credit transaction when that is the default at the checkout. You simply use the same PIN that allows you to take a cash advance from your credit card at an ATM, for example. You are merely choosing to authorize the transaction with your PIN rather than with your signature. But sometimes "credit or debit?" does mean just that: Some credit cards are actually "dual access" cards and can be used as debit cards attached to checking accounts at the issuing bank. The cards themselves do not indicate this dual functionality, and the system provides no indication that in choosing to authorize the transaction with a PIN instead of a signature, the customer is actually requesting that the funds be taken from her checking account rather than her line of credit. The customer has every reason to believe that she is making a credit card purchase, but her choice of PIN-based authorization makes it a debit transaction instead. And of course it is overly generous to call this a "choice," since the system never actually presents the option of choosing debit instead of credit.
This is not the place to completely redesign the point-of-sale interaction, but we can point out what is missing from the current system and how some basic principles of interaction design can improve it. First and foremost, the system needs to present an accurate picture of which account one's money is being taken from. "Debit or credit?" should present a choice of accounts, not authorization method. If the industry needs to support multiple authorization methods, it should educate the customer at the point of sale about those choices: the screen should prompt us to "enter PIN or sign below to authorize." If PIN authorizations are less secure but transfer funds immediately, and signature authorizations carry more protections but take a day or more, the system should present that information in close proximity to the buttons on the screen. If the account the user selects has insufficient funds for the transaction, the system should present that information and present the option of continuing anyway (if the bank permits it), and should indicate how much will be charged for an insufficient-funds overdraft. These modifications improve the visibility of the system, as Norman defined this term: by making the correct controls visible and by making them convey the correct information . They enhance the feedback of the system by informing the user of what actions have been taken and what the consequences of further actions will be. By improving the visibility and feedback of the interaction, they help users build a conceptual model of the entire process that will help them understand when things go wrong, such as when funds are taken from the wrong account, or an unexpected overdraft occurs. Most important, this more robust conceptual model will serve to make future electronic financial transactions more comprehensible.
Some retailers have done a better job than others with their point-of-sale payment systems. Some, for instance, allow the customer to select credit or debit after swiping their card, instead of defaulting to a PIN screen. The Giant Eagle grocery store in my area does this. And to its credit, it replaced the point-of-sale devices as I wrote this essay, and the new system works exactly the same way. My local Lowes, on the other hand, changed from a system that offered the credit-or-debit choice to one that assumed debit/PIN by default and required the credit customer to press cancel to proceed. Others have eliminated the requirement to authorize payment entirely (and thus to choose an authorization method), typically for transactions below a certain amount. (Panera Bread is an example of a national company that does this.) We can hope that the less desirable interactions soon will be replaced by better ones; indeed, in some areas this transition to understandable interactions is occurring quite rapidly. But this is just another part of the problem: Implementations that are temporary or contingent, that change overnight without warning, make it more difficult for people to become habituated to the interaction and to build strong mental models to help them understand what is happening in a typical transaction.
The common cordless home telephone presents another case study in how inattention to the concepts of feedback, visibility, and mental and conceptual models can confound users. A standard analog telephone is already "on" when you lift the handset: The perfectly pitched dial tone provides feedback that the system is ready to receive one or more numbers as input, and a slightly differentiated series of tones provides additional feedback as one enters the numbers. Modern cordless telephones, however, are not "on" in the same way when you pick them up: There is no dial tone. Theoretically, this should not be a problem, as users can begin dialing immediately, just as they would with an older phone, and then "send" the number in some way. But they do not have the audio feedback of the dial tone that the older sets used to indicate that the system was ready to receive a number. The silence of the handset contradicts the user's mental model of the way a telephone operates, which is carried over from the analog telephone experience.
Most new users in this situation look in vain for an "ON" button and might be encouraged by finding an "OFF" button prominently featured near the top of the button pad. Alas, the corresponding button is usually "TALK," not "ON." "TALK" just is not the correct label for these cordless handsets because it never makes sense: not before I've dialed (how can I talk if I haven't dialed yet?); and not after I've dialed (how can I talk if the number hasn't been sent yet?). But instead of focusing as a community on this interaction design challenge, the makers of these handsets experiment and change the design as often as they like, with no observable progress toward the best solution.
An inspection of the four bestselling cordless telephones at Best Buy in early 2008 reveals that none of them uses the same interaction methods to accomplish the basic task of initiating a telephone call. Common to each design is a directional wheel with a prominent button to the left and rightbut there the similarities end. The following table compares the design of these top models:
The absence of an ON label, or a picture of a handset, or at least the color green, makes it difficult to infer how to turn on the AT&T model. Conversely, the absence of an OFF label, or a picture of a cradled handset, or the color red, makes a puzzle out of turning off the GE model. The Uniden model's use of color in combination with images of handsets provides the best immediate indication of how to initiate a call. But then those ambiguous icons on the directional wheel are bound to frustrate a user at some point in the conversation. Ironically, three of the models use some representation of a handset on the button that initiates the call, but the depicted handset harkens back to the modern analog Bell telephone. This is particularly evident in the models that use a cradled Bell handset to indicate which button ends a connection. The designers of these buttons appear to know that users learn best when they can compare something new to something familiar. Sadly, they also know that they can't rely on depictions of today's ever-changing digital phones to provide any shared foundation of experience on which to build.
We live in an increasingly digital world, and it is also an increasingly unusable one. Making a telephone call and paying for something at the store are mundane aspects of everyday life. But unless we can accomplish these mundane tasks without giving them much thought, we will be hindered in realizing our higher aspirationssuch as communicating to ourselves and to others about who we are, what it is we wish to accomplish in life, how we want to change the world. Both the ordinary and the exceptional aspects of modern existence are increasingly connected to technological systems. Unless we are able to design systems that encourage us to build upon our experiences with them from one day to another, we will fail both ourselves and those who come after us.
Charles Hannon is associate professor and founding chair of the information technology leadership department at Washington & Jefferson College in Washington, PA. He teaches courses in human computer interaction, the history of information technology, data presentation, and project management, among others. He is the author of Faulkner and the Discourses of Culture. More recently, he has published widely on the role of educational technologies in higher education. His current book project is Usable Devices: Mental and Conceptual Models, and the Problem of Contingency.
The Amazon Kindle received its share of criticism for "Next Page" and "Prev Page" buttons that are too easy to press unintentionally. But it is the Kindle's "Back" button that illustrates the fissure that opens up when there is a mismatch between mental and conceptual models. How does "Back," on a device that promotes itself as a book substitute, fit into our mental models of reading?
The Kindle's user guide states that readers can use Back to return to their book or magazine after briefly looking up a word, highlighting text, or following a footnote: "Pressing the Back button, located to the right of the select wheel, will bring you back to where you were." But the nature of hypertext is fundamentally associative. Once we have linked away from our book in the Kindle and looked through several pages in several new texts, what does it mean to go "back" to where we were?
A reader leaves a book to browse the Kindle store. Once there, she selects a top-level category, such as blogs, then a subcategory, like news, politics, and opinion, and then uses Next Page and Prev Page to look through several pages of available blog subscriptions. Where should Back now take her? To the home page of the Kindle store? To the page listing all blog subcategories? What about back to the page of the book she was reading? In this scenario Back takes her to the list of blog subcategories, from which she had previously selected news, politics, and opinion. But it is difficult to imagine a rationale for this behavior that would help her predict the results of using Back in future contexts.
The Kindle's Back button represents a collision between what we have learned in the past 500 years about reading books, and what we know so far about hypertext. Consider what the Kindle user's guide says about underlined words in a text: "They indicate a link to somewhere else in the material you are reading like a footnote, a chapter, or a web site." A typical mental model of book reading includes the concept of a footnote and a chapter as being "somewhere else in the material you are reading," but a website is something else altogether. We put down our books when we look something up on the Internet. If we are reading online, we know we are leaving the current text when we link to another site.
Back is a flawed concept for the Kindle because it mixes two separate mental models of reading (Web and print), and because Back is hardly a settled concept for Web-based interactions in the first place. Initially, Back was simple to understand and use. The predictability of Back and Forward made risk taking more acceptable for new users. But the No. 1 "design sin" in Jakob Nielsen's "Top Ten Web Design Mistakes of 1999" was the breaking of Back by coding links so that they open new browser windows or redirect users to the undesired page. Over the years, Back has been degraded further by the use of frames; by forms that send a user's information to a server for processing; by websites that use Flash animations for navigation; and by rich Internet applications that process information within the context of a single URL.
These "contingent" implementations make it difficult for users to develop mental models that will allow them to predict what will happen when they use a Back button, whether on the Web, the Kindle, or any other device.
©2008 ACM 1072-5220/08/0900 $5.00
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2008 ACM, Inc. | <urn:uuid:6207b8b3-5b86-46e3-87cb-7d2f5bd20a58> | CC-MAIN-2021-21 | https://interactions.acm.org/archive/view/september-october-2008/featuremental-and-conceptual-models-and-the-problem-of-contingency1 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00536.warc.gz | en | 0.949497 | 5,123 | 3.140625 | 3 |
M.Goodall, A.Tagg British Aircraft before the Great War (Schiffer)
Deleted by request of (c)Schiffer Publishing
P.Lewis British Aircraft 1809-1914 (Putnam)
Dunne D.6, D.7 and D.7bis
Following the successful flights at Eastchurch of his D.5 tailless pusher biplane in 1910, Lt. J. W. Dunne embarked upon the design of a new tailless monoplane. The wings were set high in the parasol position, and the D.6, as it was designated, was derived from the model monoplane which Dunne had submitted in support of his original proposals to the War Office in 1905. These were refused, and he was persuaded to adopt, instead, the biplane form, a decision which, he stated many years later, he felt to be correct in the light of subsequent experience.
The same form of sweptback wings was employed as in the earlier aircraft built, but some minor modifications were incorporated. The degree of sweepback was increased slightly, and an interesting innovation was the alteration of the camber of the wing section which changed continuously from the leading-edge at the roots to the trailing-edge at the tips. Also in the interests of inherent stability, the wing-tips were the subject of pronounced wash-out, and their final few feet curved sharply downwards outboard of the centre of the ailerons to provide side area in the absence of fins or rudders. The wings were above an open wooden framework which formed an uncovered fuselage carrying the single seat at the front, with the 60 h.p. Green engine and its 7 ft. 3 ins. diameter propeller at the rear. The water radiator was fitted vertically above the centre-section in an effort to keep the centre of gravity as high as possible. The entire machine was supported in a horizontal position on an undercarriage comprising two pairs of wheels combined with long, curved skids, at the rear of which were fitted shorter, sprung, shock-absorbing tail-skids. The ailerons, which operated either as elevators or rudders, were controlled independently by two levers from the pilot's position.
The D.6 was built by Short Brothers under the sponsorship of the Blair Atholl Aeroplane Syndicate, and the test flying was carried out by Dunne himself, who concentrated on the experimental flight trials of the D.6, the D.7 and the D.7bis from 1911 until mid-1913, surviving four major crashes during the process. N. S. Percival made two attempts to fly the D.6, but was not successful with it.
Col. J. E. Capper was interested in the design and ordered for himself a slightly smaller 50 h.p. Gnome-engined single-seat version, which was designated the D.7 Auto-Safety and was ready for display at the 1911 Olympia Aero Show. The span was 35 ft. and the wing area totalled 200 sq. ft. Empty and loaded weights were 1,050 lb. and 1,409 lb. respectively, and a 60 m.p.h. maximum speed was achieved. During June, 1911, the D.7 was put through its tests at Eastchurch, Isle of Sheppey, and on 12th January, 1912. Dunne flew the machine before Alec Ogilvie and T. O'B. Elubbard without, for a period, using either his hands or feet on the controls.
In 1912, the original D.6 single-seater was converted into the D.7bis two-seater and was given the extra power of the 70 h.p. Gnome engine to cope with the additional weight. The wing was remodelled to match that of the D.7, with a span of 35 ft. and an area of 200 sq. ft. The machine weighed 1,200 lb. empty and 1,728 lb. loaded, and had a maximum speed of 60 m.p.h.
Description: Single-seat tailless pusher monoplane. Wooden structure, fabric covered.
Manufacturers: Short Brothers, Leysdown, Isle of Sheppey, Kent.
Power Plant: 60 h.p. Green.
Dimensions: Span, 36 ft. Length. 21 ft. Wing area, 230 sq. ft.
Jane's All The World Aircraft 1913
DUNNE. The Blair Atholl Aeroplane Syndicate, Ltd., 1, Queen Victoria Street, London, E.C. School: Eastchurch. In 1906 Lieut. Dunne was employed by the British Army authorities for secret aeroplane experiments. He had at that time patented a monoplane of arrow type. In 1907 Dunne I was tried on the Duke of Atholl's estate in Scotland, but failed to fly, being smashed on the starting apparatus. Dunne III, a glider, 1908, was experimented with successfully by Lieut. Gibbs. In the same year Dunne IV, a larger power driven edition made hops of 50 yards or so. Early in 1910 the War Office abandoned the experiments. Dunne II, a triplane of 1906 design, was, by consent of the War Office, assigned to Prof. Huntingdon, who made one or two short flights with it at Sastchurch in 1910. At the same time the above syndicate was formed, and Dunne V, built by Short Bros., was completed in June, 1910. In 1912-13 the Huntingdon, modified, was flying well.
1912-13 1912-13 1912-13 1912-13
Model and Date. single-seat 2-seater biplane biplane
mono. mono D8. D9.
Length........feet(m.) <i>not given</i> ... ... ...
Span..........feet(m.) 35 (10.66) 85 (10.66) 46 (14) 45 (13.70)
Area......sq.feet(m?.) 200 (18.5) 200 (18.5) 552 (51) 448 (42)
............lbs.(kgs.) 1050 (476) 1200 (544) 1700 (774) 1693 (768)
............lbs.(kgs.) 359 (161) 528 (230) 414 (187) 509 (231)
Motor.............h.p. 50 Gnome 70 Gnome 60 Green 80 Gnome
Speed......m.p.h.(km.) 60 (95) 60 (95) 45 (70) 50 (80)
Number built during 1912 1 1 1 5
Notes.--Biplane D 8 is identical with the original pattern Dunne V, except that it has only one propeller instead of two. It has been flown completely uncontrolled in a 20 m.p.h. wind, carrying a R.Ae.C. observer as passenger.
Flight, April 1, 1911.
Third International Aero Exgibition at Olympia - 1911.
THE EXHIBITS ANALYSED.
A monoplane that is altogether in a class by itself is the Dunne, which is, so far as practical flying machines are concerned, an evolution of the Dunne biplane. The biplane was in itself, however, originally evolved from still earlier monoplane models. A characteristic feature of this machine is the absence of a tail and the V-plan form of the wings, which also have a varying angle of incidence from root to tip. The object of the design is the acquisition of natural stability, and the purpose of sloping back the wings is to acquire an overall length for the machine as distinct from the chord dimension. This increment in length virtually introduces the principle of a tail, and the change in the angle of incidence throughout the succeeding sections of the wings confers the principle of the dihedral angle on the relative attitude of the virtual tail portion in respect to the central leading portion of the machine.
Flight, April 8, 1911.
MORE AEROPLANES AT OLYMPIA.
WITH SPECIAL REFERENCE TO THE DUNNE MONOPLANE.
Among the monoplanes, that which will probably attract the greatest interest among our readers is the Dunne, for all will recollect what a number of interesting features the Dunne biplane, which we described some time ago, possessed. The Dunne monoplane, like its prototype biplane, is designed to possess natural stability, and is tailless in the ordinary sense of the term. In principle, however, the V-plan form of its wings gives it two tails instead of one, and the hinged flaps in the trailing extremities of the wings provide it with two elevators instead of one. These flaps are under independent control, and serve the purpose of steering the machine horizontally and vertically. The principle of stability associated with the Dunne monoplane is somewhat complicated, and has to do entirely with the special formation of the wings, which are generated on the surface of a cone. This is not the place in which to go into precise details of this method, which is fully described in our article on the Dunne biplane; but it will be interesting to those familiar with that description to be told that the apex of the cone is altogether in a different place, being situated, on the monoplane, a little way behind the trailing extremity of the wing and more or less directly in line with the outside edge. This formation of the wing gives a variable angle of incidence from shoulder to tip, which, in conjunction with the V-plan form, confers on the machine the principle of the fore-and-aft dihedral angle, which is one of the accepted methods of obtaining natural stability and is a characteristic feature in the design of all successful aeroplanes. Owing to the wing extremities being situated in an exposed region and not sheltered behind the middle portion of the plane, as is more or less the case with the tail of an ordinary aeroplane, Mr. Dunne claims that their tail effect is enhanced. Also the same argument applies to the efficacy of the dihedral angle, because, owing to the formation and continuity of the wings, it is impossible to define what part constitutes main plane and what part tail. That in fact the relative functions of these members are performed by different parts of the wings in accordance with the requirements of the moment. Lateral stability in the Dunne monoplane is somewhat more difficult to explain, but that which is the most significant feature in the design is unquestionably the fact that the wing formation provides down-turned wing tips as distinct from the upturned wing tips on such monoplanes as the Handley Page and Weiss, which are also designed more or less with a view to natural stability. It will be noticed, of course, that it is the leading edge of the Dunne monoplane that is turned down, whereas in the Handley Page and Weiss monoplanes it is the trailing edge that is turned up, so that the relative positions of the leading and trailing edges in all three machines are identical. On the other hand, it will be observed that there is a very material and fundamental difference in principle between the two methods, for whereas in principle the upturned trailing edge represents the lateral dihedral angle, the down-turned leading edge represents the gull's wing, which is an accepted method of obtaining lateral stability in side gusts. 
The general action is as follows: A side gust ordinarily lifts that side of the machine against which it first strikes, because of the aeroplane action of the planes considered in their attitude towards the gust and the consequent travel of the centre of pressure towards the virtual leading edge facing the gust, which involves an actual travel of the centre of pressure laterally from the real centre of gravity of the machine. Thus the machine cants over and the upset is emphasised with the dihedral angle, because the upturned wing offers an increasing surface for the more effective surface to the gust and tends to counteract the lift due to the travel of the centre of pressure on the remainder of the plane. It is, in principle, little more or less than this idea which was tried by the Wright Brothers in some of their early gliding experiments. Like most things of this kind, however, there is all the difference between the broad principle and the detail of carrying it into effect on a practical machine. It is the detail that makes the Dunne monoplane such an original design.
Flight, June 24, 1911.
THE DUNNE MONOPLANE, 1911.
As our readers are already thoroughly familiar with the general features of the Dunne system from our description of the biplane in FLIGHT, June 18th, 1910, it is unnecessary to make any elaborate reference to the monoplane that is now undergoing its trials at Sheppey. This machine, as a glance at the accompanying illustrations shows, has the same general type of wings, but a point that will escape casual observation is that the camber of each is generated on the surface of a cone having its apex in the vicinity of the trailing extremity, whereas it may be remembered that the generating cone used in connection with the wings of the biplane had its apex in the vicinity of the prow of the machine. The meaning of this reference to the generating cone will be understood by those who read our description of the Dunne biplane, but for the sake of those who are unfamiliar with the principle, we may briefly explain that the characteristic feature of the Dunne wing formation is that the camber changes from point to point between shoulder and tip. This change takes place both in camber and attitude (angle of incidence) and is gradual in character; it is represented by the change of curvature on the surface of a cone arranged in a special way with respect to the setting of the wings. For a complete explanation of this particular point, however, we must refer our readers to our above mentioned article.
Since the tips are set back behind the shoulder, owing to the V plan form of the wings, the change in angle between shoulder and tip introduces the principle of the longitudinal dihedral. In order to render this clear it is convenient to imagine that the middle section of each wing is removed. In this case the extremities form two tails at a negative angle in respect to the leading main plane. In practice the extremities act as tails, and being out of the influence of the draught of the propeller they do not tend to disturb the balance of the machine if the propeller stops in flight. As to where the tail portion begins and the main plane ends, it seems impossible to say, for it seems only reasonable to suppose that the dividing line varies with circumstances. Provided that it moves in the right direction, this differential action is, of course, all to the advantage of the natural stability of the machine.
Natural stability is the great aim, we might almost say the raison d'etre, of the Dunne aeroplane, and, so far as the longitudinal stability is concerned, the simple principle of the fore and aft dihedral is apparently a sufficient explanation of the system. In many modern machines the principle of the dihedral is also used for lateral stability, but in the Dunne machine this equilibrium is arranged in a different way. As a glance at the accompanying illustrations shows, the wings are arched rather than upturned, and it is therefore to the principle of the gull's wing, and not to the dihedral angle, that the lateral stability of the machine is due.
Unfortunately, this principle does not lend itself to any very precise explanation, but as a general description it may be pointed out that the down-turned extremities are so arranged that if the relative wind veers from an initial position, which may be assumed to be in the line of flight, the near wing will be partially shielded and may even have a downward pressure on its extremity. Simultaneously, the downturned extremity of the far wing will be more exposed and will thus exert a greater lifting force at its full leverage.
The first tendency of the veering wind is to lift the near wing, owing to the improved aspect ratio of that wing and to the diminished aspect ratio of the far wing, which is also possibly shielded somewhat by the body of the machine; it is this disturbing force that is counteracted in the manner just explained. As the equilibrium of the machine depends on the nicety with which one force just balances another it can be understood that the exact design of the wings is rather a difficult matter.
It will be observed, from what we have said, that the disturbance and correction thereof are simultaneous and are both brought about by the relative wind itself, without change in the position of the machine. In the theory of the dihedral angle, the machine is assumed to heel over in order to obtain a righting force. This difference in the actions of the two types of machine seems to draw a line between two types of stability, which may be described as "stiff" and "rolling." The Dunne principle belongs to the former, inasmuch as the machine is not supposed to be actually moved at all by the disturbing influence.
From a constructural point of view, the Dunne monoplane is mainly interesting on account of its dissimilarity in general appearance to any other well-known type. The wings form a canopy over the pilot, who is seated in the bows of a shallow body that carries the engine at its after end. The propeller revolves immediately behind the V of the wings, and its axis is, of course, in line with the centre of gravity of the machine. Above the wings is the radiator, which is placed there principally to raise the centre of gravity as high as possible. The entire machine is carried on a simple wheel-skid under-carriage.
The control of the machine is effected by two levers, which are quite independent, and control the hinged wing-tips in the trailing edges of the planes separately. These flaps serve the dual purpose of elevator and rudder, for when they are both moved simultaneously in the same direction they alter the attitude of the machine, and thereby cause it to climb or descend; but when moved in opposite directions, or when one of them is moved alone, it is equivalent to rudder action, because it alters the resistance to motion and thus tends to accelerate or retard that extremity of the plane, so that the machine alters its course.
Flight, June 22, 1912.
THE DUNNE MACHINES IN FLIGHT.
WE publish this week, two or three photographs taken by Miss Dunne of her brother's machines at the Royal Aero Club's Eastchurch flying grounds. Everyone will rejoice to hear that Mr. Durme has recovered from his very serious illness, and is now back at work again. Not only is Mr. Dunne himself flying at Eastchurch, but Capt. Carden, R.E., as our readers know, has been making the best sort of progress, passing for his brevet last week, and Capt. Carden, as some of our readers may not know, has the misfortune to have lost an arm, wherefore his practice with the Dunne machine is worthy of very special attention.
Two of the photographs show the biplane in flight, and both illustrate very clearly the V plan of the wings, from which, in conjunction with the peculiar variation in camber from shoulder to tip, is derived the high degree of natural stability that this flyer has always claimed to possess. It has flaps at the extremities of the main planes, but these are for the purpose of steering and elevation only; they are independently operated by separate levers, one on each side of the pilot, which adds to the significance of Capt. Carden's performances.
The monoplane, which is illustrated with Mr. Dunne in the pilot's seat, is built on the same principle as the biplane, but the absence of the lower plane gives it a very extraordinary appearance. We have heard other pilots describe the flying of this machine as revolutionary, and certainly it may be taken for granted that the Astra Co. of France would not have taken up the French rights and be making preparations for building these machines in their own country if they did not think a great deal of them. In the early days of motor cars, it will be remembered, all the good things came from France in the first instance, but the tide turned at last. Let us hope that it may do so in aviation, and long may men like J. W. Dunne, who are devoting the best of their lives to the cause, be spared fully to achieve the ends they have in view. | <urn:uuid:7380ef5d-15fe-4c25-8e74-6ba2d686419d> | CC-MAIN-2021-21 | https://flyingmachines.ru/Site2/Crafts/Craft28544.htm | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991378.48/warc/CC-MAIN-20210515070344-20210515100344-00177.warc.gz | en | 0.959523 | 4,266 | 3.09375 | 3 |
King Arthur's Death
By Michael Smith
An epic poem of the fall of kings, vibrantly translated and stunningly illustrated with linocut prints by the author of Unbound’s Sir Gawain
Publication date: February 2021
Pledge rewards include:
- Book Jacket Artwork
- Bear and Bees (only 5 available)
- White Hart of Richard II (only 5 available)
- Doggy Days in Summer (only 5 available)
- King Arthur and Excalibur - Giclée Colour Print (only 15 available)
- Hand-Painted Shield: The Viscount of Valence (unique: only one available)
- Arthur's Last Battle ORIGINAL Linocut Print
- Set of All Five Animal Prints (only 5 sets available)
- King Arthur and Excalibur - ORIGINAL Linocut Print (already fulfilled; only two available)
- The Siege of Metz (only one available)
King Arthur’s Death (commonly referred to as the Alliterative Morte Arthure) is a Middle English poem that was written in the north of England at the end of the fourteenth century. A source work for Malory’s later Morte d’Arthur, it is an epic tale which documents the horrors of war, the loneliness of kingship and the terrible price paid for arrogance.
This magnificent poem tells of the arrival of emissaries from Imperial Rome demanding that Arthur pay his dues as a subject. It is Arthur's refusal to accept these demands, and the premise of foreign domination, which leads him on a quest to confront his foes and challenge them for command of his lands.
Yet his venture is not without cost. His decision to leave Mordred at home to watch over his realm and guard Guinevere, his queen, proves to be a costly one. Though Arthur defeats the Romans, events in Britain draw him back where he must now face Mordred for control of his kingdom – a conflict ultimately fatal to the pair of them.
Combining heroic action, a probing insight into human frailty and a great attention to contemporary detail, King Arthur’s Death is not only a lesson in effective kingship, it is also an astonishing mirror on our own times, highlighting the folly of letting stubborn dogma drive political decisions.
Chivalry exposed by the horrors of war
The Unbound community will already be familiar with my translation of Sir Gawain and the Green Knight (now published). Like Gawain, it tells its readers much more than appears at first glance…
Whereas Gawain focuses on themes of religion, duty and chivalric behaviour, Arthur concentrates on the frailty of kingship, the depravity of men and the martial duties of knighthood.
Combining pace, grip and passion, King Arthur's Death has epic scale and sweeping scope. Yet it does not dwell on the courtly love and mythical angles so typical of the French romances of the period. Instead, by contrasting courtly politesse with the brutal horrors of war, it highlights the delusional vanity of the chivalric ideal and the terrible impact of poor decisions.

Indeed, King Arthur's Death is almost an antithesis of the Arthurian romance, boldly written and with a profound anti-war message hidden amongst its sweeping narrative. It is as if the poet himself is weary of the Hundred Years War, the backdrop to his life and times, and is calling on princes to show greater judgement and compassion for their people.
Help King Arthur’s Death be told anew!
As with Sir Gawain and the Green Knight, my approach to translation centres on re-casting the words of the original in such a way that, if the poet came back today, the language, flow, alliteration and metre would all be relevant to him. Yet it must also flow naturally for the modern reader.
As with Gawain, the book will not only be accessible, but will also examine why King Arthur's Death was written and what it aimed to achieve. It is a translation, but also an interpretation, pointing the reader to new areas of learning and different ways of thinking about what lies behind story and myth.
Again, it will be illustrated throughout, containing a wide range of linocut prints featuring scenes from the poem and also, in the introduction and notes, pen-and-ink drawings based on contemporary manuscripts. As before, all of these will draw on extensive research and be reproduced in the style of the fourteenth century.
Readers of Gawain will know that I aim to remain true to the original form of the poem. My focus will be on capturing its pace, punch and brevity of form which, I hope, will bring alive this magnificent poem once again - a poem which many have claimed to be the very work that inspired Malory.
King Arthur’s Death can only happen with your help. Please do pledge for this brand new, illustrated translation of this epic poem and let its voice speak to us again!
In these changing times, its message demands to be heard anew.
Michael Smith comes from Cheshire and read history at the University of York, specialising in English and European mediaeval history. In later years, he studied as a printmaker at the Curwen Print Study Centre near Cambridge. His first book, a translation of Sir Gawain and the Green Knight, was published by Unbound in July 2018. You can find out more about his mediaeval-themed art and printmaking at www.mythicalbritain.co.uk.
Excerpt 1: Sir Cador battles with the King of Lybia.
This scene gives a flavour of the poet’s great pace but also of his understanding of the speed of battle and the tactics of warfare in the fourteenth century:
Then Sir Cador the keen as becomes a true knight,
Cries out “A Cornwall!” and fewters his lance,
And strikes straight through the battle on a great steed;
Many strong men he struck by his strength alone.
When his spear was snapped he eagerly sprung
And swept out his sword which never failed him,
That cut swathes most wide and wounded great knights,
And he works in this way to anguish their flanks
And hews at the hardiest halving their necks asunder
Such that all blends with blood where so his horse barges!
Many nobles that lord did bludgeon to death,
He topples down tyrants and empties their saddles,
Then turns from his toils when he thought the time right!
Then the Lybian king cries full loud
At Sir Cador the keen with cruel words:
“You have won worship and wounded knights;
You act for your boldness like the world is your own -
I am right here and waiting sir, by my word;
Hold yourself fore-warned you had better beware!”
With cornet and clarion many new-made knights
Listened out for the cry, casting lance to the fewter,
Forged forth on their foe on steeds like iron
And felled as they first came full fifty at once;
Shot through the schiltrons and shattered lances,
Laid down in a pile great noble lords,
And thus nobly our new men use all their strengths!
But new nonsense is here that saddens me greatly:
That king of Lybia takes a steed that he liked
And acts most lordly with silver-lioned shield,
Surrounds the melee and piles in amongst it;
Many lords with his lance their lives he steals!
Thus he chases the child-knights of the king’s chamber
And kills in the fields those most chivalrous knights;
With a spear for the chase he chops down many!
Updates

4th January 2021: The Battle of Camlan, the last battle of King Arthur - a short film for you to enjoy
I hope you had an enjoyable Christmas and New Year - or the best possible given the ongoing pandemic; these are unpleasant times which are trying us all. Despite this, I hope that you are enjoying your copy of King Arthur's Death which should have arrived with you by now; the book would never have been possible without your support.
Two short films for you to enjoy…

23rd July 2020: Sir Robert Thornton and the writing of King Arthur's Death
King Arthur’s Death – the Alliterative Morte Arthure – survives due to the work of one man, Sir Robert Thornton. A Yorkshire gentleman with the passion of an enthusiastic amateur, he was responsible for copying down many mediaeval poems and stories which, but for his efforts, would long since have disappeared.
The Lincoln Manuscript (MS 91)
His efforts have left us with two significant…

29th June 2020: And we're off!
The day has arrived! I'm sending a brief update to say that as of today, 30th June 2020, King Arthur's Death enters its final stage ready for publication in February next year. My understanding from Unbound is that you, as a pledger and patron of the book, will receive your copies much earlier than this so hopefully the wait will be shorter.
As previously explained, the coronavirus…

26th May 2020: Why Barnard Castle makes Surquedry a word for today
One of the undercurrent themes of the alliterative poetry of the fourteenth century is the notion of "surquedry", a word which is frequently used and is a Middle English equivalent of the French surcuiderie, an overweening arrogance or pride. The poets were critical of this in contemporary mediaeval English government and it has a particular relevance to politics in Britain today.
Historical…

28th March 2020: The Dunstable Swan Jewel and its Connection to King Arthur
After almost a year of suffering from a frozen shoulder, I have at last been able to return to printmaking - just as COVID-19 has forced everything into lockdown. So in the last few days, I have turned Gringolet's garage into a makeshift printmaking studio to produce my first linocut for some time: a four colour print of the Dunstable Swan Jewel in the British Museum…

19th February 2020: Great News - the page proofs have now arrived for King Arthur!
Yesterday the postman arrived with the page proofs for King Arthur's Death - the Alliterative Morte Arthure; this means that (once I have proof-read them), the manuscript enters its final stage: to become a book itself.
The team at Unbound has done a magnificent job in bringing this together. With the cover design, illuminated letters, linocut illustrations - not to mention…

16th January 2020: Monarchical loyalty in the Middle Ages - a key to understanding the King Arthur story
Key to an understanding of mediaeval stories such as King Arthur is the concept of the “Familia Regis” or royal household – a close-knit group of supporters and servants loyal to their monarch. At the same time, the monarch is loyal to his or her followers. It is a symbiotic relationship which, when upset, can have dramatic consequences.
In King Arthur’s Death – the Alliterative Morte Arthure…

21st December 2019: Cover design revealed for King Arthur!!!
I had a wonderful surprise from Imogen, my editor at Unbound, who yesterday sent me the proposed artwork for King Arthur's Death - the Alliterative Morte Arthure. I simply had to share it with you. I am absolutely thrilled with the design, capturing what I see as the core essence of this magnificent fourteenth century work.
I think the designer himself (who worked on my last…

9th December 2019: King Arthur - a story with hidden messages still relevant today
Most writing, whether wilfully or otherwise, often paints a picture of its own time that, as the years go by, begins to lose its context and therefore its original meaning. Thus, with King Arthur's Death (the Alliterative Morte Arthure), it is the duty of the translator not only to tell the story as originally written but also to reveal the context behind the story which confronts us today…

24th October 2019: A progress report on King Arthur - plus a special date for your diary!
This week my illustrated translation of the Alliterative Morte Arthure (King Arthur's Death), reached a new milestone as I received the first copyedit from the editorial team. I'm so pleased to have the same team working on the book as on my translation of Gawain: the questions are probing and compel me onwards to be sure of accuracy!
It's a daunting task to go through this, but a worthwhile…

29th September 2019: Fabulous mediaeval literary inspirations found in Herefordshire
When I am translating - and writing about - the fourteenth century poets who inspire my work, I am also inspired by the world in which these poets lived. Theirs was an environment in which people's lives could be cut short in an instant, when food supplies could collapse, when plague, pestilence and war were never far away. A terrible time, but also a time of beauty and great cultural richness…

3rd September 2019: What did the knights of King Arthur look like?
In our minds, the knights of King Arthur probably resemble a Hollywood melange of extravagant "Technicolor", inordinately-sized and bedecked battle helmets, and hand-to-hand combat on horseback. While there is nothing wrong in this, it is important to remember that an Arthurian romance is set in the time in which it was written. What King Arthur does the poet-writer of the Alliterative Morte Arthure…

22nd August 2019: The Devil comes at night - the loneliness of kings in the Alliterative Morte Arthure
As anyone in a position of leadership knows, being the person at the top is a lonely business. In mediaeval times, to be king or emperor brought with it great power and responsibility but also periods of great doubt. King Arthur's Death, the Alliterative Morte Arthure, addresses this theme in the two dream sequences of King Arthur when, disturbed by his visions, he asks for the opinion of his "philosophers…

20th August 2019: Exciting News about King Arthur!!!
Today a new milestone was reached in my translation of the Alliterative Morte Arthure (King Arthur's Death) - the manuscript and all the illustrations have now been sent to Unbound. The book now begins the next part of its journey - becoming the book which you helped to make happen!
Labour of love
This has been an all-absorbing project over the last eighteen months…

4th July 2019: Recreating mediaeval illuminated letters - a short film
As many of you will know from my translation of Sir Gawain and the Green Knight, I like to illustrate my works with linocut prints. So I have produced a brief film showing how I produce the letters which will appear in the book - from the cutting through to the printing and finally the publication of the work in the finished book!
Art is very important to my approach. I make these linocuts to…

4th June 2019: King Arthur's Death - here's what will happen next...
As you know, last week my translation of the Alliterative Morte Arthure (King Arthur's Death) hit 100% funding and now takes its last steps before being handed over to the team at Unbound to turn it into a book. I thought I would just send a quick update to let you know what is happening next with the book.

- Firstly, as of today, I have completed all the linocut illustrations…

22nd May 2019: King Arthur - a real lesson for Britain today
When I first began work on King Arthur's Death, I was aware that deep within it lay a message for today's leaders too: beware the folly of your own pride. Never did I think, as the year has progressed during my translation and illustration of this work, that its message would become ever more relevant and pertinent.
In today's tumultuous and febrile political environment, King Arthur's…

13th May 2019: Some contemporary secrets revealed by the Alliterative Morte Arthure (King Arthur's Death)
The Alliterative Morte Arthure (King Arthur's Death) is a fabulous poem - not just for its poetic magnificence, but also because of its wealth of contemporary detail. With this in mind, I wanted to embellish an earlier post about its coverage of King Arthur's siege of Metz. So I have produced a small film detailing how and why mediaeval sieges were fought and what the poet might have had in mind in…

26th April 2019: Kingship and spirituality in King Arthur's Death
It is important to see the Alliterative Morte Arthure - the fourteenth century poem we now know as King Arthur's Death - as not so much an Arthurian romance but as a reflection on kingship. This is why it was written: a statement of the conflict between religious or spiritual duty and martial or "chivalric" state politics. It is a mirror on the dichotomy of kingship in the late fourteenth century…

15th April 2019: The mystery and magic of nature and its role in King Arthur
In my work translating and illustrating King Arthur's Death, I have become transfixed by the mastery of the anonymous Arthur-poet. In this brief update, I want to share with you the fabulous way in which he weaves the mystery and magic of nature into his work and how he uses its power to add suspense and mystery to the poem. I have also produced a brief film about this which I'd like to share with…

29th March 2019: A short video bringing alive the 14th Century King Arthur poet
In this brief update, I wanted to share with you a short video I have made giving some context to my forthcoming translation of the Alliterative Morte Arthure (King Arthur's Death). In this small section, I am reading some of the opening lines of the original manuscript; the sub-titles below each page give a flavour of the translation. I hope this conveys some of the flavour of what you can expect…

22nd March 2019: King Arthur is nearly funded!
Last night I was delighted to see that the funding for my translation of the Alliterative Morte Arthure (King Arthur's Death) reached 80% of its final target. This is a wonderful achievement and I wanted to thank you for your support with this project to date. I could not have reached this point without your help.
With now just 20% left to go we're so close to making this…

11th March 2019: The extravagance of mediaeval feasting in the court of King Arthur
Whether during the holy days, or in celebration of weddings or victories, or as part of welcoming a monarch during their travels, feasting for mediaeval royalty and the nobility was often a magnificent display of wealth and a demonstration of the social order. In King Arthur's Death we are treated to a description of mediaeval culinary magnificence so grand that we can hardly imagine it today…

20th February 2019: Making good progress with King Arthur - an update from the author
It's been nearly a year since I began work on translating King Arthur's Death so I wanted to send you a brief update on how the book is progressing and give you an idea of what you can look forward to when it is finally published. Unlike my translation of Sir Gawain and the Green Knight, I did not begin the crowdfunding with the manuscript ready - I started completely from scratch. No words, no…

6th February 2019: Making a four colour linocut print of King Arthur and Excalibur
It has taken me a while but I have at last managed to produce a four colour linocut print of King Arthur wielding Excalibur, which I wanted to offer as a pledge option for supporters of my translation of King Arthur's Death. This article takes you through the process, which in total took between 7 and 10 solid days of hard work...
The basis of my research.
King Arthur's Death, or the…

17th January 2019: The incredibly moving lament of King Arthur for his dead knights
Towards the end of the Alliterative Morte Arthure, the fourteenth century poem I am translating and illustrating for Unbound, we finally arrive at the moment where Arthur fights with, and defeats, the odious traitor Mordred. Here, seriously wounded and on the verge of death, King Arthur makes a speech so powerful and haunting it is spine-tingling to hear it even now, 600 years after it was first written…

10th January 2019: Some lovely new linocut prints featuring animals and birds
I've recently taken a short break from my printmaking work for King Arthur and have been producing a small range of prints featuring animals and birds from mediaeval bestiaries. I'm so pleased with them I thought I would offer a small selection of them as pledge options for supporters of my King Arthur translation. Let me tell you more about them and how they were made...
Bestiaries in the…

2nd January 2019: Merlin and the Legend of Dinas Emrys - Arthurian mythology in the heart of North Wales
Britain is a land of myth and legend, at the centre of which the stories of King Arthur are amongst the most renowned. At Dinas Emrys in the mountains of North Wales, history and myth combine in a potent mixture to deliver one of the most important sites in the Arthurian canon.
Here, in this lonely, rain-sodden place, the British king Vortigern was told by the boy Merlin of two fighting dragons…

19th December 2018: King Arthur's Round Table Celebrates Christmas - but then, strangers arrive...
In the English literary tradition of the Middle Ages, Christmas is a time of huge significance and symbolism. With King Arthur's Death, as with Sir Gawain and the Green Knight, the warmth of the holiday period is beautifully conveyed; a time when lordly households drew in on themselves in the midst of winter, and might hear of strange tales and magical stories…
Following an introduction to the…

28th November 2018: A film showing the printmaking process for my translation of King Arthur's Death
As well as being an historian, I am also a printmaker. The combination of both these areas comes into play in my forthcoming translation of King Arthur's Death (as indeed it did in my translation of Sir Gawain and the Green Knight, also via Unbound, published July 2018), for which I am producing at least 32 linocut prints. In this brief update, I provide a short film showing how I print the linocut…

7th November 2018: Remembrance Day 2018 - a Personal Dedication for King Arthur
As Remembrance Day beckons on 11th November, 100 years after the end of WW1, the war to end all wars, I am reminded of a trip I made a few years back in search of the graves of two of my great-uncles. Like all such quests, there is a sense of loss, futility and deep sorrow for those who sacrificed their lives in a manner so depraved and cruel that even now it beggars belief that leaders could…

2nd November 2018: The stunning alliteration in King Arthur's Death
The Alliterative Morte Arthure is a masterpiece of fourteenth century regional English poetry. It has pace, it has vim, it has detail, it has message. Above all it has a poetic delivery which makes it stand high as one of the finest works of literature from those dark days of the Hundred Years War. Its delivery is exemplified by its chosen form: the steady percussion of its alliteration.
21st October 2018: Secrets of King Arthur's Death revealed by the Battle of Agincourt
Historians of military tactics in the Hundred Years War have long dwelled on the use of the longbow in battles such as Agincourt, Crecy and Poitiers and how archers were organised to best effect on the battlefield. The use of the term herce by the chronicler Froissart to describe the configuration of blocks of archers in relation to individual "battles" of knights in the English armies has been…

7th October 2018: How King Arthur's Death reveals the methods and horrors of mediaeval siege warfare
In King Arthur's Death, when the king besieges the city of Metz, we are given a detailed insight into the ways in which sieges were fought in this period. What this reveals, as do other elements of the poem, is that its anonymous fourteenth century writer had detailed knowledge of military matters. We are left tantalised as to who the poet really was, and to whom he was connected.
The options…

21st September 2018: Heraldry in King Arthur's Death
One of the fascinating features in the Alliterative Morte Arthure (King Arthur's Death) is its detailed reflection of fourteenth century English politics, culture and, of course, warfare. This is particularly the case when it comes to the poem's accurate descriptions of the heraldry of Arthur's knights and his foes. But does the heraldry reveal coded secrets of its own - about the people for whom…

10th September 2018: King Arthur - did he let pride get the better of him?
King Arthur's Death is an alliterative poem which at once celebrates the triumphs of King Arthur while also demanding that the reader address their views of war, ethics and morality. If the first half celebrates those triumphs, the second half is when we are left questioning our own moral compass. At what point does Arthur the King become Arthur the flawed man?
This magnificent poem of 4000…

20th August 2018: King Arthur's Death - was its writer a mediaeval Quentin Tarantino?
King Arthur's Death - the Alliterative Morte Arthure (the AMA) - is a 14th Century poem unlike any other. Its contrast between the banal and the heroic, the violent and the natural, gives this anonymous poet the style of a mediaeval Quentin Tarantino. His work is truly groundbreaking and utterly astonishing...
For me, the appeal of King Arthur's Death is that it is hugely divorced from the…

9th August 2018: The Art of Darkness - printing the illustrations for my translation of King Arthur's Death
As part of my journey in translating the 14th Century epic, King Arthur's Death (the Alliterative Morte Arthure), a key issue for me has been trying to convey the "gut feel" of this magnificent poem. I have been pulled in a number of directions but now I have arrived at a style which suits my own methodology and fully supports the intended message of this literary masterpiece. Let me tell you a…

25th July 2018: The Poet as Witness? Fourteenth Century Warfare in King Arthur's Death
To the untrained eye, the 14th century alliterative poem, King Arthur's Death might be seen as a simple Arthurian romance. Nothing could be further from the truth. This vibrant, action-packed poem possesses a deep irony - possibly based on the poet's personal exposure to the brutality of mediaeval warfare - and a detailed knowledge of other poems and sources in the Arthurian canon which he uses…

5th July 2018: King Arthur - what makes a translation truly authentic?
Recently, I appeared at the Bradford Literature Festival in discussion with Daniel Hahn about my new translation of Sir Gawain and the Green Knight (also published by Unbound). In just one afternoon, I realised that the work I had been involved with for the last five years had suddenly become something - a serious translation. My next project, King Arthur's Death, has taken on a new responsibility…

24th June 2018: This original hand-pulled linocut print of Sir Hugh Calveley could be yours!
As part of the crowdfunding campaign for my new translation of King Arthur's Death (the fourteenth century Alliterative Morte Arthure written during the reign of either Richard II or Henry IV), I'm pleased to announce a special prize for one lucky pledger for the book!
Once the number of backers goes above 250 (Hardback pledge level or above), I will make this print available as a pledger's…

18th June 2018: The Accountant as Insult in the Morte Arthure
Written in England around 1400, King Arthur's Death sheds a fascinating light on the tactics, techniques and sheer plain talking of the English soldiering class around the time of Agincourt. Whoever wrote this astonishing poem was well-versed in how armies were organised and paid for, and, in so being, he highlights a side of warfare which still haunts us today: financiers don't like fighters…

7th June 2018: Enter the Dark Side of Chivalry - Meet the other Sir Gawain.
Battle plays a major part in the vivid writing of the fourteenth century masterpiece which is King Arthur's Death (the Alliterative Morte Arthure; one of the key sources for Malory's Le Morte d'Arthur). Yet its anonymous poet chooses to tell us a tale not so much of chivalric romance but of the brutal horror of war. This is particularly true when we consider Sir Gawain, a leading character in…

1st May 2018: King Arthur and the Giant of Mont Saint Michel – a gripping mediaeval horror story
Above: The Cerne Abbas Giant in Dorset - the basis of, or a model for, the predatory, rapacious giant in King Arthur's Death?
Giants and ogres are a common feature in mediaeval literature and in King Arthur's Death (the Alliterative Morte Arthure) we are shown one who is surely one of the most gruesome to have been created. How did the poet manage to create such a vile beast, and one who remains…

11th April 2018: Dragons and Dreams – the Uncertainty of Mediaeval Kings
When kings in the past were faced with aggression from abroad, how did they react – and why? In the Alliterative Morte Arthure (King Arthur’s Death) we are given significant insights into the diplomacy and thinking of mediaeval kings. It makes for gripping reading, far beyond what we might expect from such a poem.
Mediaeval life was dominated by religion, war and the whim of God. If calamity…

31st March 2018: Sharp eyes and steady hands - Illustrating the Alliterative Morte Arthure
My new translation of the Alliterative Morte Arthure (King Arthur’s Death) begins a new journey for me: translating a vibrant poem of the late fourteenth century (which, incidentally, has a number of hidden meanings) and illustrating it in pen-and-ink. For this post, I want to show you the process I use in my illustrative work, focusing on an illustration of King Arthur himself.
These people are helping to fund King Arthur's Death.
Vanilla planifolia: growing and care

Did you know that vanilla comes from an orchid? Vanilla is obtained from the dried and cured fruits (pods) of Vanilla planifolia, a perennial evergreen vine of the genus Vanilla that grows in tropical climates (hardiness zone 13+) and is also used as an ornamental plant. The genus has roughly 100 species, but Vanilla planifolia is the one most often used in commercial vanilla production; vanilla beans come from either V. planifolia, the Bourbon vanilla, or V. tahitensis, commonly known as the Tahitian vanilla. The vanilla bean is also a source of catechins (polyphenols), which have antioxidant activity and serve as skin-soothing agents, giving the plant biotechnological, food and health care applications; CHANEL Research has even promoted molecules from the young green pods ("Éphémères de Planifolia") as skin "regeneration boosters".

In the wild the vine can reach lengths of 100 feet or more as it climbs to the treetops, but it rarely surpasses 15 feet as a houseplant. Growths climb high onto trees and attach to the bark with roots growing from the leaf joints, so over time a potted plant will eventually need a larger structure to climb onto. While Vanilla planifolia does have specific needs to grow successfully in a home, it is a fairly easy houseplant once you know what its requirements are. It wants bright filtered light, warm temperatures (usually in the 60s or above) and high humidity, since the plants are essentially mounted, with only the bottommost roots growing in medium. If the plant has been grown in the shade, the amount of light should be increased gradually, otherwise the leaves may burn. These orchids grow well in various media; one option is a terrestrial mix of half seedling-grade fir bark or cypress and half peat-based potting soil. The plant likes to stay evenly moist: water when the first 2 inches of potting medium are dry, and a moisture meter is useful in determining when to water (when it reads 3, the line between dry and moist, it is time to water). Variegated forms are also offered, but they grow much more slowly than all-green plants, making them less desirable for commercial production.

Vanilla planifolia can be propagated from cuttings, usually divisions 2–3 feet long. After taking a cutting, allow the cut end to dry for a day or two before planting. Fill the lower two-thirds of the container with damp moss, lay the bottom of the cutting across the moss, allowing it to curve to the shape of the pot, and pin it in place with hairpins made of floral wire (each pin is usually about 3 inches long); do not bury the recently cut end. Large cuttings (24 to 36 inches) can root and flower in just 2 to 3 years.

To achieve the prized vanilla bean, the flower must be hand-pollinated early on the morning it opens, since each bloom lasts less than one day; you do not need more than one flower to pollinate. Pods must then be left on the vine for at least nine months to ripen, after which they can be harvested and properly cured before they are edible; the curing process is labor-intensive, involving sweating and drying, which contributes to the premium price of vanilla beans (vanilla is the second most expensive spice after saffron). A word of caution: the sap from broken roots or stems can irritate the skin; if you have a reaction, wash thoroughly with soap and cold water.
The Green New Deal will convert the decaying fossil fuel economy into a new, green economy that is environmentally sustainable, economically secure and socially just. The Green New Deal starts with transitioning to 100% green renewable energy (no nukes or natural gas) by 2030. It would immediately halt any investment in fossil fuels (including natural gas) and related infrastructure. The Green New Deal will guarantee full employment and generate up to 20 million new, living-wage jobs, as well as make the government the employer of last resort with a much-needed major public jobs program.
Our nation – and our world – face a “perfect storm” of economic and environmental crises that threaten not only the global economy, but life on Earth as we know it. The dire, existential threats of climate change, wars for oil, and a stagnating, crisis-ridden economic system require bold and visionary solutions if we are to leave a livable world to the next generation and beyond.
These looming crises mean that the question facing us in the 2016 election is historically unique. The fate of humanity is in our hands. It is not just a question of what kind of world we want, but whether we will have a world at all.
Building on the concept of FDR's New Deal, we call for a massive mobilization of our communities, government and the people on the scale of World War II – to transition our energy system and economy to 100% clean, renewable energy by 2030, including a complete phase-out of fossil fuels, fracked gas and nuclear power. We propose an ambitious yet secure economic and environmental program that will revive the economy, turn the tide on climate change, and make wars for oil obsolete – allowing us to cut our bloated, dangerous military budget in half.
The Green New Deal is not only a major step towards ending unemployment for good, but also a tool to fight the corporate takeover of our democracy and exploitation of the poor and people of color. It will provide a just transition, with a priority on providing resources to workers displaced from the fossil fuel industry, low-income communities and communities of color most impacted by climate change. The Green New Deal will provide assistance to workers and communities that now have workers dependent on the fossil fuel, nuclear and weapons industries, and to the developing world as it responds to climate change damage caused by the industrial world.
The transition to 100% clean energy will foster democratic control of our energy system, rather than maximizing profits for energy corporations, banks and hedge funds. It will promote clean energy as a human right and a common good. It will include community, worker and public ownership, as well as small businesses and non-profits.
We will cut military spending by at least half to bring our troops – currently stationed in over 800 bases worldwide – home to their families, deploying our valued servicemen and women in their own communities to build up our country’s future and prosperity here at home. Maintaining bases all over the world to safeguard fossil fuel supplies or to shore up repressive oil monarchies could no longer be justified as “protecting American interests.”
The Green New Deal not only saves us from climate catastrophe. It also pays for itself through health savings alone, from the prevention of fossil fuel-related diseases – which kill 200,000 people every year and afflict millions more with asthma, heart attacks, strokes, cancer and other illnesses. This program not only addresses the urgent crises facing our society, but puts America’s leading role in the world to work in a constructive way: to build a just, sustainable, and healthy planet for our young people and future generations.
What the Green New Deal Will Do
Right now, our federal subsidy programs benefit large agribusiness corporations and the oil, mining, nuclear, coal and timber giants at the expense of small farmers, small businesses, and our children's environment. We spend billions of dollars every year moving our economy in the wrong direction, making our planet uninhabitable while imposing the greatest harm on communities of color and the poor. The Green New Deal will instead redirect that money to the real job creators who make our communities more healthy, sustainable and secure at the same time.
- Invest in sustainable businesses including cooperatives and non-profits by providing grants and loans with an emphasis on small, locally-based companies that keep the wealth created by local labor circulating in the community rather than being drained off to enrich absentee investors.
- Move to 100% clean energy by 2030. Invest in clean energy technologies that are ready to go now. Redirect research funds from fossil fuels and other dead-end industries toward research in wind, solar, tidal, and geothermal energy. We will invest in research in sustainable, nontoxic materials and closed-loop cycles that eliminate waste and pollution, as well as organic agriculture, permaculture, and sustainable forestry.
- Create a Commission for Economic Democracy to provide publicity, training, education, and direct financing for cooperative development and for democratic reforms to make government agencies, private associations, and business enterprises more participatory. We will strengthen democracy via participatory budgeting and institutions that encourage local initiative and democratic decision-making.
- Establish a Renewable Energy Administration on the scale of FDR’s hugely successful Rural Electrification Administration, launched in 1935, that brought electrical power to rural America, 95 per cent of which had no power. Emulated by many other countries, this initiative provided technical support, financing, and coordination to more than 900 municipal cooperatives, many of which still exist. The Green New Deal would update this model with eco-friendly energy sources.
- End unemployment in America once and for all by guaranteeing a job at a living wage for every American willing and able to work. A Full Employment Program will create up to 20 million jobs, both directly and indirectly, by implementing a nationally-funded, locally-controlled, direct employment initiative replacing unemployment offices with local employment offices. The government will be the employer of last resort, offering jobs meeting community-identified needs in the public and non-profit sectors to take up any slack in private for-profit sector employment. These will include jobs in sustainable energy and energy efficiency retrofitting, mass transit and "complete streets" that promote safe bike and pedestrian traffic, regional food systems based on sustainable organic agriculture, clean manufacturing, infrastructure, and public services (education, youth programs, child care, senior care, etc.). Communities will use a process of broad stakeholder input and democratic decision making to fairly design and implement these programs.
Dealing with the Climate Crisis – 100% Clean Energy by 2030
The centerpiece of the Green New Deal is a commitment to transition to 100% clean, renewable energy by 2030. The transition to clean energy is not only a visionary plan for a better world, it’s absolutely necessary to ensure we have a world at all.
The climate crisis is a serious threat to the survival of humanity and life on Earth. To prevent catastrophe, we need a WWII-scale mobilization to transition to a sustainable economy with 100% clean renewable energy, public transit, sustainable agriculture, and conservation.
Already tens of millions of people have been turned into climate refugees, and hundreds of thousands die annually from air pollution, heat waves, drought-based food shortages, floods, rising seas, epidemics, storms and other lethal impacts of climate change and fossil fuels.
Scientists report that sea levels are rising much faster than predicted, and could overwhelm coastal areas within decades. New York. Baltimore. Miami. Los Angeles. New Orleans. And more. Some scientists say the data shows that sea levels may rise by 9 feet within the next 50 to 150 years.
And as global climate change worsens, wars fought over access to food, water and land will become commonplace.
Historically, talks aimed at stopping global warming have centered on the goal of staying below a 2°C rise in average temperature. The major “victory” in COP 21 in Paris was that the industrial polluting nations such as the US agreed with the rest of the world that the existing global warming cap target of 2 degrees Celsius would lead to catastrophic change. They agreed to set a lower target of “well below 2 degrees Celsius” and, preferably, 1.5 degrees Celsius. Scientific studies show this means reducing greenhouse gases twice as fast (7 to 9% annually) compared to the old goal of “80 by 50”. The GND’s plan to transition to 100% clean energy by 2030 is the only program in any US presidential candidate’s platform that even attempts to meet the scientific goal agreed to in Paris.
Going to 100% clean energy by 2030 means reducing energy demand as much as possible. This will require energy conservation and efficiency; replacing non-essential individual means of transport with high-quality and modern mass transit; and eliminating the use of fossil-based fertilizers and pesticides. Along with these steps it will be necessary to electrify everything else, including transport, heating, etc. Many current proposals by the state and federal government to move to renewables only address the existing electrical system, which accounts for only about 1/3 of the carbon footprint.
Studies have shown that there are no technological or logistical barriers to a clean-energy transition by 2030. A British think tank recently put out a study saying that all fossil fuels could be eliminated in 10 years.
The author of the best-known series of studies on how to transition to 100% clean energy, Prof. Mark Jacobson, has acknowledged that 2030 is technologically feasible, but he has added 20 years to reflect political and economic challenges. However, adding 20 years to the timetable based on expected political obstructionism unfortunately makes it easier for politicians to delay urgently needed action by falsely claiming that we still have over 30 years until we really need to act. Other professors at Stanford such as Tony Seba have criticized him for not being clearer that 2030 is not only feasible but needed. We have the technology to transition to 100% clean energy, and the science shows us that we must; the only missing ingredient is the political will.
Under the Jacobson plan – which, while only one potential approach, is currently the most detailed and well-known – demand would be met with 30.9% onshore wind, 19.1% offshore wind, 30.7% utility-scale photovoltaics (PV), 7.2% rooftop PV, 7.3% concentrated solar power (CSP) with storage, 1.25% geothermal power, 0.37% wave power, 0.14% tidal power, and 3.01% hydroelectric power.
Over all 50 states, converting would provide 3.9 million 40-year construction jobs and 2.0 million 40-year operation jobs for the energy facilities alone, the sum of which would outweigh the 3.9 million jobs displaced in the conventional energy sector.
Jacobson's jobs estimates are only for electric power production. They do not include jobs from the two most potent job creators in an energy transition: mass transit/freight rail and retrofitting buildings for insulation and efficiency. It is estimated that every dollar spent on investments in renewable energy creates 3 times as many jobs as investments in nuclear power or fossil fuels. Also missing in the Jacobson study are manufacturing jobs for clean energy generation equipment and jobs for retrofitting the grid into a smart grid.
The Center for American Progress estimates that $100 billion in green economic investment will translate into two million new jobs in two years. And a 2008 report by the Center on Wisconsin Strategy suggests that roughly 8 to 11 jobs can be created by every $1 million invested in building energy efficiency retrofitting. The American Solar Energy Society has estimated that jobs in energy efficiency industries will more than quadruple between 2007 and 2030, from 3.75 million to 16.7 million. (See also Scaling Up Building Energy Retrofitting in U.S. Cities.)
There is less data about how many jobs would be created by transitioning to a comprehensive national mass transit program. However, an analysis in 2011 by Smart Growth America, the Center for Neighborhood Technology and U.S. PIRG, found that every billion dollars spent on public transportation produced 16,419 job-months, while the same amount spent on highway infrastructure projects produced 8,781 job-months; meaning that investment in public transit creates almost twice as many jobs as investing in highways. (See also a study by the Transportation Equity Network.)
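As a quick check on the "almost twice" figure above (this is simply a division of the job-month numbers just cited, not an independent estimate):

$$\frac{16{,}419\ \text{job-months}}{8{,}781\ \text{job-months}} \approx 1.87$$

so each dollar of public transportation spending yields roughly 1.9 times the employment of an equivalent dollar of highway spending.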
Need to Invest in Offshore Wind
A major missing ingredient in moving to a 100% renewable energy system in the US is the lack of offshore wind power generation. The first small offshore wind (OSW) farm will shortly be operating off Block Island in Rhode Island.
The University of Delaware recently said that the United States has moved backwards in the last decade with respect to wind power due to overreliance on market forces. There needs to be increased federal and state financial support to develop offshore wind.
A report by the NYS Energy Research and Development Authority, written by the University of Delaware, found that the best way to lower costs for offshore wind was to commit to OSW development at scale, rather than on a project-by-project basis. It concluded that costs could be lowered by as much as 30%. Taking advantage of wind turbine innovations and other technology and industry advances could lower costs by roughly an additional 20 percent. The NYSERDA report's author added that "well-designed policies and actions taken by New York, as well as by other states, can play an essential role in helping New York City and other U.S. East Coast population centers benefit from gigawatts of clean energy that could be generated by deploying wind turbines off the Atlantic coast."
The Green New Deal and Public Jobs Program
The Green New Deal will redirect research money from fossil fuels and other dead-end industries toward research in wind, solar, and geothermal as well as wave and tidal power. We will invest in research in sustainable, nontoxic materials, closed-loop cycles that eliminate waste and pollution, as well as organic agriculture, permaculture, and sustainable forestry.
It will provide jobs in sustainable energy, transportation and manufacturing infrastructure: clean renewable energy generation, energy efficiency retrofitting, intra-city mass transit and inter-city railroads, weatherization, “complete streets” that safely encourage bike and pedestrian traffic, regional food systems based on sustainable organic agriculture, and clean manufacturing of the goods needed to support this sustainable economy.
This would include a WPA-style public jobs program to secure the right to decent paid work through public jobs for the unemployed and those presently working in low-paid service-sector jobs such as fast food and retail. That would include a significant portion of non-construction, non-energy jobs in public services and non-profits, which is crucial because many of the unemployed are not skilled in building trades or physically fit to do construction work, skilled or unskilled. Construction workers have one of the highest unemployment rates of any economic sector, while unemployment and underemployment are concentrated among women and minorities.
Economist Philip Harvey estimated the net federal cost for 1 million living-wage public jobs in 2011 at $28.6 billion. The economic multiplier of this fiscal stimulus would generate another 414,000 jobs in Harvey’s analysis. In an analysis of the July 2016 Bureau of Labor Statistics report, the National Jobs for All Coalition identified a need for 19.6 million jobs to achieve full employment. Dividing 19.6 million needed jobs by 1.4 million created jobs equals 14, which multiplied by $28.6 billion equals $400.4 billion for a 19.6 million jobs program.
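Restating that arithmetic as a consistency check (these are just the figures cited above, not an independent costing): each $28.6 billion program unit yields about 1 million direct jobs plus 414,000 multiplier jobs, i.e. roughly 1.4 million jobs in total, so

$$\frac{19.6\ \text{million jobs needed}}{1.4\ \text{million jobs per unit}} = 14, \qquad 14 \times \$28.6\ \text{billion} \approx \$400.4\ \text{billion per year.}$$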
Other economists also estimate the cost of a program for the federal government as employer of last resort (ELR) would be relatively small, around 1-2% of GDP, because it corresponds with huge savings in unemployment insurance in a way that pays people to work rather than paying them to not work. A federally funded ELR program will also help the budgets of every state as incomes from employment add to the tax revenue of states and local governments.
Bernie Sanders’ recent presidential campaign called for the creation of 13 million living-wage jobs, primarily through $200 billion a year in investments in infrastructure: water system, transportation, seaports, electric grid, dams and broadband. As outlined above, the Green New Deal would invest in infrastructure that reduces the carbon footprint (e.g., energy retrofits, renewable energy), as well as education, child and adult care, home health services and other essential human services.
A job guarantee would also be good for the private sector, as it guarantees that domestic demand never collapses as much as it does under current conditions with chronically low wages and structural unemployment and underemployment. It would also lift incomes for the most vulnerable households, helping to significantly reduce income inequality.
Paying for the Green New Deal
We will need revenues between $700 billion and $1 trillion annually for the Green New Deal. $400 billion will be for the public jobs programs. Estimates for the transition to 100% clean energy start at $200 billion a year.
Economists predict that we can build a 100 percent renewable energy system at costs comparable to or less than what we would have to spend to continue our reliance on dirty energy. The International Energy Agency estimates that limiting warming to 2°C would require an additional investment of about 1 percent of GDP per year, which for the US works out to roughly $170 billion a year. The former chairperson of the Intergovernmental Panel on Climate Change (IPCC) has made similar estimates.
Jacobson estimates that the total capital cost to go to 100% renewable energy in the US would be $13.4 trillion. Much of that capital cost could be covered by diverting existing investments in nonrenewable energy. America's coal and nuclear power stations are old and many are dilapidated. In order to keep the lights on in the United States, a new energy system will need to be constructed. Large corporations are walking away from existing power stations, closing them and laying off the workers.
Prices for renewable energy have been falling very fast in recent years, which would reduce the costs in later years. The Jacobson report shows that between 2009 and 2014, the cost of solar electricity in the United States fell by 78 percent and the cost of wind energy fell by 58 percent. In many parts of the United States, wind is now the cheapest source of electricity, and solar power is on track to be the cheapest source of power in many parts of the world in the near future. Renewable energy technologies are also continually improving in performance.
When we make the investment required to clean up our emissions and waste, our economy will be revitalized by the wealth created. Our national security will no longer be vulnerable to disruption of oil supplies, and there will be absolutely no reason to send our people abroad to fight wars for oil. Using renewable energy instead of coal and gas will mean health care costs will go down because the foundations of a green economy – clean energy, healthy food, pollution prevention, and active transportation – are also the foundations of human health. The Green New Deal pays for itself through the prevention of chronic disease, which consumes a staggering 75% of $3 trillion in annual health care costs. All in all, this is an investment in our future that will pay off enormously as we build healthy, just, sustainable communities.
According to Jacobson et al., converting to 100% clean energy would also eliminate approximately 62,000 (range: 19,000–115,000) U.S. air pollution premature mortalities per year today, avoiding $600 ($85–$2,400) billion per year (2013 dollars) in healthcare costs by 2050. Converting to clean energy would further eliminate $3.3 ($1.9–$7.1) trillion per year in 2050 global warming costs to the world due to U.S. emissions. These plans will result in each person in the U.S. in 2050 saving $260 ($190–$320) per year in energy costs (2013 dollars), U.S. health costs per person decreasing by $1,500 ($210–$6,000) per year, and global climate costs per person (including costs incurred by extreme weather events, sea level rise, adverse effects on water and agriculture, etc.) decreasing by $8,300 ($4,700–$17,600) per year.
The Green New Deal includes a major cut in federal spending on the military (including the Pentagon budget as well as expenditures on war, nuclear weapons and other military-related areas), which would free up roughly $500 billion per year. The $1 trillion in current annual United States military spending is equivalent to the rest of the world's military budgets combined. A 50% cut would leave us with a budget that is still three times the size of China's, the next biggest spender. U.S. military expenditures have doubled over the past decade without improving security. At the same time, the shift towards a policy of "full spectrum dominance" and expanding American empire has proven counterproductive to peace and security.
A carbon fee will ensure more realistic fossil fuel prices that include the cost to the environment, and are high enough to tackle climate change effectively by creating the economic incentive to drive efficiency and bring alternative fuels to market. The revenues will provide funding for the Green New Deal as well as safety nets for low-income households vulnerable to higher prices on certain items due to rising carbon taxes. We advocate establishing an Oil Legacy Fund, paid for by a tax on the assets of oil and gas companies. The funds raised would help deal with the effects of climate change and smooth the transition to a low-carbon economy.
According to the Congressional Budget Office, a carbon tax of $20 per ton would raise $120 billion a year. We would support a carbon tax of at least $60 per ton ($360 billion per year), rising by $15 to $20 per ton annually. (Some of the carbon tax revenues would be rebated in various forms to low and middle income households to offset the regressive nature of any consumption or sales tax.)
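As a rough consistency check on those numbers (assuming the taxed emissions base implied by the CBO figure stays approximately constant):

$$\frac{\$120\ \text{billion/yr}}{\$20\ \text{per ton}} = 6\ \text{billion tons of CO}_2\ \text{per year}, \qquad 6\ \text{billion tons} \times \$60\ \text{per ton} = \$360\ \text{billion/yr.}$$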
A carbon tax is an "upstream" tax on the carbon content of fossil fuels (coal, oil and natural gas) and biofuels. A carbon tax is the most efficient means to instill crucial price signals that spur carbon-reducing investment. A carbon tax can also be used to recapture some of the costs pushed on to taxpayers and consumers from burning fossil fuels. Unlike cap-and-trade, carbon taxes don't create complex and easily-gamed "carbon markets" with allowances, trading and offsets. Also, because carbon taxes/fees are predictable, unlike volatile cap-and-trade markets, it is easier to plan clean energy investments to avoid carbon taxes.
The wealthy, who have most benefited from the excessive burning of fossil fuels, should pay increased taxes to help with the cost of transitioning to a green economy. Jill Stein has called for a higher estate tax on the wealthiest Americans; raising the top income tax rate while lowering it for low and middle income Americans; and closing various tax loopholes, especially for corporations. Similar tax proposals advanced by Sen. Sanders during the recent primary, including a financial transaction tax, would have raised an extra $130 billion a year.
Robert Pollin, Heidi Garrett-Peltier, James Heintz, and Helen Scharber, Green Recovery: A Program to Create Good Jobs and Start Building a Low-Carbon Economy, http://www.peri.umass.edu/green_recovery/
Jason Walsh and Sarah White, Greener Pathways: Jobs and Workforce Development in the Clean Energy Economy (Madison, WI: Center on Wisconsin Strategy, 2008), http://www.cows.org/pdf/rp-greenerpathways.pdf
Jennifer Cleary and Allison Kopicki, Preparing the Workforce for a "Green Jobs" Economy (Rutgers, New Jersey: John J. Heldrich Center for Workforce Development, February 2009), http://www.heldrich.rutgers.edu/uploadedFiles/Publications/Heldrich%20Center_Green%20Jobs%20Brief.pdf
Philip Harvey's budget for the cost of creating 1 million public jobs is in Table 3: http://www.demos.org/publication/back-work-public-jobs-proposal-economic-recovery
French Possessive Adjectives Worksheet
Posted in Worksheet, by Kimberly R. Foreman
Possessive adjectives worksheet, French. Exercises for learning possessive adjectives in French are extremely important for native speakers of English because of the many differences between expressing possession in French and in English. There are printable worksheets for this topic.
French possessive adjectives activities: speak, read, listen, write. These are great activities for students to work on possessive adjectives in French. The three activities are very interactive and offer students an opportunity to work collaboratively.
List of French Possessive Adjectives Worksheet
Vocabulary: mon, ma, mes, ton, ta, tes. Displaying top worksheets found for French possessive adjectives. Some of the worksheets for this concept are: day possessive and demonstrative adjectives; possessive adjectives; French grammar primer; Modern French Grammar: A Practical Guide, second edition; French for Children Primer A; short story using possessive pronouns; demonstrative adjectives and pronouns; name-date grammar work.
French possessive adjectives test: test your understanding of possessive adjectives with the following quiz, and feel free to look back at the lesson. Examples: Where is her book? I'd like to introduce you to my father and mother. Here are our pens.
1. 7 Possessive Images
French possessive adjectives take different forms depending on the noun they are describing. This means that if the noun is masculine and singular, the possessive adjective should be too. The masculine singular possessive adjectives are mon, ton, son, notre, votre and leur.
2. Possessive Adjectives Possessive Pronouns
How possessive adjectives agree with their nouns: the possessive adjectives can be divided into groups, one for each grammatical person (my: mon, ma, mes; our: notre, nos; your: ton, ta, tes and votre, vos; his/her: son, sa, ses; their: leur, leurs). The gender and number of the object possessed determine which form to use.
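For reference, the full paradigm is set out below (standard French grammar, added here for clarity; the form is chosen by the gender and number of the possessed noun):

my: mon (masculine), ma (feminine), mes (plural)
your (tu): ton, ta, tes
his / her / its: son, sa, ses
our: notre (masculine and feminine), nos (plural)
your (vous): votre, vos
their: leur, leurs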
French possessive adjectives are used in front of nouns to indicate to whom or to what those nouns belong. They are considerably more complicated than English possessive adjectives because French has several different forms depending on the gender and number of the possessed noun.
3. Possessive Adjectives 4 Key Worksheets
Displaying top worksheets found for French possessive adjectives. Some of the worksheets for this concept are: unit rights; day possessive and demonstrative adjectives; replace the personal pronouns by possessives; possessive adjectives; pronouns (subject, object, possessive); French grammar primer; possessive pronouns; name-date grammar work; adjectives.
4. Possessive Adjectives Em
Tag: French possessive adjectives printable worksheets. Possessive pronouns printable worksheets can help an instructor or pupil to understand the lesson plan more quickly. These workbooks are ideal for both children and adults to use.
Printable worksheets for practicing possessive adjectives. Here are examples of possessive adjectives: I am failing my test. She lies about her age. Are you furnishing your home? These correspond to mon, son, and ta (or votre) respectively.
5. Possessive Adjectives French Mind Map Possessive
They are used when you wish to indicate that only one person possesses the item. Possessive adjectives come before the noun, and show to whom something or someone belongs; learn more about mon, ma and mes.
Worksheet: fill in the blank with the correct possessive adjective in French. Exercises for learning possessive adjectives in French are extremely important for native speakers of English because of the many differences between expressing possession in French and in English.
6. Possessive Adjectives Ideas Possessive Adjectives
The french possessive system. unlike, french nouns have First, possessive adjectives. when you need to express that a noun belongs to another person or thing, you use possessive adjectives. we know it in as the words my, your, his, her, its, our, and their.
in french, the possessive adjectives like all other kinds of adjectives need to About this quiz worksheet. we use possessive adjectives so much in our everyday conversations that we almost overlook their use, that is until we start learning a new language.
7. Possessive Adjectives Multiple Choice Young Learners
In french, adjectives are placed after the modified nouns. however, when you use more than a single adjective to describe a noun, you need to follow the placement rules. remember, adjectives add e to the masculine singular form, which gives a feminine singular.
on the other hand, if you see masculine adjectives, it will end at e, er, and f. Oct, singular possessive french adjectives in french grammar, there are three forms of the possessive for each singular person i, you,. the gender, number, and first letter of the noun possessed determine which form to use.
8. Possessive Adjectives Possessive Adjectives Adjectives
French possessive adjectives worksheet (subject: French). From the author: I have taught all levels from kindergarten to adult continuing education. I love languages and am constantly working on improving and adding to my materials.
In the French language, the use of adjectives differs slightly from that of English. The placement of adjectives is different, and they vary depending on whether the noun is plural, feminine, or masculine. In English, we write the adjective before the noun; for instance, we write "blue house."
9. Possessive Adjectives Worksheet 1
In French, adjectives generally come after the noun they describe. One French possessive adjectives worksheet offers practice using French possessive adjectives in a family context; the exercise includes multiple-choice sentences, and an answer key is provided.
There is also a French possessive adjectives quiz: fill in the blanks with the correct form. Another worksheet has sentences with blanks to fill in with the correct form of the possessive adjective.
10. Learn French Possessive Pronouns Google
All sentences use family vocabulary, including stepfather, stepmother, cat, and dog, and the resource is easy to edit to match the vocabulary that you want to highlight. On possessive adjectives: when you need to express that a noun belongs to another person or thing, you use possessive adjectives.
We know them in English as the words my, your, his, her, its, our, and their. In French, possessive adjectives, like all other kinds of adjectives, need to agree with the noun. One resource, French possessive adjectives: families (subject: French), applies them to family members.
11. Possessive Learn French Possessive Adjectives
This was made for a top-set year group, to recap and introduce all possessive adjectives. Pupils used what they had learnt to write creatively about another famous family. For the worksheet on family members and possessive adjectives, I found the original and adapted it to my students' needs.
On how to use possessive determiners in French: possessive determiners, also known as possessive adjectives, always come before a noun and agree with it in terms of gender and number.
12. Possessive Pronouns 2 Worksheets
The plural forms are the same for both masculine and feminine. Possessive determiners correspond to the English words my, your, his, her, etc.
13. Possessive Pronouns Images Pronoun Activities
Possessive adjectives worksheets and online activities offer free interactive exercises to practice online or to download and print. One worksheet has fill-in-the-blank sentences in French, with a section restricted to singular subjects. One tutor writes: I have developed with success my own program, which includes worksheets from basic to advanced level; if you are thinking of taking any exam, I can help you, as I have years of experience in this field. Finally, what is a possessive adjective? A possessive adjective is one of the words my, your, his, her, its, our, or their, used with a noun to show that one person or thing belongs to another.
15. Possessives En
Here are the French possessive adjectives. Like all French adjectives, these agree with the noun they refer to. One interactive worksheet for beginners pairs possessive adjectives with clothing vocabulary, and a numbers worksheet gives practice using numbers with possessive adjectives.
Les adjectifs possessifs (my, your, his, her, our, their): in French, possessive adjectives agree in number and in gender with the noun they modify.

Person              Masculine sing.   Feminine sing.   Plural
my                  mon               ma               mes
your (tu form)      ton               ta               tes
his/her/its         son               sa               ses
our                 notre             notre            nos
your (vous form)    votre             votre            vos
their               leur              leur             leurs

Example: I have my books = J'ai mes livres.
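As a companion to the table above, a second short sketch (again illustrative only; the helper name and data layout are assumptions) covers the plural-possessor rows, which agree only in number with the thing possessed:

```python
# Sketch continued: notre/nos, votre/vos, and leur/leurs make no
# masculine/feminine distinction; they agree only in number.

PLURAL_POSSESSOR_FORMS = {
    # person: (before a singular noun, before a plural noun)
    "nous": ("notre", "nos"),
    "vous": ("votre", "vos"),
    "ils/elles": ("leur", "leurs"),
}

def plural_possessive(person: str, noun: str, plural: bool) -> str:
    singular_form, plural_form = PLURAL_POSSESSOR_FORMS[person]
    return f"{plural_form if plural else singular_form} {noun}"

print(plural_possessive("nous", "maison", False))     # notre maison
print(plural_possessive("vous", "livres", True))      # vos livres
print(plural_possessive("ils/elles", "chien", False)) # leur chien
```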
16. Teaching French Learn French French
A French possessive adjectives practice quiz is available, along with related worksheets on the gender of nouns and subject-verb agreement. One worksheet offers practice using possessive adjectives in a restaurant context.
Each exercise includes multiple-choice sentences, and an answer key is provided. French possessive adjectives also appear on a quality educational site offering free printable theme units, word puzzles, writing forms, book report forms, math ideas, lessons, and much more.
17. Lecture Dun Message Mail Orange Images
These are great for new teachers, student teachers, homeschooling, and teachers who like creative ways to teach. Click on the pop-out icon or print icon to print or download a worksheet. You can also try a French possessive adjectives quiz that has been attempted by many avid quiz takers, and explore similar quizzes in the same category.
18. Images Teaching French
Today we bring you some impressive images that we collected in case you need more references for a French possessive adjectives worksheet. Below, we will see various related images to complete your ideas.
19. Adjectives Possessive Worksheet Possessive
Related resources include a noun-pronoun-adjective worksheet, printable stories for kids, and French family members worksheets. On adjectives: French adjectives agree in gender and number with the noun they describe. This means that the exact shape of the adjective will change, depending on whether the noun is masculine or feminine in gender and singular or plural in number.
20. French Possessive Adjectives Exercises
So, in theory, there are up to four different shapes for each adjective. Possessive adjectives, or possessive determiners, in French: in both French and English, possessive adjectives (now also called possessive determiners) show belonging or ownership. They can also indicate a relationship between people.
21. French Family Members Activity Family
Because these are adjectives, they must agree with the nouns they modify in terms of number and gender. The French possessive adjectives are mon, ton, son in the masculine singular; ma, ta, sa in the feminine singular; and mes, tes, ses in the plural.
22. French Family Possessive Adjectives Crossword
Free printable French worksheets are available for all students and teachers of French: a regular and irregular adjective activity, a French adjectives chart for teaching French adjectives, and a possessive adjectives worksheet. One teaching resource, French possessive adjectives families, was made for a top-set year group, to recap and introduce all possessive adjectives.
23. French French Possessive Adjectives Worksheet La
I got pupils to use what they had learnt to write creatively about another famous family when they had completed all the exercises. On French grammar: French possessive adjectives follow rules similar to those of articles, because both are affected by gender and number.
24. French Possessive Adjective Quiz Worksheet
Getting them right is therefore going to depend on knowing whether the noun accompanying them is feminine, masculine, or plural. A possessive adjective is an adjective that is used to show ownership of something or someone. It comes before a noun in the sentence and lets us know to whom the noun belongs.
25. French Possessive Adjectives
For example: my father (mon père). Unlike in English, possessive adjectives in French are classified according to number (singular and plural) and gender (masculine and feminine). Free printable possessive adjective worksheets made by teachers include grammar exercises, questions, handouts, quizzes, multiple-choice tests, activities, and teaching and learning resources with information and rules for kids.
26. French Possessive Adjectives Activities Speak Read
List of French possessive adjectives (see the video for the full chart): my is mon (masculine singular nouns, and singular nouns starting with a vowel sound), ma (feminine singular), or mes (plural nouns); your (familiar) is ton, ta, or tes; his/her is son, sa, or ses; our is notre (singular nouns) or nos (plural nouns); your (formal or plural) is votre (singular nouns) or vos (plural nouns).
27. French Possessive Adjectives Family Worksheet Family
Their is leur (singular nouns) or leurs (plural nouns). One file has a quiz plus a task-card set. There are different task cards with fill-in-the-blank exercises for all French possessive adjectives in singular and plural. Some exercises are a bit more challenging and require students to look for contextual clues to fill in the correct adjectives.
A separate engaging worksheet helps students learn and practice present simple questions and question words. Procedure: give each student a copy of the worksheet (question word + subject + infinitive verb), then have them write a present simple question to elicit each answer.
29. French Possessive Adjectives French Flashcards French
Possessive adjectives: in French, the possessive adjectives agree in gender and number with the noun which follows. For example, his brother and her brother would both be son frère; his sister and her sister would both be sa sœur. One common exception to the formation rules listed below is that the forms mon, ton, and son will be used before feminine nouns that begin with a vowel or a mute h.
30. French Possessive Adjectives Preview 1 Possessive
French translation practice with possessive adjectives: my parents (mes), his dog (son), her brother (son), their horses (leurs), our uncles (nos), my friend (mon/ma), your (informal) boat (ton), your (formal) sister (votre). Learn French with French lessons, exercises, and tests; other French exercises on the same topic and similar tests cover possessive pronouns (le mien, la mienne, etc.).
31. French Possessive Adjectives Pronouns French
This is a reference tool for students to help them use possessive adjectives and pronouns correctly in French. A grammar worksheet on possessive adjectives (my, your, his, her, its, our, their) uses prompts such as "This is ___ house." Fill in the blanks below to complete the sentences.
32. French Possessive Adjectives Task Cards
Use the words in the box above: "Where is ___ classroom? We can't find it." "Is that ___ pen on the table?" "A: What is ___ name? B: My name is Thomas." Other resources cover practicing French adjective agreement and offer an easier way to learn declension and the French possessive adjectives.
33. French Possessives Worksheets Images
Review possessive pronouns and adjectives with your intermediate and advanced French speakers. They complete four exercises to help them recall the appropriate possessives; all sentences relate to family vocabulary. French possessive adjectives are placed directly in front of the noun or adjective.
34. French Worksheets Google
They have to correspond with the gender and number of the possessed noun, and a possessive pronoun can be used to replace the adjective together with its noun in a sentence. Possessive adjectives always come before the noun. They agree not with the owner of the item being used in the sentence, but with the item itself.
35. French Worksheets Teaching
In French, possessive adjectives are not used to point out body parts; the definite articles le, la, l', or les are used instead. Exercises: complete the table with the right words; choose the right possessive adjectives to complete the sentences; match the words in column A to the French words in column B.
36. Worksheet Includes Explanation Exercises
Related worksheets are aligned to language standards, with answer keys provided. When we have a noun whose ownership we wish to portray, we will often use possessive adjectives to do so.
Possessive adjectives in French (my, your, his, her, our, their) agree with the noun they describe, not with the person to whom it belongs. This means that if you are talking about his table, in English the emphasis is put on the fact that the possessor is masculine, whereas in French the adjective simply agrees with the feminine noun table (sa table).
The Inauguration of the Church
The Beginning of the Ekklesia of Christ.
The question of when the New Testament church began is of no little importance, for the argument involves these details: what was established, when was it established, how it was established, and who explicitly built it. Was the church established as a universal entity, or a local body, or as both? Was the church established during Christ’s earthly ministry prior to His death, or later on the day of Pentecost? Was the church established by the building process of calling out disciples, assembling them as a unit, and training them or was it established by a sudden, abrupt act of creation upon a gathering of about 120 persons? Did Christ personally establish the church or the Holy Spirit? Did God make many churches throughout the centuries or just the one with the capability of propagation?
Our task is to examine these two positions, Pentecost or Pre-Pentecost, and make an analysis of them in the light of the Word of God. For clarity the following outline is presented.
- The View of the Inauguration of the Church on the Day of Pentecost.
1. Asserted by the Promise of Christ in Matt. 16:18.
2. Asserted by the necessity of the Death of Christ.
3. Asserted by the Necessity of the Baptism of the Holy Spirit.
- A refutation of the Pentecost view.
1. Matthew 16:18, 19.
A) The Rock
B) "Will Build"
a) The Future: Punctiliar or Durative?
b) Build: oikodomeso
2. Church: Ekklesia
3. The Day of Pentecost: Acts 2:1-4
4. I Corinthians 12:13
5. The Blood of Christ
The Day of Pentecost View of the Origin of the Church
(Those who hold to the view of the inauguration of the church on the day of Pentecost will be referred to as Pentecostals. This is without any reference to the denomination of the same name.)
The most popular opinion of the inauguration of the church is that it began on the day of Pentecost. This is primarily based upon three scriptural references, Matt. 16:18 and Acts 2:1-4, with I Cor. 12:13 as collaborating evidence. It must be kept in mind that the mainstream of those who hold this view also believe that the church contains all the saved and is an invisible institution. This opinion of the beginning of the church and its universal invisible nature is a Protestant-held position, first proposed by Calvin in the 16th century (Institutes of the Christian Religion, Book 4).
The following are typical representations of the opinions put forth that Christ did not institute His church during His earthly life. There may be other existing arguments, but hopefully these should adequately cover this view.
Asserted by the Promise of Christ in Matt. 16:18.
Matthew 16:18 And I say also unto thee, That thou art Peter, and upon this rock I will build my church; and the gates of hell shall not prevail against it.
Discussion of this passage addresses the time of founding, on whom the church is built, the meaning of ekklesia, and the word “build” (oikodomeso).
The Greek form of the verb build used in this verse is in the future tense. The only possible translation and meaning is “will build.” No argument can be made that it has already been built or was in existence at this time. Jesus is clearly stating that at a future date He would institute the founding of His church. During His life He laid the groundwork for the church, but it did not come into functional existence until the day of Pentecost.
His statement that He will build His ekklesia presents the foundational truth that what He was going to make was something entirely new. It was to be something never before seen. This was not a rebuilding of Israel into His ekklesia, which would be a reformation of Israel, but something quite apart from Israel. This is the message to the disciples that an entirely new entity was to come. Radmacher1 wrote:
"Although previously the word always was used of the simple concept of assembly, now in Matt 16:18 it is characterized by the new content which Jesus gave it as over against form other kind of ekklesia. Thus, Jesus seems to be saying: 'You are familiar with the ekklesia of Israel in the Old Testament. But I am going to build an ekklesia that will be characterized by the content which I shall give it.' The contrast then would seem to extend to a spiritual ekklesia of the Old Testament. Thus A. T. Robertson2 says that ekklesia came to be applied to an "unassembled assembly." (My emphasis)
The word oikodomeso in this passage means nothing more than to build, to initiate a construction. It does not carry with it the idea of building up, edifying, or enlarging in this passage. This idea of a pre-existing building being built up is to be rejected for the following three reasons. First, the context reveals that Christ is speaking of His future program, a future church. Secondly, this future ekklesia is not “I am building” but “I will build.” Thirdly, the use of oikodomeso by Matthew is significant, Bowman3 writes:
"--- one should note that Peter uses oikodomeo to express the idea of "building up" but the word is used only after the church had started at Pentecost (I Peter 2:5). Therefore, it cannot be held that oikodomeo in Matthew 16:18 has the idea of enlargement. Why use a future tense for a finished fact."
Asserted by the Necessity of the Death of Christ.
This proof is clearly substantiated by Acts 20:28. Here Paul states that the blood of Christ had purchased the flock, the church of God. The existence of the body of Christ could not be possible prior to His death and ascension. Chafer4 states:
"There could be no church in the world - constituted as she is and distinctive in all her features – until Christ's death; for her relation to that death is not a mere anticipation, but is based wholly on his finished work and she must be purified by His precious blood."
Asserted by the Necessity of the Baptism of the Holy Spirit.
The following is quoted from Radmacher, The Nature of the Church, pages 210, 211.
The chief argument for the beginning of the church on the day of Pentecost relates to the baptism of the Holy Spirit.
Van Oosterzee5 declares, "It dates from the first Christian Pentecost, and is in the full sense of the word a creation of the Holy Ghost." Brunner6 agrees: "the outpouring of the Holy Ghost and the existence of the Ekklesia are so closely connected that they may be actually identified."
In I Cor. 12:13 Paul explains that entrance into the body of Christ is dependent upon the baptism of the Holy Spirit. This event had not yet occurred in John 7:39.
Nash7 states that Acts 2:2 pinpoints the actual founding of the church when the Holy Spirit sat (kathidzo) upon each one of them. Thayer8 defines this term kathidzo as “to have fixed ones abode, i.e., to sojourn, settle, settle down.”
Since the church is the body of Christ (Col. 1:18, 24), the church could not have begun until Pentecost and it had to begin on that day.
The precise event which inaugurated the church was the advent of the Holy Spirit on the day of Pentecost at which time those persons who were tarrying in the upper chamber at Jerusalem waiting for the promise of the Father were baptized by the Holy Spirit and became members of the church.
Thiessen wrote in his Lectures in Systematic Theology (pgs. 409, 410) on the founding of the church, as both local and universal, the following:
"Paul expresses it (the church founding) succinctly when he says, "By one Spirit were we all baptized into one body, whether Jews or Greeks" (I Cor. 12:13). By the body he meant the church (vs. 28; Eph. 1:22, 23); and whether we translate the Greek preposition (eis) "into" or "unto" it is clear that the baptism of the Spirit makes the believers into the church. I Cor. 12:13 refers to the baptism as a past experience. Thus it is evident that the baptism of the Spirit occurred on the day of Pentecost and that the church was founded on that day."
Rebuttal to The Pentecostal Church Origin
In replying to the Pentecostal view of the founding of the Church it seems best to analyze the appropriate Scriptures in the order in which they occur. In the discussions of these passages, rebuttal is presented against the assertions and conclusions drawn by the Pentecost proponents.
Matthew 16:18, 19.
Matthew 16:18 "And I say also unto thee, That thou art Peter, and upon this rock I will build my church; and the gates of hell shall not prevail against it."
19 "And I will give unto thee the keys of the kingdom of heaven: and whatsoever thou shalt bind on earth shall be bound in heaven: and whatsoever thou shalt loose on earth shall be loosed in heaven."
The question of on whom the church is built is easily answered by examining the term rock. Here Jesus renamed Simon, calling him Peter. "Peter" is the Greek "Petros," meaning a piece of rock or a moveable rock. But the rock upon which the church is built is "Petra," a solid, massive, unmovable rock such as bedrock. Jesus did not say to Peter, "upon you I will build My church," but "upon this rock," indicating another foundation. Jesus is that other rock and foundation, and He built the church upon Himself. The use of the personal possessive pronoun makes clear the ownership of the church; it is His church exclusively.
The Future: Punctiliar or Durative?
The phrase “will build” indeed is in the future tense, indicating that from that time Christ would build His church. However, is this future work a punctiliar action (action as a point) or durative, linear action (action which is continuous or incomplete)? Here is an example of the future tense verb showing a continuous action: Matt. 13:43 “Then shall the righteous shine forth as the sun…..” Is this shining of the righteous only momentary (only once) or will they continue to shine once they begin to radiate? The point is that there are possibilities of future actions. Grammar alone may not necessarily determine the kind of action but the immediate context and scriptural harmony often does.
It is assumed by the Pentecostal position that the building of the church by Christ will be punctiliar, a one-time event, never to be repeated or advancing. As they see it, Christ only built one church (universal, invisible), and it had but one beginning. Thus their conclusion is that the "will build" in our text verse is a punctiliar action, once for all. They cannot or will not concede any on-going building processes of the church in this verse. By their subsequent statements they reveal their doctrinal position of the church consisting of all the saved, emphasizing I Cor. 12:13 (the baptism of the Holy Spirit) as their proof of this doctrine. Thus it becomes clear why, for their cause, the church of necessity had to have begun on the day of Pentecost. For to them, Pentecost is that act of the church being baptized by the Holy Spirit.
Our argument is not over the future tense of build, but over the significance of build. Build, oikodomeso, is appropriately used with such meanings as to "build up," "edify," "strengthen," "advance," and "enlarge." To adamantly deny the possibility of these meanings verges on prejudice, preconception, and closed-mindedness. These verbs are viable meanings of the word oikodomeo.10 The English "edify" and its forms are translated only from either the noun oikodome or the verb oikodomeo.
The argument for a Pentecostal church beginning often runs in a circle, thus:
(Assertion) - "Since Matt. 16:18 places the founding of the church in the future, then
(Conclusion) - it began on the day of Pentecost."
(Assertion) - "Since the church began on the day of Pentecost, then
(Conclusion) - Matt. 16:18 means a future founding of the church."
Bowman wrote that since Peter (I Peter 2:5) states the church is being built up after Pentecost, "it cannot be held that oikodomeo in Matthew 16:18 has the idea of enlargement"11. But this opinion of Bowman is inconclusive for this verse: I Peter 2:5 gives no proof of the applied meaning of oikodomeo in Matthew.
There is an important consequence in this word "build." Did Jesus build (create) His church and then end His involvement with her? Did He leave the founding and administration of the church to the Holy Spirit and contribute nothing to her? Or was He indicating His involvement in the affairs and well-being of His church throughout all centuries? What He said in Matt. 16:18 was a promise to His disciples, one that projected the involvement of both them and Himself in His church. Certainly Jesus is constantly and deeply involved in His churches; see Rev. chapters 2 and 3, where He repeated to all seven churches "I know thy works." The interpretation of build as to edify, build up, or strengthen can be clearly demonstrated, as in I Peter 2:5. The meaning to initiate, as opposed to edify, cannot be conclusively demonstrated, but inferred only. So, is it best to interpret by inference or by clear precedent?
Church: Ekklesia
It is on this point that the Pentecostals present conflicting double meanings. There is absolutely no precedent in the New Testament where the word ekklesia is convincingly used in any way other than its usual sense. The generic use of ekklesia is not proof of any secondary meaning of the word. It verges on absurdity to say that the church is an unassembled assembly. To assert that Jesus was speaking of an invisible, universal, never-assembled called-out assembly (ekklesia) would have made no sense to His disciples. Unless Christ explained this new meaning to them, they would have had a contradictory understanding of what He said. Not only would they have been confused about the nature of the church, but so would all those after them. Nowhere is it ever explained that ekklesia is now put to use with an entirely new meaning: not simply a new meaning, but a meaning in opposition, and contrary, to the very word used. We must allow only scripture, not the theology of men, to interpret scripture. Scripture never redefines ekklesia; neither has it presented two entities with the single designation of church!
Matthew 18:17 "And if he shall neglect to hear them, tell it unto the church: but if he neglect to hear the church, let him be unto thee as an heathen man and a publican."
18 "Verily I say unto you, Whatsoever ye shall bind on earth shall be bound in heaven: and whatsoever ye shall loose on earth shall be loosed in heaven."
In Matt. 18:17, Jesus instructs the disciples that if they cannot resolve a personal conflict, they should bring it before the church and have the church judge the matter. Here the church is specifically mentioned as existing before Pentecost. This admonition is not future but present.
Robertson (Word Pictures) wrote on this verse (vs. 17):
"The church (the ekklesia). The local body, not the general as in Mt 16:18. The problem here is whether Jesus has in mind an actual body of believers already in existence or is speaking prophetically of the local churches that would be organized later (as in Acts)."
This problem for Robertson (and all Pentecostals) is resolved simply by continuing to read the next verses in these two texts (Matt. 16:19 & Matt. 18:18). Observe carefully here. First, both churches in Matt. 16:18 and 18:17 are identical. Here is how we can tell. The phrase "Whatsoever ye shall bind on earth shall be bound in heaven: and whatsoever ye shall loose on earth shall be loosed in heaven" is found in both passages. In Matthew 16:19 the binding and loosing is in the context of the church being given the keys of the Kingdom of Heaven. Consistency in the usage of the church with identical authority on two different occasions makes them identical. Robertson and others see a local body in one passage and the general, universal body in the other passage. This cannot be, for by what means could a universal, invisible "body" ever bind or loose in Matt. 16? This binding and loosing involves judicial processes by the whole body and the consequential action taken by it. Only a local church can accomplish such work.
Oddly enough, Chafer never addresses Matt. 16:19 or Matt. 18:17, 18. Scofield places the keys and authority in Matt. 16 into the hands and power of individuals12. Scofield ignores the Matt. 18 church reference. Thiessen identifies Matt. 18 with the local church, and specifically identifies the subject to be the church administering church discipline13. Thiessen wrote (pg. 421):
Each church elected its own officers and delegates (Acts 1:23, 26; 6:1-6; 15:2, 3). Each church had the power to carry out its own church discipline (Matt. 18:17, 18; I Cor. 5:13; 2 Thess. 3:6, 14, 15). The church together with its officers rendered decisions (Acts 15:22), received delegates (Acts 15:4), sent out solicitors (2 Cor. 8:19), and missionaries (Acts 13:2, 3).
Two facts come to the forefront. First, the nature of the church is local. Second, the church existed before Pentecost, during the earthly life of Christ. The keys were not delivered into the hands of individuals to make heaven-binding decisions. Which of us would trust any man to determine matters of such magnitude? History has shown to us the horrors of corruption when men have claimed this power for themselves. The keys are placed in His local churches with the ability, by the democratic process, to meet, hear, deliberate, and render decisions.
The Day of Pentecost: Acts 2:1-4
What exactly happened on this day? What did the Holy Spirit baptize: the church, or an un-constituted group of redeemed individuals? Actually, neither, for the Holy Spirit baptized no one on that auspicious occasion. To say that the Holy Spirit did the baptizing means that He was the agent performing or administering baptism. This He did not do. The church was baptized, immersed, in the Holy Spirit. He was the element into which the church was baptized. Neither were they baptized by fire, but rather in fire. The sound from heaven filled the house; this sound was the physical manifestation of the presence of the Holy Spirit. He filled and shrouded the entire house which they occupied.
This agrees exactly with the prophecies given by John the Baptist. In all four gospels this is mentioned by John (Matt. 3:11; Mark 1:8; Luke 3:16; John 1:8). John never used "by" or "with," but "in" (Greek en) as the element in which the baptism would occur. This also agrees with the mainstay passage of I Cor. 12:13, where it should read "in one Spirit" and not "by one Spirit" we are all baptized.
Jesus promised His church that after His death He would send them the Holy Spirit, the Spirit of Truth. Once He came He was to be their Comforter (the one walking beside), their guide into all truth (John 14:16; 15:26; 16:13, 15), and the testifier of Christ. He was to reveal to them the things He receives from Jesus after His glorification. He was sent by both the Father and Christ (John 15:26).
When the Holy Spirit came, He perpetually took up His abode in the church, teaching them, grounding them, leading them in the affairs of the Kingdom, and inspiring individuals within her to pen the New Testament. His manifestation on the day of Pentecost was not only for the benefit of the church but also for those outside the church. It was the divine substantiating evidence that what the disciples were proclaiming to the world was true. As the glory of God had been with Israel, so now the glory of God abides in His church among men. This church, His house, is seen in the same fashion as the tabernacle of the Old Testament with the corroborating evidence of Moses' testimony: it is there that God abides with men. The miracles on that day and in subsequent days were also corroborating evidence that God was with the church; there, in that assembly, God is present. The temple was no longer the center of worship and service. The proof was given on that day. This power, this leadership, this confirmation from God did not inaugurate the church, but came upon the already existing church. These were the final stages in the fashioning of the church. It was already in existence, but was in a sense incomplete without these additions.
I Corinthians 12:13
This verse is the tip of an inverted pyramid of the doctrine of the universal invisible church. It would be expected that a doctrine so contrary to the plain sense of the words and context of the church would have a broad base from which it is built. But not so: this is the only conclusive text verse that the universalist puts forth. Yet, upon closer examination of I Cor. 12:13, it is found to rest on a mistranslation.
KJV I Corinthians 12:13 For by one Spirit are we all baptized into one body, whether we be Jews or Gentiles, whether we be bond or free; and have been all made to drink into one Spirit.
The second word of the King James translation is "by." This is incorrect; the word is the Greek en. En is the primary preposition "in"; it has no other meaning. This would make it read: "For in one Spirit..." However, for the sake of those who challenge this meaning in this passage and insist on "by," we investigate further.
Rotherham14 notes on this verse: "For Baptizein with en of element, see Matt 3:11; Luke 3:16; John 1:26, 31, 33; Acts 1:5; 11:16. In every case where en is used, it clearly means 'in' and not 'by.' Some say that because the word en is grammatically coupled with Spirit, which is in the dative case, it can be translated as 'by.' But this does not agree with the verses which also use the dative and are not translated 'by,' but rather as 'with' or 'in.' In Luke 3:16 John said, 'I indeed water (dative) baptize'; here water, in the dative case, demands the preposition 'in.' John is the baptizer and not the water. Consider Mark 1:8, 'I indeed have baptized you with (en) water: but he shall baptize you with (en) the Holy Ghost.' The juxtaposition of the two baptisms is to show the contrasts between the two; both relate to the persons doing the baptisms and the media which they use. In the first case it is John doing the baptizing in the medium of water. In the second case it is Christ doing the baptizing in the medium of the Holy Spirit. It is not the water baptizing, nor is it the Holy Spirit baptizing."
Note: the Corinthian church did not exist on the day of Pentecost, and there is no record that its members ever experienced the Pentecostal event. As a matter of fact, neither did Paul.
The Purchasing by the Blood
Acts 20:28 Take heed therefore unto yourselves, and to all the flock, over the which the Holy Ghost hath made you overseers, to feed the church of God, which he hath purchased with his own blood.
The following is a review of statements previously quoted as proof for the necessity of the church's existence only after Calvary.
"This proof is clearly substantiated by Acts 20:28. Here Paul states that the blood of Christ had purchased the Flock, the Church of God. The existence of the Body of Christ could not be possible prior to His death and ascension."
Chafer states: "There could be no Church in the world - constituted as she is and distinctive in all her features – until Christ’s death; for her relation to that death is not a mere anticipation, but is based wholly on his finished work and she must be purified by His precious blood.'"
This conjecture is based upon the supposition that the church could not exist in anticipation of the blood of Christ, but only after Calvary. If this supposition and conclusion are correct, then what shall be said of salvation? Both the church and God's redemption are made possible only by His blood. To presume that, prior to Christ's death, the church could not exist is to conclude salvation could not exist, for the same price is paid for both. Salvation certainly existed in anticipation of the blood of Christ. The logic is faulty.
The blood of Christ not only purchased the church, but also washes men from their sins (Rev 1:5), gives eternal redemption (I Peter 1:18,19), sanctifies men (Heb. 13:12), justifies men (Rom. 5:9), and reconciles men unto God (Rom. 5:10). Indeed, in order for all this to be done the blood sacrifice had to be accomplished. But the question is, could these things exist in anticipation of the blood? Could they have existed prior and then be consummated by His blood? To deny this is to deny salvation for all who died before the death of Christ. And yet, we know Abraham, Isaac, and Jacob were justified before God by their faith in the promises of God of a future Messiah (Matt. 8:11).
The salvation of God is in anticipation of His redemptive work. Rev.13:8 “And all that dwell upon the earth shall worship him, whose names are not written in the book of life of the Lamb slain from the foundation of the world.” Christ stood as slain from the time of the creation. This decision, this commitment, this provision for man’s sin was made before the need of it ever occurred.
Given that salvation existed in anticipation of the blood, how can it be asserted that the existence of the church cannot be in anticipation? What Christ purchased was the fruit of His labor among men: making them His disciples, assembling them, and teaching them. In truth, what He purchased was already in existence, just as He consummated a salvation which had already been granted unto the redeemed.
The time of the founding of the church gives evidence of what the nature of the church is and what it is not. Moreover, the time either establishes or denies the asserted right of men to create churches and then claim validity for these subsequent "churches." It either allows or denies congregations to justifiably call themselves the ground and pillar of the truth, the bride of Christ, and the house of God with the abiding presence of the Holy Spirit. Finally, it establishes who in this world has been given the divine authority for the ordinances and judgment in the affairs of the kingdom.
The manner in which the church was founded also defines the nature of the church. If it be supposed that the church began on the day of Pentecost, it still does not fit the pattern of a universal church. When the Holy Spirit came upon the church, it was a local visible body, which assembled in one room of one house. He did not come upon all the saved upon the earth. Without exception, those who believe in the universal church agree that the universal church is "the Real and True Church," and the Local church is merely an imitation of the real. But yet, there is no record or indication that anyone saved apart from that small group in Jerusalem had any knowledge of what happened that day. Were the "120" the only saved at that time? Did the Holy Spirit only manifest Himself in such power and great demonstration to a very small portion of all the saved, while the rest were without any such testimony or knowledge that they had just been constituted into the Lord's church? Could they have known who the others were that are now also joined together with them in the body of Christ and the house of God? If, on the other hand, the church existed during the earthly life of Christ, it is clearly a local visible assembly. Only those who persisted as Disciples of Christ and followed Him constituted His church. These disciples were well known to one another and fellowshipped together. His church is clearly visibly identified in the world. There is no unassembled assembly.
The church existing prior to Pentecost unmistakably makes it the property and creation of Christ and of no human agency. There is no institution apart from what Jesus built which may claim to be His church. When individual churches fell into apostasy, they lost their standing as His body. When men seeking reformation built their churches, those churches remained as they were founded: the work of man and not of Christ. The absurd notion that the church of Jesus Christ died out is contrary to the words of Christ: "The gates of hell shall not prevail against it." What Jesus built, He declared, would survive until the day of His coming, when she is caught up to be with Him. To teach otherwise is blasphemy against Christ's own words, whether intentional or unintentional.
While He was on earth, Jesus taught and led His church. They were grounded in the truth. But to keep that assurance they needed constant supervision and correction. This necessitated the continuous presence of the Holy Spirit in the church as the leader, teacher, and inspirer of the truth. With the Holy Spirit administering discipline to the church and to individuals within her, she is able to be presented to Christ as a chaste virgin, holy and without blemish. None of this is true of a mystical, invisible, unassembled assembly. Apart from the common salvation of all the redeemed, there is chaos in doctrine, practice, discipline, tolerance, and compromise. Throughout history redeemed men* have persecuted, even unto death and torture, both saved and lost people because of doctrinal issues. It is buffoonery to say all the saved are the ground and pillar of the truth, the church of the Living God.
*John Calvin is one such example. He is lauded as one of the great Christians of all time by a vast number of theologians. Investigate his history for the tortures and deaths he inflicted on those who disagreed and opposed him.
The ordinances of the church, Baptism and the Lord's Supper, were given to the church while Jesus lived. No other group of persons or individuals received the command or authority to observe and administer them. The "Great Commission" was given to the eleven disciples, who stood in place of the church. It was a limitless commission to be discharged throughout the world, among all nations, for all seasons. The context of this commission is to make disciples, baptizing them and teaching them the commands of Christ and obedience to them. Many organizations in the world today attempt to usurp this commission, but it cannot be done. Most churches today make disciples not of Christ but for some cause. Jesus never made disciples for any cause, but made disciples of Himself. His disciples are to follow the person, not peripheral issues of the person or the secular needs of people. Further, His church is composed of these disciples. His church is not about ideological agendas but is devoted to the service, worship, and glorification of the person, God. She follows Him as her Head, Lord, and Master. When this is lost, and a cause becomes "the leader" and devotion is given to the cause, it can no longer make men Disciples of Christ or teach them.
The Church Moving On From Pentecost
The importance of the Pentecostal event to the growth and welfare of the church cannot be overstated. The benefit is not only for the church but also for the entire world. It is thought by many that evangelism began with the church, but this is not true to history. Since the time the Old Testament Hebrew text was translated into Greek (the Septuagint), the Jews were very committed to "evangelism." The words of Christ are often overlooked when He said to the Scribes and Pharisees, "you compass sea and land to make one proselyte" (Matt. 23:15). They were very zealous in their "missionary" work. Unfortunately, their message of penance and "law" salvation kept men in bondage to sin and sealed their fate to hell.
The Gospel of Christ spread rapidly in the first few centuries, and with it came the phenomenal growth of the church. Churches quickly arose throughout the known world. Unfortunately, a number of churches apostatized just as quickly. The problems which Paul and John addressed reveal the pressure on the churches to revert to the law and to deny that Christ is Lord. Both moral and doctrinal failures became evident. The rebukes Jesus gave to His churches in Rev. 2 and 3 also exposed serious errors of men usurping His authority and their coldness toward Him. Because of the lack of vigilant discipline, heresies and immorality destroyed many churches. Worse yet, this gave birth to a new denomination of the church. In a little over four hundred years, churches began to persecute churches. This resulted in an often-repeated pattern in which the Lord's churches rejected and separated themselves from fallen churches.
These facts do not apply to the aggregate redeemed, but to individual assemblies of Christ. As churches took a stand for the truth they suffered for it. Many were driven into hiding in remote regions of the Roman Empire. Some, as in the case of the Waldenses, the Bogomils, churches in Spain and the Welsh, etc., survived for centuries with the same system of faith as the church of Jerusalem. Many others were persecuted out of existence. By the churches, and not individuals, were the doctrines and practices (repentance, salvation, baptism, the Lord’s Supper, discipline, faithfulness, purity, and love) of the New Testament held sacred and preserved. They safeguarded, taught, loved, and committed to memory the Word of God. The miracle of God’s Word is that it has survived. None of this could have been possible without the advent of the Holy Spirit upon the church on the day of Pentecost.
(1) Earl D. Radmacher, The Nature of the Church, Western Baptist Press, Portland, Oregon, 1972, pg. 205
(2) A. T. Robertson, Word Pictures in the New Testament, I, 132-33
(3) H. E. Bowman, thesis, The Doctrine of the Church in the North American Baptist Association, pg. 21
(4) Lewis S. Chafer, Systematic Theology, IV, 45
(5) J. J. Van Oosterzee, Christian Dogmatics, I, 295
(6) E. Brunner, The Misunderstanding of the Church, pg. 161
(7) C. A. Nash, The Book of Acts (unpublished)
(8) J. H. Thayer, A Greek-English Lexicon of the New Testament, pg. 314
(9) Rotherham, The Emphasized Bible, Kregel Publications
(10) Cremer, Biblico-Theological Lexicon of New Testament Greek, pgs. 448, 449
(11) H. E. Bowman, thesis, The Doctrine of the Church in the North American Baptist Association, pg. 21
(12) Scofield Reference Bible
(13) H. C. Thiessen, Lectures in Systematic Theology, pgs. 416, 421
(14) Rotherham, The Emphasized Bible, Kregel Publications
Herman Melville Biography
From early times, Herman Melville, like countless other lonely, contemplative, and misunderstood wanderers, was drawn to the sea. A reserved, bookish, skeptical man, he was never given to easy answers or orthodox religious beliefs. He was a striking figure — average in height, with a full, curling brown beard, cane, and ever-present Meerschaum pipe. His merry blue-green eyes and cheerful sociability brought him many friends and partners for games of whist.
He was a faithful letter writer and established a reputation as a mesmerizing teller of tales. He gave full range to his imagination, as demonstrated by his comment about the writing of Moby-Dick: "I have a sort of sea-feeling. My room seems a ship's cabin; and at nights when I wake up and hear the wind shrieking, I almost fancy there is too much sail on the house, and I had better go on the roof and rig in the chimney." Yet, as he grew older, he drew into himself, in part a reaction to personal troubles and literary anonymity.
Born August 1, 1819, on Pearl Street in New York City near the Battery, Melville was the third of eight children, four boys and four girls, and a descendant of respectable Scotch, Irish, and Dutch colonial settlers. He was the grandson of two American Revolutionary War leaders, one of whom participated in the Boston Tea Party. His father, Allan Melvill (as the name was originally spelled), a snobbish, shallow man, was an importer of French luxury items, including fine silks, hats, and gloves. He suffered a mental breakdown, caught pneumonia, and died broke in 1832, owing nearly $25,000 and leaving destitute his wife, Maria Gansevoort Melville. An aristocratic, imperious, unsympathetic woman, she moved in with her well-to-do parents, who helped educate and support her brood.
For two years, Melville attended the Albany Classical School, which specialized in preparing pupils for the business world. He displayed no particular scholarliness or literary promise, but he did join a literary and debate society, as well as submit letters to the editor of the Albany Microscope. From a boyhood of relative affluence, he underwent a rapid fall in social prominence as his family accustomed itself to genteel poverty. Ultimately, Melville and his brother Gansevoort had to drop out of school to help support the family.
Melville enrolled at Lansingburgh Academy in 1838 and, with ambitions of helping to construct the Erie Canal, studied engineering and surveying. He graduated the next year and worked briefly as a bank clerk, then as a salesman; he was a laborer on his Uncle Thomas' farm, clerked in his brother's fur and hat store, and also taught elementary school. During this period, he dabbled in writing and contributed articles to the local newspaper.
A Life at Sea
In his late teens, Melville's mother's worsening financial position and his inability to find suitable work forced him to leave home. In 1839, he signed on as a cabin boy of the packet St. Lawrence. His four-month voyage to Liverpool established his kinship with the sea. It also introduced him to the shabbier side of England, as well as of humanity, for the captain cheated him of his wages.
A deep reader of Shakespearean tragedies, French and American classics, and the Bible, Melville returned to New York and tried his hand as schoolmaster at Pittsfield and East Albany. Again disappointed in his quest for a life's work and stymied by a hopeless love triangle, he returned to the sea on January 3, 1841, on the whaler Acushnet's maiden voyage from New Bedford, Massachusetts, to the South Seas. This eighteen-month voyage served as the basis for Moby-Dick.
In July 1842, at Nukahiva in the Marquesas Islands, he and shipmate Richard Tobias ("Toby") Greene deserted ship to avoid intolerable conditions and a meager diet of hardtack and occasional fruit. They lived for a month under benign house arrest among the cannibalistic Typees. With his Polynesian mistress, Melville enjoyed a few carefree months as a beach bum. During this sojourn, he distanced himself from the Western world's philosophies as well as nineteenth-century faith in "progress."
Melville escaped the Typees aboard the Lucy Ann, an Australian whaler not much better than his former berth. He became embroiled in a mutiny, was jailed for a few weeks in a British prison, and deserted ship a second time in September 1842, at Papeete, Tahiti, along with the ship's doctor, Long Ghost. For a time, he worked as a field laborer and enjoyed the relaxed island lifestyle.
Leaving Tahiti, he sailed on the Charles and Henry, a whaler, off the shores of Japan, then on to Lahaina, Maui, and Honolulu, Hawaii. To earn his passage home, he worked as a store bookkeeper and a pinsetter in a bowling alley. He was so poor that he could not afford a peacoat to shield him from the cold gales of Cape Horn. In desperation, he fashioned a coat from white duck and earned for himself the nickname "White Jacket."
The events of the final leg of the journey tell much of the young man's spirit. At one point, he was in danger of a flogging for deserting his post until a brave seaman intervened. In a second episode, Captain Claret ordered him to shave his beard. When Melville bridled at the order, he was flogged and manacled. Crowning his last days at sea was an impromptu baptism when he fell from a yardarm into the water off the coast of Virginia.
The Literary Years
As an ordinary seaman on the man-of-war United States, Melville returned to Boston in October 1844, where he resumed civilian life. His imagination continued to seek refuge on the waves under a restless sky. In 1846, from his experience among the cannibals, he composed Typee: A Peep at Polynesian Life, the first of four amorphous autobiographical novels. The book opened the world of the South Seas to readers and went into its fifth printing that same year, yet earned only $2,000. Although expunged of erotic passages, his work met with negative criticism from religious editors who attacked another element — his description of the greed of missionaries to the South Pacific.
The favorable reaction of readers, on the other hand, encouraged Melville to produce more blends of personal experience and fiction: Omoo (1847), which is based on his adventures in Tahiti; Redburn (1849), which describes his first voyage to England; and White-Jacket (1850), a protest which led to an act of Congress banning the practice of flogging in the U.S. Navy. One of his fans, Robert Louis Stevenson, was so intrigued by these and other seagoing romances that he followed Melville's example and sailed to Samoa.
On August 4, 1847, Melville married Elizabeth "Lizzie" Shaw, daughter of Lemuel Shaw, chief justice of the commonwealth of Massachusetts, to whom Typee is dedicated. The Melvilles honeymooned in Canada and settled in New York on what is now Park Avenue South, where they spent the happiest years of their marriage and enjoyed intellectual company, including William Cullen Bryant, Richard Henry Dana, and Washington Irving. Their first child, Malcolm, was born in 1849. A second son, Stanwix, was born in 1851, followed by two daughters, Elizabeth in 1853 and Frances in 1855. In 1850, the Melville family moved to "Arrowhead," a large two-story frame house on a heavily wooded 160-acre farm near Pittsfield, Massachusetts. Among his New England peers, including Oliver Wendell Holmes, Ralph Waldo Emerson, and Maria Sedgwick, Melville established a reputation for honesty, courage, persistence, and seriousness of expression and purpose, and was, for a time, numbered among the Transcendentalists.
By the late 1840s, Melville, well established as a notable author of travel romance and a contributor of comic pieces to Yankee Doodle magazine, became known as "the man who had lived among the cannibals." However, the reaction to his experimentation with satire, symbol, and allegory in Mardi (1849) gave him a hint of the fickleness of literary fame. Victorian readers turned away from his cynical philosophy and dark moods in favor of more uplifting authors. Lizzie, who lacked her husband's philosophical bent, confessed that the book was unclear to her. After the reading public's rejection, he voiced his dilemma: "What I feel most moved to write, that is banned, — it will not pay. Yet, altogether, write the other way I cannot. So the product is a final hash, and all my books are botches."
On an outing in the Berkshire Mountains, Melville made a major literary contact. He met and formed a close relationship with his neighbor and mentor, Nathaniel Hawthorne, whom he had reviewed in an essay for Literary World. Their friendship, as recorded in Melville's letters, provided Melville with a sounding board and bulwark throughout his literary career. As a token of his warm feelings, he dedicated Moby-Dick (1851), his sixth and most challenging novel, to Hawthorne. As he expressed to his friend and editor, Evert Duyckinck, two years before composing Moby-Dick: "I love all men who dive. Any fish can swim near the surface, but it takes a great whale to go down stairs five miles or more." The sentiment reflects both the dedicatee and the author as well.
Melville attempted to support not only his own family but also his mother and sisters, who moved in with the Melvilles ostensibly to teach Lizzie how to keep house. In a letter to Hawthorne, Melville complains, "Dollars damn me." He owed Harper's for advances on his work. The financial strain, plus immobilizing attacks of rheumatism in his back, failing eyesight, sciatica, and the psychological stress of writing Moby-Dick, led to a nervous breakdown in 1856. The experience with Mardi had proved prophetic. Moby-Dick, now considered his major work and a milestone in American literature, suffered severe critical disfavor. He followed with contributions to Harper's and Putnam's magazines, which paid him five dollars per page, a handy source of supplemental income. He also published Pierre (1852), Israel Potter (1855), and The Piazza Tales (1856), the collection which contains both "Bartleby, the Scrivener" and "Benito Cereno." Yet, even with these masterworks, he never regained the readership he enjoyed with his first four novels.
Shunned by readers as uncouth, formless, irrelevant, verbose, and emotional, Moby-Dick was the nadir of his career. Alarmed by the author's physical and emotional collapse, his family summoned Dr. Oliver Wendell Holmes to attend him. They borrowed money from Lizzie's father to send Melville on a recuperative trek to Europe, North Africa, and the Middle East; however, his health remained tenuous.
Depressed, Melville traveled to San Francisco aboard a clipper ship captained by his youngest brother, Tom, lectured about the South Seas and his European travels, wrote poetry, and in vain sought a consulship in the Pacific, Italy, or Belgium to stabilize his failing finances. With deep-felt patriotism, he tried to join the Navy at the outbreak of the American Civil War but was turned down.
He moved to New York in 1863. Because the reading public refused his fiction, Melville began writing poems. The first collection, Battle Pieces (1866), delineates his view of war, particularly the American Civil War. With these poems, he supported abolitionism, yet wished no vengeance on the South for the economic system it inherited. The second work, Clarel (1876), an 18,000-line narrative poem published primarily at the expense of his uncle, Peter Gansevoort, evolved from the author's travels in Jerusalem and describes a young student's search for faith. A third, John Marr and Other Sailors (1888), and a fourth, Timoleon (1891), were privately published in small editions.
During this period, for four dollars a day Melville served at the Gansevoort Street wharf from 1866 to 1886 as deputy inspector of customs, a job he characterized as "a most inglorious one; indeed, worse than driving geese to water." The move was heralded by a carriage accident, which further diminished Melville's health. He grew more morose and inward after his son Malcolm shot himself in 1867, following a quarrel over Malcolm's late hours. His second son, Stanwix, went to sea in 1869, never established himself in a profession, and died of tuberculosis in a San Francisco hospital in February 1886.
Melville mellowed in his later years. A legacy to Lizzie enabled him to retire; he ceased scrabbling for a living. He took pleasure in his grandchildren, daily contact with the sea, and occasional visits to the Berkshires. When the New York Author's Club invited him to join, he declined. Virtually ignored by the literary world of his day, Melville made peace with the creative forces that tormented him by characterizing the ultimate confrontation between evil and innocence. He became more reclusive, more contemplative, as he composed his final manuscript, Billy Budd, a short novel about arbitrary justice, which he shaped slowly from 1888 to 1891, then completed five months before his death. He dedicated the novella to John J. "Jack" Chase, fellow sailor, lover of poetry, and father figure.
Without reestablishing himself in the literary community, Melville died on September 28, 1891. He was buried at Woodlawn Cemetery in the north Bronx; his obituary occupied only three lines in the New York Post. Billy Budd, the unfinished text which some critics classify as containing his most incisive characterization, remained unpublished until 1924. This work, along with his journals and letters, a few magazine sketches, and Raymond M. Weaver's biography, revived interest in Melville's writings in the 1920s. Melville's manuscripts are currently housed in the Harvard collection.
A Melville Timeline
1819 Herman Melville is born in New York City on August 1, the third child and second son of Allan and Maria Gansevoort Melvill.
1830 The Melvill family moves to Albany.
1832 Allan Melvill dies; Maria and her eight children remain in Albany, close to the Gansevoorts.
1838 Melville enrolls at Lansingburgh Academy to study engineering and surveying.
1839 Melville sails for Liverpool aboard the St. Lawrence and returns four months later.
1841 Melville sails from Fairhaven, Massachusetts, aboard the whaler Acushnet on January 3.
1842 Melville and Richard Tobias Greene jump ship in the Marquesas Islands in July. In August, he sails aboard the whaler Lucy Ann for Tahiti and is involved in a crew rebellion. In September, he jumps ship in Papeete, Tahiti.
1843 Melville does odd jobs in Honolulu before enlisting in the U.S. Navy aboard the frigate United States.
1844 Melville is discharged from the Navy in Boston in October.
1846 Melville publishes Typee.
1847 Melville publishes Omoo. He marries Elizabeth Shaw and settles in New York City.
1849 Melville publishes Mardi and Redburn. His son Malcolm is born. He journeys to Europe.
1850 Melville publishes White-Jacket. He purchases "Arrowhead," a farm outside Pittsfield, Massachusetts, and forms a friendship with his neighbor Nathaniel Hawthorne.
1851 Melville publishes The Whale, then reissues it under the title Moby-Dick. Melville's second son, Stanwix, is born.
1852 Melville publishes Pierre.
1853 Melville's first daughter, Elizabeth, is born.
Putnam's magazine publishes "Bartleby, the Scrivener" in two installments. Melville is paid $85.
1855 Melville publishes Israel Potter. Frances, his second daughter and last child, is born.
Putnam's magazine publishes Benito Cereno in three installments.
1856 Melville publishes The Piazza Tales, a collection of short stories including "Bartleby" and Benito Cereno. At the point of mental and physical collapse, Melville travels in Europe, Egypt, and the Holy Land.
1857 Melville's The Confidence-Man is published while he is out of the country. He launches a three-year stint as a lecturer.
1863 Melville sells Arrowhead and returns to New York City.
1866 Melville publishes Battle Pieces, the first of his poetic works, and accepts a job as customs inspector for the Port of New York.
1867 Malcolm dies of a self-inflicted pistol wound.
1869 Stanwix goes to sea.
1876 Melville publishes Clarel.
1886 Stanwix Melville dies of tuberculosis in San Francisco.
1888 Melville publishes John Marr and Other Sailors and begins writing Billy Budd on November 16.
1891 Melville publishes Timoleon, then completes the manuscript for Billy Budd on April 19 and dies on September 28.
1924 Raymond Weaver is instrumental in the publication of Billy Budd.
Parallel Literary and Historical Events
1793 Eli Whitney devises the cotton gin, which makes slavery more profitable.
1800 Free blacks of Philadelphia petition Congress to free slaves. A slave insurrection in Virginia is quelled and the perpetrator hanged.
1803 The Louisiana Purchase doubles the size of the United States.
Blacks set fire to New York City.
1807 Congress ends the importation of slaves.
1808 John Jacob Astor opens his American Fur Company.
1816 Byron publishes The Prisoner of Chillon.
1819 Slave smuggling becomes a lucrative trade.
1820 The Missouri Compromise keeps a balance between slave and free states.
Abolitionist pamphlets, speeches, and correspondence circulate widely.
1824 Byron dies while fighting for Greek independence.
1831 The New England Anti-Slavery Society is formed.
William Lloyd Garrison begins publishing The Liberator.
Nat Turner initiates a slave insurrection in Virginia. He and nineteen other blacks are hanged.
1833 William Lloyd Garrison helps found the American Anti-Slavery Society of Philadelphia.
1840 Abolitionists divide over the issue of an anti-slavery party.
Around 10,000 runaway slaves resettle in Ontario.
1849 Poe dies.
1850 Hawthorne publishes The Scarlet Letter.
Harper's magazine is established.
The Fugitive Slave Act increases activity by the Underground Railroad.
1851 Sojourner Truth addresses the Women's Rights Convention.
1852 Harriet Beecher Stowe publishes Uncle Tom's Cabin.
1853 Putnam's magazine is founded.
1854 Dickens publishes Hard Times.
1857 The Dred Scott decision maintains that slaves are property.
1860 Around 60,000 runaway slaves settle in Ontario.
1861-65 The American Civil War ends slavery.
[Image: Teak foliage and fruits]
Teak (Tectona grandis) is a tropical hardwood tree species in the family Lamiaceae. It is a large, deciduous tree that occurs in mixed hardwood forests. Tectona grandis has small, fragrant white flowers arranged in dense clusters (panicles) at the end of the branches. These flowers contain both types of reproductive organs (perfect flowers). The large, papery leaves of teak trees are often hairy on the lower surface. Teak wood has a leather-like smell when it is freshly milled and is particularly valued for its durability and water resistance. The wood is used for boat building, exterior construction, veneer, furniture, carving, turnings, and other small wood projects.
Tectona grandis is native to south and southeast Asia, mainly Bangladesh, India, Indonesia, Malaysia, Myanmar, Thailand and Sri Lanka, but is naturalised and cultivated in many countries in Africa and the Caribbean. Myanmar's teak forests account for nearly half of the world's naturally occurring teak. Molecular studies show that there are two centres of genetic origin of teak: one in India and the other in Myanmar and Laos.
Teak is a large deciduous tree up to 40 m (131 ft) tall with grey to greyish-brown branches, known for its high quality wood. Its leaves are ovate-elliptic to ovate, 15–45 cm (5.9–17.7 in) long by 8–23 cm (3.1–9.1 in) wide, and are held on robust petioles which are 2–4 cm (0.8–1.6 in) long. Leaf margins are entire.
Fragrant white flowers are borne on 25–40 cm (10–16 in) long by 30 cm (12 in) wide panicles from June to August. The corolla tube is 2.5–3 mm long with 2 mm wide obtuse lobes. Tectona grandis sets fruit from September to December; fruits are globose and 1.2–1.8 cm in diameter. Flowers are weakly protandrous in that the anthers precede the stigma in maturity and pollen is shed within a few hours of the flower opening. The flowers are primarily entomophilous (insect pollinated), but can occasionally be anemophilous (wind pollinated). A 1996 study found that in its native range in Thailand, the major pollinators were species in the bee genus Ceratina.
- Heartwood is yellowish and darkens as it ages, sometimes showing dark patches. Newly cut wood has a leather-like scent.
- Sapwood is whitish to pale yellowish brown. It can easily separate from heartwood.
- Wood texture is hard and ring porous.
- Density varies according to moisture content: at 15% moisture content it is 660 kg/m³.
Tectona grandis was first formally described by Carl Linnaeus the Younger in his 1782 work Supplementum Plantarum. In 1975, Harold Norman Moldenke published new descriptions of four forms of this species in the journal Phytologia. Moldenke described each form as varying slightly from the type specimen: T. grandis f. canescens is distinguished from the type material by being densely canescent, or covered in hairs, on the underside of the leaf, T. grandis f. pilosula is distinct from the type material in the varying morphology of the leaf veins, T. grandis f. punctata is only hairy on the larger veins on the underside of the leaf, and T. grandis f. tomentella is noted for its dense yellowish tomentose hairs on the lower surface of the leaf.
The English word teak comes via the Portuguese teca from Malayalam tekka (cognate with Tamil tekku, Telugu teku, and Kannada tegu). Central Province teak and Nagpur teak are named for those regions of India.
Distribution and habitat
Tectona grandis is one of three species in the genus Tectona. The other two species, T. hamiltoniana and T. philippinensis, are endemics with relatively small native distributions in Myanmar and the Philippines, respectively. Tectona grandis is native to India, Sri Lanka, Indonesia, Myanmar, northern Thailand, and northwestern Laos.
Tectona grandis is found in a variety of habitats and climatic conditions from arid areas with only 500 mm of rain per year to very moist forests with up to 5,000 mm of rain per year. Typically, though, the annual rainfall in areas where teak grows averages 1,250–1,650 mm with a 3–5 month dry season.
Teak's natural oils make it useful in exposed locations, and make the timber termite and pest resistant. Teak is durable even when not treated with oil or varnish. Timber cut from old teak trees was once believed to be more durable and harder than plantation grown teak. Studies have shown that plantation teak performs on par with old-growth teak in erosion rate, dimensional stability, warping, and surface checking, but is more susceptible to colour change from UV exposure.
The vast majority of commercially harvested teak is grown on teak plantations found in Indonesia and controlled by Perum Perhutani (a state owned forest enterprise) that manages the country's forests. The primary use of teak harvested in Indonesia is in the production of outdoor teak furniture for export. Nilambur in Kerala, India, is also a major producer of teak, and is home to the world's oldest teak plantation.
Teak consumption raises a number of environmental concerns, such as the disappearance of rare old-growth teak. However, its popularity has led to growth in sustainable plantation teak production throughout the seasonally dry tropics in forestry plantations. The Forest Stewardship Council offers certification of sustainably grown and harvested teak products. Propagation of teak via tissue culture for plantation purposes is commercially viable.
Teak plantations were widely established in Equatorial Africa during the Colonial era. These timber resources, as well as the oil reserves, are at the heart of the current (2014) South Sudanese conflict.
Much of the world's teak is exported by Indonesia and Myanmar. There is also a rapidly growing market for plantation-grown teak in Central America (Costa Rica) and South America. As the remaining natural teak forests are depleted, plantation production in Latin America is expected to rise.
Hyblaea puera, commonly known as the teak defoliator, is a moth native to southeast Asia. It is a teak pest whose caterpillars feed on teak and other tree species common in the region.
Teak's high oil content, high tensile strength and tight grain make it particularly suitable where weather resistance is desired. It is used in the manufacture of outdoor furniture and boat decks. It is also used for cutting boards, indoor flooring, countertops and as a veneer for indoor finishings. Although easily worked, it can cause severe blunting on edged tools because of the presence of silica in the wood. Over time teak can weather to a silvery-grey finish, especially when exposed to sunlight.
Teak is used extensively in India to make doors and window frames, furniture, and columns and beams in homes. It is resistant to termite attacks and damage caused by other insects. Mature teak fetches a very good price. It is grown extensively by forest departments of different states in forest areas.
Leaves of the teak wood tree are used in making Pellakai gatti (jackfruit dumpling), where batter is poured into a teak leaf and is steamed. This type of usage is found in the coastal district of Udupi in the Tulunadu region in South India. The leaves are also used in gudeg, a dish of young jackfruit made in Central Java, Indonesia, and give the dish its dark brown colour.
Teak is used as a food plant by the larvae of moths of the genus Endoclita including E. aroura, E. chalybeatus, E. damor, E. gmelina, E. malabaricus, E. sericeus and E. signifer and other Lepidoptera including the turnip moth.
Teak has been used as a boat-building material for over 2000 years (it was found in an archaeological dig in Berenice Panchrysos, a port on the Indian Roman trade route). In addition to relatively high strength, teak is also highly resistant to rot, fungi and mildew. The wood has a relatively low shrinkage ratio, which makes it excellent for applications where it undergoes periodic changes in moisture. Teak has the unusual property of being both an excellent structural timber for framing or planking, while at the same time being easily worked and finished, unlike some otherwise similar woods such as purpleheart. For this reason, it is also prized for the trim work on boat interiors. Due to the oily nature of the wood, care must be taken to properly prepare the wood before gluing.
When used on boats, teak is also very flexible in the finishes that may be applied. One option is to use no finish at all, in which case the wood will naturally weather to a pleasing silver grey. The wood may also be oiled with a finishing agent such as linseed or tung oil. This results in a pleasant, somewhat dull finish. Finally, teak may also be varnished for a deep, lustrous glow.
Teak is also used extensively in boat decks, as it is extremely durable and requires very little maintenance. The teak tends to wear into the softer 'summer' growth bands first, forming a natural 'non-slip' surface. Any sanding is therefore only damaging. Use of modern cleaning compounds, oils or preservatives will shorten the life of the teak, as the natural teak oil lies only a very small distance below the white surface. Wooden boat experts will only wash the teak with salt water, and re-caulk when needed. This cleans the deck, and prevents it from drying out and the wood shrinking. The salt helps it absorb and retain moisture, and prevents any mildew and algal growth. Over-maintenance, such as cleaning teak with harsh chemicals, can shorten its usable lifespan as decking.
Propagation
Teak is propagated mainly from seeds. Germination of the seeds involves pretreatment to remove dormancy arising from the thick pericarp. Pretreatment involves alternate wetting and drying of the seed. The seeds are soaked in water for 12 hours and then spread to dry in the sun for 12 hours. This is repeated for 10–14 days and then the seeds are sown in shallow germination beds of coarse peat covered by sand. The seeds then germinate after 15 to 30 days.
Clonal propagation of teak has been successfully done through grafting, rooted stem cuttings and micro propagation. While bud grafting on to seedling root stock has been the method used for establishing clonal seed orchards that enables assemblage of clones of the superior trees to encourage crossing, rooted stem cuttings and micro propagated plants are being increasingly used around the world for raising clonal plantations.
World's largest living teak tree
Myanmar's Ministry of Environmental Conservation and Forestry identified the world's two biggest living teak trees on 28 August 2017 in Homalin Township, Sagaing Region, Myanmar. The biggest, named Homemalynn 1, is 27.5 feet (8.4 m) in girth and 110 feet (34 m) tall. The second biggest, named Homemalynn 2, is 27 feet (8.2 m) in girth.
Previously, the world's biggest recorded teak tree was located within the Parambikulam Wildlife Sanctuary in the Palakkad District of Kerala in India, named Kannimara. The tree is approximately 47.5 metres (156 ft) tall.
In 2017, a tree was discovered in the Ottakallan area of Thundathil range of Malayattoor Forest Division in Kerala with a girth of 7.65 metres (25.1 ft) and a height of 40 metres (130 ft). A teak tree in Kappayam, Edamalayar, Kerala, which used to be considered the biggest, has a girth of only 7.23 metres.
The International Teak Information Network (Teaknet) supported by the Food and Agriculture Organization (FAO) regional office for Asia-Pacific, Bangkok, currently has its offices at the Kerala Forest Research Institute, Peechi, Thrissur, Kerala, India.
- "Tectona grandis L.f. — The Plant List". The Plant List.
- "GRIN Taxonomy for Plants - Tectona". United States Department of Agriculture. 5 October 2007. Retrieved 22 September 2013.
- William Feinberg. "Burmese Teak: Turning a new leaf". East By South East. Retrieved 20 September 2015.
- Verhaegen, D.; Fofana, Inza Jesus; Logossa, Zénor A; Ofori, Daniel (2010). "What is the genetic origin of teak (Tectona grandis L.) introduced in Africa and in Indonesia?" (PDF). Tree Genetics & Genomes. 6 (5): 717–733. doi:10.1007/s11295-010-0286-x. S2CID 11220716.
- Vaishnaw, Vivek; Mohammad, Naseer; Wali, Syed Arif; Kumar, Randhir; Tripathi, Shashi Bhushan; Negi, Madan Singh; Ansari, Shamim Akhtar (2015). "AFLP markers for analysis of genetic diversity and structure of teak (Tectona grandis) in India". Canadian Journal of Forest Research. 45 (3): 297–306. doi:10.1139/cjfr-2014-0279.
- Tectona grandis. Flora of China 17: 16. Accessed online: 17 December 2010.
- Tangmitcharoen, S. and J. N. Owens. 1996. Floral biology, pollination, pistil receptivity, and pollen tube growth of teak (Tectona grandis Linn f.). Annals of Botany, 79(3): 227–241. doi:10.1006/anbo.1996.0317
- Bryndum, K. and T. Hedegart. 1969. Pollination of teak (Tectona grandis Linn.f.). Silv. Genet. 18: 77–80.
- Hasluck, Paul N (1987). The Handyman's Guide: Essential Woodworking Tools and Techniques. New York: Skyhorse. pp. 174–5. ISBN 9781602391734.
- Porter, Brian (2001). Carpentry and Joinery. 1 (Third ed.). Butterworth. p. 54. ISBN 9781138168169.
- "Tectona grandis". International Plant Names Index (IPNI). Royal Botanic Gardens, Kew. Retrieved 17 December 2010.
- Moldenke, H. N. 1975. Notes on new and noteworthy plants. LXXVII. Phytologia, 31: 28.
- "teak - Origin and meaning of teak by Online Etymology Dictionary". Etymonline.
- "Trade and Marketing". Food and Agriculture Organisation of the United Nations. Retrieved 6 September 2015.
- Tewari, D. N. 1992. A Monograph on Teak (Tectona grandis Linn.f.). International Book Distributors.
- Kaosa-ard, A. 1981. Teak its natural distribution and related factors. Natural History Bulletin of the Siam Society, 29: 55–74.
- Williams, R. Sam; Miller, Regis (2001). "Characteristics of Ten Tropical Hardwoods from Certified Forests in Bolivia" (PDF). Wood and Fiber Science. 33 (4): 618–626.
- KRFI.org. "Teak Museum: Nilambur". Web Archive. Archived from the original on 28 September 2006. Retrieved 20 September 2015.
- "Teak - TimberPlus Blog". 7 July 2014.
- "Is all well in the teak forests of South Sudan? – By Aly Verjee". 14 March 2013.
- Central American Timber Fund. "Investing in Teak: The Market". Central American Timber Fund. Retrieved 20 September 2015.
- Herbison-Evans, Don (6 September 2007). "Hyblaea puera". University of Technology, Sydney. Archived from the original on 24 July 2008. Retrieved 12 March 2008.
- "Archived copy". Archived from the original on 22 February 2014. Retrieved 13 February 2014.CS1 maint: archived copy as title (link)
- "Teak: A Dwindling Natural Resource - Teak Hardwoods".
- Steven E. Sidebotham, Berenike and the Ancient Maritime Spice Route, Univ. of California Press, 2011.
- Yachting. February 2004. pp. 46–. ISSN 0043-9940.
- R. Bruce Hoadley (2000). Understanding Wood: A Craftsman'S Guide To Wood Technology – Chapter 6 pg.118. ISBN 9781561583584. Retrieved 14 October 2015.
- Elmer John Tangerman (1973). The Big Book of Whittling and Woodcarving. Courier Corporation. pp. 180–. ISBN 978-0-486-26171-3.
- "MotorBoating". Motor Boating (New York, N.Y. 2000): 38–. April 1912. ISSN 1531-2623.
- "Popular Mechanics". Hearst Magazines (March 1985): 125–. ISSN 0032-4558.
- The Woodenboat. J. J. Wilson. 2001.
- Peter H. Spectre (1995). Painting & Varnishing. WoodenBoat Books. pp. 15–. ISBN 978-0-937822-33-3.
- Kadambi, K. (1972). Silviculture and management of Teak. Bulletin 24 School of Forestry, Stephen F. Austin State University Nacogdoches, Texas
- B. Robertson (2002) Growing Teak in the Top End of the NT. Agnote. No. G26 PDF Archived 26 February 2009 at the Wayback Machine
- Azamal Husen. "Clonal Propagation of Teak (Tectona grandis Linn.f.)". LAP Lambert Academic Publishing. Retrieved 20 September 2015.
- Khin Su Wai (5 September 2017). "Sagaing Region may be home to world's largest teak tree". The Myanmar Times. Retrieved 26 September 2017.
- "Mother of all Teak trees near Malayattoor". The New Indian Express. Retrieved 25 October 2018.
- "Teaknet - Online Teak Resources and News - International Teak Information Network". www.teaknet.org. | <urn:uuid:dd2d656e-f66c-4308-96b9-786d130c312b> | CC-MAIN-2021-21 | http://wiki-offline.jakearchibald.com/wiki/Teak | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988955.89/warc/CC-MAIN-20210509032519-20210509062519-00416.warc.gz | en | 0.890695 | 4,098 | 3.515625 | 4 |
Mucus in the human respiratory system functions as the first line of defense against diverse noxious inhaled particles. It is a viscoelastic, gel-like complex substance containing water, salts, and a multitude of macromolecules. This gel-like character of mucus is due mainly to its high molecular weight components, the mucins (mucous glycoproteins). The quality and quantity of mucin production determine the physicochemical properties of mucus that are pivotal for efficient mucociliary clearance of pulmonary inflammatory cells, pathogenic microbes, cell debris, and inhaled particles. Hypersecretion or hyperproduction of sticky mucus, a specific pathological change in the normal quantity or quality of mucins, disrupts the physiological defense mechanisms of the respiratory system and provokes diverse respiratory pathologies, as exemplified by cystic fibrosis, bronchiectasis, asthma, and chronic bronchitis (Voynow and Rubin, 2009).
In order to regulate such abnormal production or secretion of airway mucins, development of a specific pharmacological agent controlling their gene expression, production, and secretion would be an ideal solution. Clinically, glucocorticoids have been reported to suppress the hyperproduction and/or hypersecretion of airway mucins. However, they have shown various adverse effects in the course of pharmacotherapy (Rogers, 2007; Sprenger et al.).
According to the literature, kaempferol (Fig. 1), 3,4′,5,7-tetrahydroxyflavone, is a flavonol, a secondary metabolite found in various edible plants (Devi et al.).
However, to our knowledge, there is no report on the potential effect of kaempferol on mucin production and mucin gene expression provoked by phorbol ester or epidermal growth factor in airway epithelial cells. Among the many subtypes of human mucins, the MUC5AC subtype constitutes the major mucin of the human airway (Rogers and Barnes, 2006; Voynow and Rubin, 2009). Therefore, we investigated the effect of kaempferol on phorbol 12-myristate 13-acetate (PMA)- or epidermal growth factor (EGF)-induced MUC5AC mucin production and gene expression in NCI-H292 cells. NCI-H292 cells, a human pulmonary mucoepidermoid cell line, are frequently used for specifying the signaling pathways involved in airway mucin production and gene expression (Li et al.).
All the chemicals including kaempferol (purity: 95.0%) used in this experiment were purchased from Sigma (St. Louis, MO, USA) unless otherwise stated. Anti-NF-κB p65 (sc-8008), anti-specificity protein-1 (Sp1) (sc-17824), anti-inhibitory kappa Bα (IκBα) (sc-371), and anti-β-actin (sc-8432) antibodies were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Anti-nuclear matrix protein p84 (ab-487) antibody was purchased from abcam (Cambridge, MA, USA). Anti-phospho-EGFR (Y1068), phospho-specific anti-IκBα (serine 32/36, #9246), anti-EGFR, anti-phospho-IKKα/β (Ser176/180, #2687), anti-MEK1/2, anti-phospho-mitogen-activated protein kinase (MAPK)/extracellular signal-regulated kinase (ERK) kinase (MEK) 1/2 (S221), anti-phospho-p38 MAPK (T180/Y182), anti-p38 MAPK, anti-phospho-p44/42 MAPK (T202/Y204), and anti-p44/42 MAPK antibodies were purchased from Cell Signaling Technology Inc (Danvers, MA, USA). Either Goat Anti-rabbit IgG (#401315) or Goat Anti-mouse IgG (#401215) was used as the secondary antibody and purchased from Calbiochem (Carlsbad, CA, USA).
NCI-H292 cells were purchased from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were cultured in RPMI 1640 supplemented with 10% fetal bovine serum (FBS) in the presence of penicillin (100 units/mL), streptomycin (100 μg/mL) and HEPES (25 mM) at 37°C in a humidified, 5% CO2/95% air, water-jacketed incubator. For serum deprivation, confluent cells were washed twice with phosphate-buffered saline (PBS) and recultured in RPMI 1640 with 0.2% fetal bovine serum for 24 h.
After 24 h of serum deprivation, cells were pretreated with varying concentrations of kaempferol for 30 min and then treated with EGF (25 ng/mL) or PMA (10 ng/mL) for 24 h in serum-free RPMI 1640. Kaempferol was dissolved in dimethyl sulfoxide and added to the culture medium (final dimethyl sulfoxide concentration, 0.5%). The final pH values of these solutions were between 7.0 and 7.4. Culture medium and 0.5% dimethyl sulfoxide did not affect mucin gene expression, mucin production, or the expression and activity of molecules involved in the NF-κB or EGFR signaling pathways in NCI-H292 cells. After 24 h, cells were lysed with buffer solution containing 20 mM Tris, 0.5% NP-40, 250 mM NaCl, 3 mM EDTA, 3 mM EGTA and protease inhibitor cocktail (Roche Diagnostics, IN, USA) and collected to measure the production of MUC5AC protein (in a 24-well culture plate). The total RNA was extracted in order to measure the expression of MUC5AC gene (in a 6-well culture plate) using RT-PCR. For the western blot analysis, cells were treated with kaempferol for 24 h and then with PMA or EGF for the indicated periods.
MUC5AC airway mucin production was measured using ELISA. Cell lysates were prepared with PBS at 1:10 dilution, and 100 μL of each sample was incubated at 42°C in a 96-well plate, until dry. Plates were washed three times with PBS and blocked with 2% bovine serum albumin (BSA) (fraction V) for 1 h at room temperature. Plates were washed another three times with PBS and then incubated with 100 μL of 45M1, a mouse monoclonal MUC5AC antibody (1:200) (NeoMarkers, CA, USA), which was diluted with PBS containing 0.05% Tween 20, and dispensed into each well. After 1 h, the wells were washed three times with PBS, and 100 μL of horseradish peroxidase-goat anti-mouse IgG conjugate (1:3,000) was dispensed into each well. After 1 h, plates were washed three times with PBS. Color reaction was developed with 3,3’,5,5’-tetramethylbenzidine (TMB) peroxide solution and stopped with 1 N H2SO4. Absorbance was read at 450 nm.
Total RNA was isolated by using Easy-BLUE Extraction Kit (INTRON Biotechnology, Inc., Gyeonggi, Korea) and reverse transcribed by using AccuPower RT Premix (BIONEER Corporation, Daejeon, Korea) according to the manufacturer's instructions. Two μg of total RNA was primed with 1 μg of oligo (dT) in a final volume of 50 μL (RT reaction). Two μL of RT reaction product was PCR-amplified in a 25 μL reaction volume by using Thermorprime Plus DNA Polymerase (ABgene, Rochester, NY, USA). Primers for MUC5AC were (forward) 5′-TGA TCA TCC AGC AGG GCT-3′ and (reverse) 5′-CCG AGC TCA GAG GAC ATA TGG G-3′. Primers for Rig/S15 rRNA, which encodes a small ribosomal subunit protein and is a constitutively expressed housekeeping gene, were used as quantitative controls. Primers for Rig/S15 were (forward) 5′-TTC CGC AAG TTC ACC TAC C-3′ and (reverse) 5′-CGG GCC GGC CAT GCT TTA CG-3′. The PCR mixture was denatured at 94°C for 2 min followed by 40 cycles at 94°C for 30 s, 60°C for 30 s and 72°C for 45 s. After PCR, 5 μL of PCR products were subjected to 1% agarose gel electrophoresis and visualized with ethidium bromide under a transilluminator.
NCI-H292 cells (confluent in 150 mm culture dish) were pretreated for 24 h at 37°C with 1, 5, 10 or 20 μM of kaempferol, and then stimulated with PMA (50 ng/mL) for 30 min, in serum-free RPMI 1640. Also, the cells were pretreated with 1, 5, 10 or 20 μM of kaempferol for 15 min or 24 h and treated with EGF (25 ng/mL) for 24 h or the indicated periods. After the treatment of the cells with kaempferol, media were aspirated, and the cells washed with cold PBS. The cells were collected by scraping and were centrifuged at 3,000 rpm for 5 min. The supernatant was discarded. The cells were mixed with RIPA buffer (25 mM Tris-HCl pH 7.6, 150 mM NaCl, 1% NP-40, 1% sodium deoxycholate, 0.1% SDS) for 30 min with continuous agitation. The lysate was centrifuged in a microcentrifuge at 14,000 rpm for 15 min at 4°C. The supernatant was either used, or was immediately stored at −80°C. Protein content in extract was determined by Bradford method.
After the treatment with kaempferol as outlined, the cells were harvested using Trypsin-EDTA solution and then centrifuged in a microcentrifuge (1,200 rpm, 3 min, 4°C). The supernatant was discarded, and the cell pellet was washed by suspending in PBS. The cytoplasmic and nuclear protein fractions were extracted using NE-PER® nuclear and cytoplasmic extraction reagent (Thermo-Pierce Scientific, Waltham, MA, USA) according to the manufacturer’s instructions. Both extracts were stored at −20°C. Protein content in extracts was determined by Bradford method.
Cytosolic, nuclear, and whole cell extracts containing proteins (each 50 μg as proteins) were subjected to 10% sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and then transferred onto the polyvinylidene difluoride (PVDF) membrane. The blots were blocked using 5% skim milk and probed with appropriate primary antibody in blocking buffer overnight at 4°C. The membrane was washed with PBS and then probed with the secondary antibody conjugated with horseradish peroxidase. Immunoreactive bands were detected by an enhanced chemiluminescence kit (Pierce ECL western blotting substrate, Thermo-Pierce Scientific).
The means of individual groups were converted to percent control and expressed as mean ± SEM. The difference between groups was assessed using a one-way ANOVA and the Holm-Sidak test post-hoc. A p value of less than 0.05 was considered statistically significant.
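For illustration, the analysis described above can be sketched in a few lines of Python; the software used in the study is not named here, and all values, group names, and the use of pairwise t tests with a Holm-Sidak p-value adjustment (as an approximation of the post-hoc procedure) are illustrative assumptions, not the study's data or exact computation.

```python
# Minimal sketch: percent-of-control conversion, one-way ANOVA, then
# pairwise comparisons vs. control corrected by the Holm-Sidak procedure.
# All numbers and group names below are illustrative, not study data.
import numpy as np
from scipy.stats import f_oneway, ttest_ind
from statsmodels.stats.multitest import multipletests

raw = {
    "control":                np.array([98.0, 104.0, 101.0, 97.0]),
    "PMA":                    np.array([255.0, 260.0, 249.0, 264.0]),
    "PMA + kaempferol 10 uM": np.array([138.0, 131.0, 140.0, 129.0]),
}

# Express every observation as percent of the control mean (control = 100%).
ctrl_mean = raw["control"].mean()
pct = {name: values / ctrl_mean * 100 for name, values in raw.items()}

# One-way ANOVA across all groups.
F, p = f_oneway(*pct.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.4g}")

# Pairwise comparisons against control, Holm-Sidak adjusted.
treatments = [name for name in pct if name != "control"]
raw_p = [ttest_ind(pct[name], pct["control"]).pvalue for name in treatments]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
for name, q, r in zip(treatments, adj_p, reject):
    print(f"{name}: adjusted p = {q:.4g}, significant = {r}")
```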
Kaempferol inhibited PMA- or EGF-induced MUC5AC mucin gene expression in NCI-H292 cells (Fig. 2A, 2B). Also, kaempferol significantly inhibited PMA- or EGF-induced MUC5AC production in NCI-H292 cells. The amounts of mucin in the cells of cultures were 100 ± 8% (control), 257 ± 9% (10 ng/mL of PMA alone), 232 ± 11% (PMA plus kaempferol 1 μM), 180 ± 6% (PMA plus kaempferol 5 μM), 135 ± 5% (PMA plus kaempferol 10 μM) and 98 ± 4% (PMA plus kaempferol 20 μM), respectively (Fig. 3A). The amounts of mucin in the cells of cultures were 100 ± 5% (control), 218 ± 9% (25 ng/mL of EGF alone), 206 ± 6% (EGF plus kaempferol 1 μM), 163 ± 3% (EGF plus kaempferol 5 μM), 121 ± 4% (EGF plus kaempferol 10 μM) and 105 ± 5% (EGF plus kaempferol 20 μM), respectively (Fig. 3B). Cell viability was checked using the sulforhodamine B (SRB) assay, and there was no cytotoxic effect of kaempferol at 1, 5, 10, and 20 μM (data not shown).
For NF-κB to be activated, PMA provokes the phosphorylation of IKK, and this phosphorylated IKK, in turn, phosphorylates IκBα. The phosphorylated IκBα dissociates from NF-κB and is degraded. Thus, we checked whether kaempferol affects the PMA-provoked phosphorylation and degradation of IκBα. As can be seen in Fig. 4, kaempferol mitigated PMA-stimulated phosphorylation of IκBα. PMA also provoked the degradation of IκBα, whereas kaempferol inhibited this degradation.
Activated NF-κB translocates from the cytosol to the nucleus and then binds to specific sites on DNA. This DNA/NF-κB complex recruits RNA polymerase, and the resulting mRNA is translated into specific proteins, including MUC5AC mucins. Also, the transcriptional activity of NF-κB p65 is known to depend on its phosphorylation. As can be seen in Fig. 5, PMA stimulated the phosphorylation of p65, whereas kaempferol suppressed its phosphorylation. Finally, kaempferol blocked the PMA-provoked nuclear translocation of NF-κB p65.
The EGFR signaling pathway is known to be one of the major regulatory mechanisms of MUC5AC mucin production. As can be seen in Fig. 6, EGF (25 ng/mL, 24 h) stimulated the expression and phosphorylation of EGFR. Kaempferol inhibited EGF-stimulated expression and phosphorylation of EGFR, as shown by western blot analysis. Also, EGF stimulated the phosphorylation of MEK1/2, whereas kaempferol suppressed MEK1/2 phosphorylation in NCI-H292 cells.
EGF stimulated the phosphorylation of p38 and p44/42 (ERK1/2) MAPK, whereas kaempferol suppressed their phosphorylation (Fig. 7), as shown by western blot analysis. Lastly, EGF stimulated the nuclear expression of Sp1, a transcription factor that provokes MUC5AC mucin gene expression, in NCI-H292 cells. Kaempferol suppressed the nuclear expression of Sp1 (Fig. 7). This, in turn, led to down-regulation of MUC5AC mucin protein production in NCI-H292 cells.
At present, glucocorticoids, N-acetyl L-cysteine (NAC), 2-mercaptoethane sulfonate sodium (MESNA), letocysteine, ambroxol, bromhexine, azithromycin, dornase alfa, glyceryl guaiacolate, hypertonic saline solution, myrtol, erdosteine, mannitol, sobrerol, S-carboxymethyl cysteine, and thymosin β-4 are utilized for the pharmacotherapy of respiratory diseases manifesting airway mucus hypersecretion. However, these agents have failed to show remarkable clinical efficacy in controlling such diseases and have provoked various side effects (Li et al.).
To control diverse inflammatory pulmonary diseases effectively, regulation of the inflammatory response can be the first goal. Our results demonstrated that kaempferol, an anti-inflammatory natural product, suppressed the production of MUC5AC mucin protein and the expression of MUC5AC mucin gene induced by PMA or EGF (Fig. 2, 3). These results suggest that kaempferol can regulate the production and gene expression of mucin by directly acting on airway epithelial cells. As aforementioned in the Introduction, Kwon
Several studies revealed that MUC5AC mucin gene expression and production can be increased by inflammatory mediators that activate transcription factors including NF-κB (Fujisawa et al.).
On the other hand, EGF provokes the EGFR signaling pathway and MUC5AC mucin gene expression and production in NCI-H292 cells, and EGFR has been reported to be up-regulated in asthmatic airways (Burgel et al.).
We found that EGFR is constitutively expressed in NCI-H292 cells and that kaempferol inhibited EGF-stimulated expression of EGFR (Fig. 6). Wetzker and Bohmer (2003) reported that EGF induced the protein tyrosine kinase activity of EGFR and activated the MAPK cascade, including p38 MAPK and p44/42 MAPK. Also, inhibition of p38 MAPK and p44/42 MAPK activity was reported to suppress EGF-induced MUC5AC gene expression (Mata et al.).
In summary, the inhibitory activity of kaempferol on airway mucin gene expression and production might be mediated by regulating PMA-induced degradation of IκBα and nuclear translocation of NF-κB p65 and/or by affecting the EGF-induced EGFR-MEK-MAPK-Sp1 signaling cascade. These results suggest the potential of kaempferol as an efficacious mucoactive agent for inflammatory respiratory diseases. In further studies, it will be essential to modify the structure of kaempferol so that the optimal compound shows the best controlling effect on mucus secretion and/or production.
This research was supported by NRF-2014R1A6A1029617 and NRF-2017R1C1B1005126, Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education.
The authors have declared that there is no conflict of interest.
The southeastern United States contains a multi-billion dollar pine timber industry that could be affected significantly by the establishment and spread of Sirex noctilio F. (Hymenoptera: Siricidae), the non-native wood wasp that was discovered recently in the northeastern United States. Several factors, including the timing of native wood wasp (S. nigricornis F.) emergence and flight, may influence the success of S. noctilio. Understanding the seasonal phenology of the native wood wasp will allow us to make better predictions regarding the potential ecological impacts of S. noctilio on Arkansas pine forests. Native S. nigricornis females were collected across 3 geographic regions of Arkansas from 2009 to 2013 using intercept panel traps baited with Sirex lure (70/30-α/β-pinene blend) and ethanol. A Gompertz 3-parameter model was fitted to each year of trapping as well as to a final dataset containing all trapping seasons. Emergence rates and inflection points of the models did not differ among geographic regions within any year but did change significantly among years. In regions where both wood wasps occur, native wood wasps emerge in the fall and attack dead pines, whereas S. noctilio emerges much earlier and attacks live, standing trees. We expect these patterns to remain very similar upon the spread of S. noctilio into the Southeast. Therefore, we do not expect these 2 species to utilize the same hosts or emerge at the same time in Arkansas, which makes native species displacement unlikely. Additional studies examining the effects of predators, parasitoids, and competitors on Sirex population dynamics would greatly enhance these predictions.
Sirex nigricornis F. (Hymenoptera: Siricidae) is found throughout the eastern United States and Canada and is the most commonly encountered pine-inhabiting wood wasp native to Arkansas (Schiff et al. 2006). Adults are univoltine, emerging from early Oct through the end of Dec, and their regional abundance varies greatly (Barnes 2012; Hartshorn 2012; Keeler 2012; Lynn-Miller 2012). In their respective native ranges, Sirex wood wasps rarely attack healthy vigorous trees and often develop in trees that previously were stressed or injured (Hall 1968; Coyle et al. 2012).
In 2004, an invasive European wood wasp, Sirex noctilio F. (Hymenoptera: Siricidae), was detected in upstate New York (Hoebeke et al. 2005). This species has been introduced accidentally into several countries in the Southern Hemisphere, where pines are not native, and frequently has caused widespread tree mortality by tunneling through the xylem and interfering with nutrient translocation (Rawlings & Wilson 1949; Ciesla 2003). This was in part because of poor silvicultural and management practices, but also because the complex of insects, fungi, and nematodes that commonly interact with Sirex species in their respective native ranges was absent at the time of introduction. Dinkins (2011) reported that S. noctilio will oviposit and complete development in pines native to the southeastern U.S. This creates concern that S. noctilio will establish in, and become a pest of, pine timber stands in Arkansas and surrounding states, especially in areas with poor silvicultural practices (Chase et al. 2014). Potential threats resulting from the establishment of the non-native species in the Southeast include ecological and economic damage to both commercial and unmanaged pine forests, and the possible displacement of native wood wasps (Gandhi & Herms 2010; Ryan et al. 2012). However, we lack information on native wood wasp ecology that will enable us to better predict interactions among these species.
Some studies (e.g., Zylstra et al. 2010) have described seasonal flight patterns of S. noctilio but have not examined yearly variation in these patterns. Herein, we analyzed standardized trap captures of S. nigricornis among regions and over several years, and compared the emergence patterns in relation to possible interactions between the 2 wood wasp species. Our objectives were to 1) use S. nigricornis trap catch data, collected from 2009 to 2013 across 3 geographic regions of Arkansas to describe and model the onset, duration, and patterns of adult female flight, and 2) qualitatively compare this information to existing phenological studies of S. noctilio to evaluate the potential for interactions among the native and invasive wood wasps, as well as other pine-inhabiting insects.
Materials and Methods
Three geographic locations in Arkansas were chosen to represent forests of varying topography, climate, and stand composition (Ozark National Forest, Ouachita National Forest, and Gulf Coastal Plains). Sites were on both public and private land, and areas with obvious damage (e.g., from tornadoes, ice storms) were included to ensure capture of wood wasps as we anticipated that these sites comprised favorable habitat. We identified stands that contained mostly shortleaf (Pinus echinata Mill.) and/or loblolly pine (P. taeda L.) (Pinales: Pinaceae). Matching these criteria meant that regions in which we trapped changed from year to year (Fig. 1). Each site in 2009 (7 sites), 2010 (14 sites), and 2011 (10 sites) contained 3 traps. Twelve sites in 2012 and 8 sites in 2013 each contained 2 traps.
Descriptions of Ozark and Ouachita National Forests follow the ecosubregion descriptions by McNab & Avers (1994). Diameter at breast height (DBH) values reported here were measured once at each site during Sirex trapping. Sites in the Ozark and Ouachita National Forests were dominated by oak, hickory, and pine species. In the Ozarks, elevation ranged from 200 to 793 m. Pine DBH averaged 30 cm with a range of 19 to 40 cm. In the Ouachitas, elevation ranged from 100 to 793 m. Pine DBH in the Ouachita sites averaged 25 cm with a range of 22 to 31 cm. The Gulf Coastal Plains sites were characterized as a southern floodplain consisting of mainly oak-hickory forests with intensively managed loblolly pine stands intermixed. Elevation was much lower (0 to 91 m) relative to both the Ozarks and Ouachitas. DBH of pines in the Gulf Coastal Plains sites averaged 21 cm with a range of 17 to 26 cm (Keeler 2012; Lynn-Miller 2012).
Intercept™ panel traps (APTIV Inc., Portland, Oregon, USA) baited with Sirex lure (70% α-pinene, 30% β-pinene; 2.0 g/d release rate) and ultra-high release (UHR) ethanol (0.70 g/d release rate) (Synergy Semiochemicals Corp., Burnaby, British Columbia, Canada) were used for all years. In 2009, traps were erected in mid-Oct and adult flight was missed partially. Traps in 2010 to 2013 were therefore erected in late-Sep to detect the first emergence of females. Traps in all years were collected until at least 2 collections contained no Sirex (late Dec). Lures were replaced halfway through each trapping season (mid-Nov). Traps were hung from a 19 mm diameter (¾ inch) steel electrical conduit that was bent using a conduit bender to form an inverted L-shape. A hole was drilled in the top bent portion of the bar from which traps were hung with wire. For all years except 2011, traps terminated in a collection cup that contained propylene glycol (Super Tech RV and Marine®) for the capture and preservation of insects. To avoid damage from black bears, poles were raised using additional conduit until collection cups were approximately 2.5 m above the ground (Barnes 2012; Coyle et al. 2012).
In 2011 only, a second collection method was used in the Ozark National Forest to collect live insects for additional laboratory experiments. Because for that year only the Ozarks were represented, regional differences could not be tested. The trap type and lures were as above, APTIV InterceptTM panel traps baited with Sirex lure and UHR ethanol, with the addition of ipsenol, ipsdienol, and lanierone (Synergy Semiochemicals Corp., Burnaby, British Columbia, Canada) to increase the diversity of wood-boring insects captured. Instead of a collection cup filled with propylene glycol, a 125 L (33 gallon) Rubbermaid® trash can was attached to the base of each panel trap to create a modified live-trapping system (Lynn-Miller 2012). All traps were placed approximately 50 m from each other. All trap contents were collected every 7 to 14 d and held in cold storage until processing. Wood wasp females were counted, used for laboratory rearing studies, and stored in vials containing 95% ethanol after natural death.
JMP Pro 11 (SAS Institute, Cary, North Carolina, USA) was used for all statistical analyses. The number of traps varied among sites and years, and trap-catch densities varied greatly even within individual sites. Additionally, successive trap collection periods were not always the same number of days. Therefore, counts of trapped S. nigricornis were standardized by converting them to relative proportions (i.e., each collection-day wood wasp count was divided by the total number of wood wasps captured in that trap, at that site, in that year). Relative proportions were then summed to produce cumulative proportion distributions that measured rate of capture irrespective of absolute density and were not biased by variations in time intervals between successive collection periods (Stephen & Dahlsten 1976). We summed these proportions over the entire trapping season to obtain a standardized scale of wood wasp emergence from zero to one over time (expressed as Julian date). When no significant differences were detected among geographic regions, cumulative proportions were calculated for each year, combining all regions.
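For readers who wish to reproduce this standardization outside JMP, the short Python sketch below illustrates the calculation; all column names and counts are illustrative, not study data.

```python
# Minimal sketch of the trap-count standardization described above.
# All column names and counts below are illustrative, not study data.
import pandas as pd

records = pd.DataFrame({
    "year":        [2010] * 6,
    "site":        ["A", "A", "A", "B", "B", "B"],
    "trap":        [1, 1, 1, 2, 2, 2],
    "julian_date": [280, 295, 310, 280, 295, 310],
    "count":       [2, 5, 3, 1, 4, 6],
})

# Divide each collection-day count by that trap's season total, putting
# every trap on the same 0-1 scale regardless of absolute catch density.
# (Traps that caught nothing over the season would be excluded first.)
totals = records.groupby(["year", "site", "trap"])["count"].transform("sum")
records["proportion"] = records["count"] / totals

# Summing the proportions over the season (within each trap) yields the
# cumulative emergence curve to which the Gompertz models are fitted.
records = records.sort_values("julian_date")
records["cumulative"] = (
    records.groupby(["year", "site", "trap"])["proportion"].cumsum()
)
print(records.sort_values(["site", "trap", "julian_date"]))
```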
Table 1. Parameter estimates of Gompertz 3-parameter models for each individual year of Sirex nigricornis trapping from 2009 to 2013 in Arkansas, USA.
Several model types were first analyzed for all years of trapping. A Gompertz 3-parameter model provided the best fit to each year of data: y = a·exp(−exp(−b(x − c))), where y is the cumulative proportion of females captured, x is the Julian date, a is the asymptote, b is the capture rate, and c is the inflection point (Table 1). Finally, all datasets were combined and a Gompertz 3-parameter model was fitted to these data, grouped by year. A test for parallelism was performed to test for significant differences among years. All significance reported is based on α = 0.05. Per-trap means were not included due to high variation within sites. The inflection point is the Julian date at which the concavity of the model changes and the capture rate begins to slow. Because the measured variable was cumulative proportion, the asymptote for each model was always close to one. Capture rate can be interpreted as the proportion of wood wasps captured per day during each respective trapping season and cannot be extrapolated to the entire calendar year or to other trapping seasons of different lengths.
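An equivalent fit can be obtained outside JMP with standard nonlinear least squares; the sketch below uses SciPy with the parameterization given above (a = asymptote, b = capture rate, c = inflection point). The data points are illustrative, not the study's trap catches, and JMP's internal parameterization may differ slightly.

```python
# Sketch of a Gompertz 3-parameter fit using SciPy rather than JMP.
# Data points are illustrative, not the study's trap catches.
import numpy as np
from scipy.optimize import curve_fit

def gompertz3(x, a, b, c):
    """Cumulative proportion of females captured by Julian date x."""
    return a * np.exp(-np.exp(-b * (x - c)))

julian = np.array([275, 285, 295, 305, 315, 325, 335, 345, 355])
cum_prop = np.array([0.01, 0.05, 0.15, 0.40, 0.70, 0.88, 0.96, 0.99, 1.00])

# Starting values: asymptote near 1 (cumulative proportion), a modest
# capture rate, and a mid-season inflection date.
params, cov = curve_fit(gompertz3, julian, cum_prop, p0=[1.0, 0.1, 310.0])
std_err = np.sqrt(np.diag(cov))

for name, est, se in zip(("asymptote a", "capture rate b", "inflection c"),
                         params, std_err):
    print(f"{name}: {est:.4f} +/- {se:.4f}")
```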
From 3 Nov to 9 Dec 2009, 180 wood wasps were collected in 21 total traps. Even with the setback of late trap placement, geographic regions did not significantly differ from each other (F = 0.276; df = 2; P = 0.7629). The final Gompertz 3-parameter model predicted a capture rate of 0.1498 ± 0.0197 (SE) with an inflection point at 312 ± 0.7054 (SE) d, which corresponds to 8 Nov (Fig. 2; R² = 0.9568).
From 24 Sep until 6 Dec 2010, 186 Sirex females were trapped in 42 total traps. No significant difference was detected among regions (F = 2.201; df = 4; P = 0.0852). The final model predicted a capture rate of 0.0629 ± 0.0041, which was much slower compared with 2009 (Fig. 3; R² = 0.9875). This could be an artifact of erecting traps earlier in 2010 compared with the previous year. The inflection point was approximately 10 d earlier than the previous year (JD = 302 ± 0.7879 d; 29 Oct), which could also be related to earlier trap placement.
From 12 Oct until 8 Dec 2011, 141 female wood wasps were captured in 30 traps in the Ozark National Forest. The model predicted a capture rate of 0.1069 ± 0.0092 and an inflection point of 300 ± 0.5583 d, corresponding to 27 Oct (Fig. 4; R² = 0.9634).
A surge of adult wood wasps was captured in 2012, with 357 collected in 24 traps between 8 Oct and 20 Dec 2012. Even with a much larger emergence of wood wasps compared with the previous years, geographic regions were not significantly different (F = 0.930; df = 4; P = 0.4512). The capture rate for 2012 was the slowest of all trapping seasons at 0.0263 ± 0.0024. The inflection point was at the latest date compared with other years at 332 ± 4.3324 d, or approx. 27 Nov (Fig. 5; R² = 0.9794).
In 2013, total trap catch numbers dropped to 95 wood wasps collected from 16 traps over the longest period of emergence, from 3 Oct to 31 Dec. Geographic regions were again not significantly different from one another (F = 0.492; df = 2; P = 0.6133). The capture rate increased to 0.1077 ± 0.0070 compared with the previous year. The inflection point was earlier, Julian date 317 ± 0.4230 d, or 13 Nov (Fig. 6; R² = 0.9767).
In total, 959 wood wasps were collected from 2009 to 2013. Emergence began in early Oct and ceased in mid- to late Dec, which agrees with other reports of Sirex emergence in the Southeast (Haavik et al. 2013). A test of parallelism found significant differences among years (F = 48.621; df = 8; P < 0.0001), and a final model (Fig. 7; Table 2) including all datasets fit less well than individual years' models (R² = 0.8582). Inflection points for each year within the final model were very similar to those of their respective individual models, except for 2012, which had an inflection point 16 d earlier than the individual model for that year. The inflection point for the overall model, encompassing 5 trapping seasons, was 310 d, or 6 Nov.
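A test of parallelism can be framed as an extra-sum-of-squares F test: a single pooled curve is compared against curves with some parameters varying by year. The reported df = 8 is consistent with two parameters (capture rate and inflection point) varying among the 5 years while the asymptote is shared, i.e., 2 × (5 − 1) = 8. The sketch below is one common construction of such a test, not necessarily JMP's exact computation; all residual sums of squares and sample sizes are illustrative.

```python
# One common construction of a parallelism test: an extra-sum-of-squares
# F test comparing a single pooled Gompertz curve against per-year curves.
# JMP may compute its parallelism test differently; inputs are illustrative.
from scipy.stats import f as f_dist

def parallelism_f_test(rss_pooled, rss_separate, n_obs, n_groups, k_free):
    """k_free = number of parameters allowed to vary among groups."""
    df_num = k_free * (n_groups - 1)          # extra parameters in the full model
    df_den = n_obs - (k_free * n_groups + 1)  # +1 for the shared asymptote
    F = ((rss_pooled - rss_separate) / df_num) / (rss_separate / df_den)
    return F, df_num, df_den, f_dist.sf(F, df_num, df_den)

# With 5 trapping seasons and rate + inflection varying (k_free = 2),
# df_num = 2 * (5 - 1) = 8, matching the degrees of freedom reported above.
F, df1, df2, p = parallelism_f_test(rss_pooled=2.40, rss_separate=0.85,
                                    n_obs=180, n_groups=5, k_free=2)
print(f"F = {F:.3f}, df = ({df1}, {df2}), p = {p:.4g}")
```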
With the exception of 1 Sirex female captured on 24 Sep 2010, captures began the 1st week of Oct and continued into Dec for each year. Captures lasted longest in 2013, continuing to 31 Dec.
Although raw numbers of Sirex females cannot be compared directly due to variability among sites, trap number, and collection frequency, it is worth noting that 2009-2011 saw comparable trap captures of wood wasps, whereas 2012 saw a surge of wood wasp emergence, and 2013 saw a large drop off in numbers of wood wasps captured. It is unlikely that this decrease in numbers of Sirex captured resulted from a “trap-out” phenomenon. The Sirex lure does not contain sex or aggregation pheromones, but rather host volatiles, and is not considered any more effective at attracting females than a cut or damaged pine tree. The mechanisms behind the large emergence of 2012 are unknown but the previous, abnormally warm winter is suspected to have played a role. Temperatures in Arkansas over the winter of 2011–2012 rarely dipped below the developmental minimum of S. noctilio (Madden 1981), presumably similar to the developmental minimum of S. nigricornis, and reached above 21 °C several times. The following winter (2012–2013) contained several severe winter weather events and was followed by a relatively cool summer (National Oceanic and Atmospheric Administration 2014), which may have caused increased mortality resulting in the apparent population decline that occurred during the 2013 trapping season.
Inflection points differed by more than a month among the 5 trapping seasons. The inflection point for the overall model, encompassing 5 trapping seasons, was 310 d, or 6 Nov. Capture rates also changed significantly during that time. The slowest rate of capture was seen in 2012 at 0.0263 and the highest was in 2009 at 0.1498. However, 2012 also had the largest number of wood wasps captured, and 2009 was the shortest trapping season due to problems obtaining permits and establishing sites. These issues resulted in traps not being erected until late Oct. Therefore, the start of wood wasp emergence (capture) cannot be described accurately for 2009. These stark differences in capture rates could be an artifact of these issues. Even with large changes in emergence from year to year, the overall pattern of captured flying females beginning in early Oct, peaking in Nov, and waning in Dec, stayed consistent.
Lantschner et al. (2014) described the spread of S. noctilio being closely tied to latitudinal temperature increases. They suggested that wood wasp activity will increase and emergence will begin earlier in warmer climates, such as those in the Southeast. With respect to native wood wasps, we found that, even with numbers of captured adults more than doubling in the 2012 trapping season, presumably due to high winter temperatures, emergence did not begin earlier than in other years. However, if this trend of higher winter temperatures continued over several years, we might expect to see earlier emergence of natives if they follow the same early patterns as S. noctilio does in consistently warmer temperatures. Zylstra et al. (2010) showed that 79% of captured S. noctilio were collected in New York, USA, by the beginning of Aug and all S. noctilio had been captured by the end of Sep. Haavik et al. (2013) found S. noctilio emerging from Jul through Sep in Ontario, Canada, and their findings are consistent with additional studies (e.g., Myers et al. 2014), which predict future S. noctilio emergence in the Southeast occurring as early as mid-Apr. These patterns indicate that S. nigricornis and S. noctilio will not overlap in seasonal flight patterns, even in the warmer climate of the Southeast. However, this does not suggest that the southeastern U.S. will escape establishment by S. noctilio.
Additionally, by the time of predicted S. noctilio emergence in the Southeast, several other pine-inhabiting insects are already present, and some of these can significantly impact the successful development of S. noctilio. Ryan et al. (2012) provided evidence suggesting increased mortality of S. noctilio in pine trees when other pine-inhabiting beetles are present. Many of these beetles, namely Pissodes nemorensis Germar, Ips grandicollis Eichhoff, various ambrosia beetles (Coleoptera: Curculionidae), and Monochamus spp. (Coleoptera: Cerambycidae), commonly are encountered in Arkansas and may serve as competitors that slow establishment of S. noctilio.
Parameter estimates for Gompertz 3-parameter model of all combined years of Sirex nigricornis trapping from 2009 to 2013 in Arkansas, USA.
In addition to pine-inhabiting beetles, there are several parasitoids that are native to the southeastern U.S. (Kirk 1974), and commonly are collected in traps or from rearing bins along with S. nigricornis adults. Some of these parasitoids (e.g., Ibalia leucospoides [Hochenwarth]; Hymenoptera: Ibaliidae) have been introduced as biological control agents of S. noctilio in the Southern Hemisphere and have been largely successful (Carnegie et al. 2005). However, Yousuf et al. (2014) described issues with using these native species for biological control in North America due to the fungal interactions between bark beetle fungi (Ophiostoma spp.; Ophiostomatales: Ophiostomataceae) and the symbiotic fungus of Sirex spp. (Amylostereum spp.; Russulales: Amylostereaceae). It is expected that the parasitic nematode Deladenus (Tylenchida: Neotylenchidae) would also have an effect on the establishment of S. noctilio in the Southeast. It parasitizes the eggs, mycangia, and hemocoel of female wood wasps, leaving wasps unable to lay viable eggs (Keeler 2012; Kroll et al. 2013; Zieman 2013). All of these previously mentioned species could inhabit the same resource as S. noctilio, and these multi-trophic interactions should be examined closely as part of any comprehensive management recommendations.
In conclusion, although rates of capture and inflection points may change significantly from year to year, the onset and duration of native wood wasp flight do not differ significantly. We find native wood wasps in the Southeast emerging much later than S. noctilio in other parts of North America and believe this is an indication that the 2 species will not overlap during flight or development within host trees. Although they may overwinter in the same hosts, their developmental stages during this time are likely to be different. We also note the presence of common beetle species and natural enemies that may hinder S. noctilio survival, emergence, and establishment. Investigation into this interspecific competition among native and non-native wood wasps is currently underway.
This study was supported in part by the University of Arkansas, Agricultural Experiment Station, and grants from the USDA Forest Service, Forest Health Management, and Southern Research Station. We are grateful to personnel on the Ozark and Ouachita National Forests, Rick Stagg of Crossett Experimental Forest, and Connor Fristoe of Plum Creek Timber Company for permission to trap wood wasps and collect data on their land. We thank Ace Lynn-Miller, David Coyle, Laurel Haavik, and Kevin Dodds for their advice and review of earlier drafts of the manuscript. We also thank David Dalzotto, Boone Hardy, Jim Meeker, and Wood Johnson for advice and assisting with data collection and sample processing. Andy Mauromoustakos provided statistical advice. | <urn:uuid:1bc648dc-55a8-4c3e-827a-9e4804c9a87d> | CC-MAIN-2021-21 | https://bioone.org/journals/florida-entomologist/volume-98/issue-3/024.098.0319/Seasonal-Phenology-of-Sirex-nigricornis-Hymenoptera--Siricidae-in-Arkansas/10.1653/024.098.0319.full | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988774.18/warc/CC-MAIN-20210506235514-20210507025514-00295.warc.gz | en | 0.953439 | 4,720 | 2.859375 | 3 |
|Regions with significant populations|
|Spanish colonial empire in the Americas|
This article may need to be rewritten to comply with Wikipedia's quality standards. (June 2020)
Criollo (Spanish pronunciation: [ˈkɾjoʎo]) are Latin Americans who are of solely or of mostly Spanish descent; such ancestry distinguishes them both from multi-racial Latin Americans and from Latin Americans of post-colonial (and not necessarily Spanish) European immigrant origin.
Historically, they have been misportrayed as a social class in the hierarchy of the overseas colonies established by Spain beginning in the 16th century, especially in Hispanic America. They were locally-born people–almost always of Spanish ancestry, but also sometimes of other European ethnic backgrounds. Criollos supposedly sought their own identity through the indigenous past, of their own symbols, and the exaltation of everything related to the American one.[further explanation needed] Their identity was strengthened as a result of the Bourbon reforms of 1700, which changed the Spanish Empire's policies toward its colonies and led to tensions between criollos and peninsulares. The growth of local criollo political and economic strength in the separate colonies, coupled with their global geographic distribution, led them to each evolve separate (both from each other and Spain) organic national identities and viewpoints. During the Spanish American Wars of Independence, Criollos became the main supporters of independence from Spanish rule.
In Spanish-speaking countries, the use of criollo to mean a person of Spanish or European ancestry is obsolete, except in reference to the colonial period. The word is used today in some countries as an adjective defining something local or very typical of a particular Latin American country.
The word criollo and its Portuguese cognate crioulo are believed by some scholars, including the eminent Mexican anthropologist Gonzalo Aguirre Beltrán, to derive from the Spanish/Portuguese verb criar, meaning "to breed" or "to raise"; however, no evidence supports this derivation in early Spanish literature discussing the origin of the word. Originally, the term was meant to distinguish the members of any foreign ethnic group who were born and "raised" locally, from those born in the group's homeland, as well as from persons of mixed ethnic ancestry. Thus, in the Portuguese colonies of Africa, português crioulo was a locally born white person of Portuguese descent; in the Americas, negro criollo or negro crioulo was a locally-born person of pure black ancestry. In Spanish colonies, an español criollo was an ethnic Spaniard who had been born in the colonies, as opposed to an español peninsular born in Spain.
The English word "creole" was a loan from French créole, which in turn is believed to come from Spanish criollo or Portuguese crioulo.
Europeans began arriving in Latin America during the Spanish conquest; and while during the colonial period most European immigration was Spanish. In the 19th and 20th centuries millions of European and European-derived populations from North and South America did immigrate to the region. According to church and censal registers for Acatzingo in 1792, during colonial times, 73% of Spanish men married with Spanish women. Ideological narratives have often portrayed Criollos as a "pure Spanish" people, mostly men, who were all part of a small powerful elite. However, Spaniards were often the most numerous ethnic group in the colonial cities, and there were menial workers and people in poverty who were of Spanish origin throughout all of Latin America.
The criollos allowed a syncretism in their culture and gastronomy, and they, in general, felt more identified with the territory where they were born than with the Iberian peninsula. Evidence is their[who?] authorship of works[which?] demonstrating an attachment to and pride in the natives and their history. They sometimes criticized the crimes of the conquistadores, often denouncing and defending natives from abuse. In the colony's[which?] last two centuries criollos rebelled in response to the harsh suppression of Indigenous uprisings.[vague] They allowed the natives and the mestizos (indigenous/white mixed) to be schooled in the universities and art schools, and many natives and mestizos were actually notable painters and architects, mostly in the Andes, but also in Mexico.
The mixed religious or secular music appears since the 16th century in Spanish and indigenous languages. Baroque music is imported from Spain but with European and African instruments (such as drums and congas) appears. The Spanish also introduce a wider musical scale than the indigenous pentatonic, and a melodic and poetic repertoire, transmitted by writings such as songbooks, common of it is the sung voice, common in the European baroque music, the mixed aesthetics are the fruit of diverse contributions indigenous, African and especially, Spanish and European. Instruments introduced by the Spanish are the chirimías, sackbuts, dulcians, orlos, bugles, violas, guitars, violins, harps, organs, etc., along with percussions (that can be indigenous or African), everything converges on music heard by everyone. The Dominican Diego Durán in 1570 writes, "All the peoples have parties, and therefore it is unthinkable to remove them (because it is impossible and because it is not convenient either)", himself parade like the natives with a bouquet of flowers at a Christian party that coincides with the celebration of Tezcatlipoca in Mexico. The Jesuits develop with great success a "pedagogy of theatricality", with this the Society of Jesus attracts the natives and blacks to the church, where children learn to play European instruments. In Quito (1609): "there were many dances of tall and small Indigenous, and there was no lack of Moscas Indigenous who danced in the manner of the New Kingdom [European] (...) and dances of Spaniards and blacks and other dances of the Indigenous must dance before the Blessed Sacrament and in front of the Virgin Mary and the saints at parties and Easter, if they don't do it then they are punished". The well-known Zambra mora was commonly danced by blacks, to the sound of castanets and drums. The Spanish Sarabande was danced by whites and blacks. Blacks also have their chiefs. In these local events, the brotherhoods of the Congos give rise to the Congadas (Brazil, Caribbean).
Actually, there were no relevant black artists during the colony; also, one must consider the fact that many of the pure blacks were slaves, but the Law of Coartación or "slave law" was created since the 16th century, reaching its maximum peak in the 18th century, which made the black slaves to buy their freedom, through periodic payments to their owner, which eventually led to freedom. Others were freed and purchased by family members or allied whites. It was a consuetudinary act in Spanish America; it allowed the appearance of a large population of free blacks in all of the territory. Freedom could also be obtained through baptism, with the white recognizing his illegitimate children; his word was sufficient for the newborn child to be declared free. Legal freedom was more common in the cities and towns than in the countryside. Also, from the late 1600s to the 19th century, the Spanish encouraged slaves from the British colonies and the United States to come to Spanish Florida as refuge; King Charles II of Spain and his court issued a royal decree freeing all slaves who fled to Spanish Florida and accepted Catholic conversion and baptism (since 1690), most went to the area around St. Augustine, but escaped slaves also reached Pensacola and Cuba. Also, a substantial number of blacks from Haiti (a French colony) arrived as refugees to Spanish Louisiana because of these greater freedoms. The Spanish Santa Teresa de Mose (Florida) became the first legally sanctioned free black town in the present-day United States. The popularity of the Law of coartación resulted in a large population of free black people in Spanish America.
Also, Mexican historian Federico Navarrete comments: that "if they received the surname of the white father and incorporated them into their family, those children counted as American whites having the same rights, regardless of the race", Also, a fact is in every marriage, including the most mixed, they are characterized, portrayed and named the caste product that was according to their ancestry, and if this can not, according to their appearance and color.
In several documents mention that indigenous people called Criollos with the same name as one of their gods. For example, Juan Pablo Viscardo relates (1797) that the Indigenous (from Peru) call to the Criollos 'Viracocha'; also, he says that Criollos are born in the middle of the Indigenous, are respected, and also loved by many, that they speak the language of the natives (in addition to Spanish) and used to Indigenous customs.
After suppressing the Túpac Amaru II Uprising of 1780 in the viceroyalty of Peru, evidence began against the criollos ill will from the Spanish Crown, especially for the Oruro Rebellion prosecuted in Buenos Aires, and also for the lawsuit filed against Dr. Juan José Segovia, born in Tacna, and Colonel Ignacio Flores, born in Quito, who had served as President of the Real Audiencia of Charcas and had been Governor Mayor of La Plata (Chuquisaca or Charcas, current Sucre).
Criollos and the wars of independence
Until 1760, the Spanish colonies were ruled under laws designed by the Spanish Habsburgs, which granted the American provinces broad autonomy. That situation changed by the Bourbon Reforms of 1700 during the reign of Charles III. Spain needed to extract increasing wealth from its colonies to support the European and global wars it needed to maintain the Spanish Empire. The Crown expanded the privileges of the Peninsulares, who took over many administrative offices that had been filled by Criollos. At the same time, reforms by the Catholic Church reduced the roles and privileges of the lower ranks of the clergy, who were mostly Criollos. By the 19th century, this discriminatory policy of the Spanish Crown and the examples of the American and French revolutions, led Criollo factions to rebel against the Peninsulares. With increasing support of the other castes, they engaged Spain in a fight for independence (1809–1826). The former Spanish Empire in the Americas separated into a number of independent republics.
Modern colloquial uses
The word criollo retains its original meaning in most Spanish-speaking countries in the Americas. In some countries, however, the word criollo has over time come to have additional meanings, such as "local" or "home-grown". For instance, comida criolla in Spanish-speaking countries refers to "local cuisine", not "cuisine of the criollos". In Portuguese, crioulo is also a racist slang term referring to blacks.
In some countries, the term is also used to describe people from particular regions, such as the countryside or mountain areas:
- In Argentina, natives of the northwestern provinces are called criollos by their porteño counterparts from Buenos Aires. They are typically seen as more traditionally Hispanic in culture and ancestry than the melting pot of non-Hispanic European influences (particularly Italian and German) that define the people and culture of Buenos Aires. Misa Criolla is the name of a musical setting of the mass composed by Ariel Ramirez, which has been sung by Mercedes Sosa, among others.
- In Perú, criollo is associated with the syncretic culture of the Pacific Coast, a mixture of Spanish, African, indigenous, and Gitano elements. Its meaning is, therefore, more similar to that of "Louisiana Creole people" than to the criollo of colonial times.
- In Puerto Rico, natives of the town of Caguas are usually referred to as criollos; professional sports teams from that town are also usually nicknamed Criollos de Caguas ("Caguas Creoles"). Caguas is located near Puerto Rico's Cordillera Central mountain area.
- In Venezuela, criollo is associated with the national culture of Venezuela. Pabellón criollo is Venezuela's national dish, and the baseball Corporación Criollitos de Venezuela is a seeder to the well-renowned Venezuelan Professional Baseball League, among other examples. Música Criolla is a way to refer to Venezuelan traditional music i.e., joropo. In Venezuela, novelists like Rómulo Gallegos with his novel Doña Bárbara, Pedro Emilio Coll, and Luis Manuel Urbaneja Achelpohl with the novel Peonía were major exponents of the Criollismo movement. Criollo also often refers to a mongrel dog, or something traditional to the country or its citizens.
- In Cuba and Colombia, the word Criollo has similar meanings to those of Venezuela.
As early as the sixteenth century in the colonial period in New Spain, criollos, or the "descendants of Spanish colonists," began to "distinguish themselves from the richer and more powerful peninsulares," whom they referred to as gachupines (wearer of spurs), as an insult. At the same time, Mexican-born Spaniards were referred to as criollos, initially as a term that was meant to insult. However, over time, "those insulted who were referred to as criollos began to reclaim the term as an identity for themselves. In 1563, the criollo sons of Spanish conquistador Hernán Cortés, attempted to remove Mexico from Spanish-born rule and place Martín, their half-brother, in power. However, their plot failed. They, along with many others involved, were beheaded by the Spanish monarchy, which suppressed expressions of open resentment from the criollos towards peninsulares for a short period. By 1623, criollos were involved in open demonstrations and riots in Mexico in defiance of their second-class status. In response, a visiting Spaniard by the name of Martín Carrillo noted, "the hatred of the mother country's domination is deeply rooted, especially among the criollos."
Despite being descendants of Spanish colonizers, many criollos in the period peculiarly "regarded the Aztecs as their ancestors and increasingly identified with the Indians out of a sense of shared suffering at the hands of the Spanish." Many felt that the story of the Virgin of Guadalupe, published by criollo priest Miguel Sánchez in Imagen de la Virgin Maria (Appearance of the Virgin Mary) in 1648, "meant that God had blessed both Mexico and particularly criollos, as "God's new chosen people." By the eighteenth century, although restricted from holding elite posts in the colonial government, the criollos notably formed the "wealthy and influential" class of major agriculturalists, "miners, businessmen, physicians, lawyers, university professors, clerics, and military officers." Because criollos were not perceived as equals by the Spanish peninsulares, "they felt they were unjustly treated and their relationship with their mother country was unstable and ambiguous: Spain was, and was not, their homeland," as noted by Mexican writer Octavio Paz.
They [criollos] felt the same ambiguity in regard to their native land. It was difficult to consider themselves compatriots of the Indians and impossible to share their pre-Hispanic past. Even so, the best among them, if rather hazily, admired the past, even idealized it. It seemed to them that the ghost of the Roman empire had at times been embodied in the Aztec empire. The criollo dream was the creation of a Mexican empire, and its archetypes were Rome and Tenochtitlán. The criollos were aware of the bizarre nature of their situation, but, as happens in such cases, they were unable to transcend it — they were enmeshed in nets of their own weaving. Their situation was cause for pride and for scorn, for celebration and humiliation. The criollos adored and abhorred themselves. [...] They saw themselves as extraordinary, unique beings and were unsure whether to rejoice or weep before that self-image. They were bewitched by their own uniqueness.
As early as 1799, open riots against Spanish colonial rule were unfolding in Mexico City, foreshadowing the emergence of a fully-fledged independence movement. At the conspiración de los machetes, soldiers and criollo traders attacked colonial properties "in the name of Mexico and the Virgen de Guadalupe." As news of Napoleon I's armies occupying Spain reached Mexico, Spanish-born peninsulares such as Gabriel de Yermo strongly opposed criollo proposals of governance, deposed the viceroy, and assumed power. However, even though Spaniards maintained power in Mexico City, revolts in the countryside were quickly spreading.
Ongoing resentment between criollos and peninsulares erupted after Napoleon I deposed Charles IV of Spain of power, which, "led a group of peninsulares to take charge in Mexico City and arrest several officials, including criollos." This, in turn, motivated criollo priest Miguel Hidalgo y Costilla to begin a campaign for Mexican independence from Spanish colonial rule. Launched in Hidalgo's home city of Dolores, Guanajuato, in 1810, Hidalgo's campaign gained support among many "Indians and mestizos, but despite seizing a number of cities," his forces failed to capture Mexico City. In the summer of 1811, Hidalgo was captured by the Spanish and executed. Despite being led by a criollo, many criollos did not initially join the Mexican independence movement, and it was reported that "fewer than one hundred criollos fought with Hidalgo," despite their shared caste status. While many criollos in the period resented their "second-class status" compared to peninsulares, they were "afraid that the overthrow of the Spanish might mean sharing power with Indians and mestizos, whom they considered to be their inferiors." Additionally, due to their privileged social class position, "many criollos had prospered under Spanish rule and did not want to threaten their livelihoods."
Criollos only undertook direct action in the Mexican independence movement when new Spanish colonial rulers threatened their property rights and church power, an act which was "deplored by most criollos" and therefore brought many of them into the Mexican independence movement. Mexico gained its independence from Spain in 1821 under the coalitionary leadership of conservatives, former royalists, and criollos, who detested Emperor Ferdinand VII's adoption of a liberal constitution that threatened their power. This coalition created the Plan de Iguala, which concentrated power in the hands of the criollo elite as well as the church under the authority of criollo Agustín de Iturbide who became Emperor Agustín I of the Mexican Empire. Iturbide was the son of a "wealthy Spanish landowner and a Mexican mother" who ascended through the ranks of the Spanish colonial army to become a colonel. Iturbide reportedly fought against "all the major Mexican independence leaders since 1810, including Hidalgo, José María Morelos y Pavón, and Vicente Guerrero," and according to some historians, his "reasons for supporting independence had more to do with personal ambition than radical notions of equality and freedom."
Mexican independence from Spain in 1821 resulted in the beginning of criollo rule in Mexico as they became "firmly in control of the newly independent state." Although direct Spanish rule was now gone, "by and large, Mexicans of primarily European descent governed the nation." The period was also marked by the expulsion of the peninsulares from Mexico, of which a substantial source of "criollo pro-expulsionist sentiment was mercantile rivalry between Mexicans and Spaniards during a period of severe economic decline," internal political turmoil, and substantial loss of territory. Leadership "changed hands 48 times between 1825 and 1855" alone, "and the period witnessed both the Mexican-American War and the loss of Mexico's northern territories to the United States in the Treaty of Guadalupe Hidalgo and the Gadsden Purchase." Some credit the "criollos' inexperience in government" and leadership as a cause for this turmoil. It was only "under the rule of noncriollos such as the Indian Benito Juárez and the mestizo Porfiro Díaz" that Mexico "experienced relative [periods of] calm."
By the late nineteenth and early twentieth centuries, the criollo identity "began to disappear," with the institution of mestizaje and Indigenismo policies by the national government, which stressed a uniform homogenization of the Mexican population under the "mestizo" identity. As a result, "although some Mexicans are closer to the ethnicity of criollos than others" in contemporary Mexico, "the distinction is rarely made." During the Chicano movement, when leaders promoted the ideology of the "ancient homeland of Aztlán as a symbol of unity for Mexican Americans, leaders of the 1960s Chicano movement argued that virtually all modern Mexicans are mestizos."
In the United States
As the United States expanded westward, it annexed lands with a long-established population of Spanish-speaking settlers, who were overwhelmingly or exclusively of white Spanish ancestry (cf. White Mexican). This group became known as Hispanos. Prior to incorporation into the United States (and briefly, into Independent Texas), Hispanos had enjoyed a privileged status in the society of New Spain, and later in post-colonial Mexico.
Regional subgroups of Hispanos were named for their geographic location in the so-called "internal provinces" of New Spain:
- Californios in Las Californias ("The Californias"), and later Alta California ("Upper California")
- Nuevomexicanos in Spanish New Mexico, and later Mexican New Mexico (Nuevo México)
- Tejanos in Spanish Texas, and later Mexican Texas (Tejas)
Another group of Hispanos, the Isleños ("Islanders"), are named after their geographic origin in the Old World, namely the Canary Islands. In the US today, this group is primarily associated with the state of Louisiana.
- Academia Antártica
- European diaspora
- Latin Americans
- List of Criollos
- Vecino (historical use)
- Creole peoples
- Encomienda (1492–1542)
- White Hispanic and Latino Americans
- White Latin American
- José Presas y Marull (1828). Juicio imparcial sobre las principales causas de la revolución de la América Española y acerca de las poderosas razones que tiene la metrópoli para reconocer su absoluta independencia. (original document) [Fair judgment about the main causes of the revolution of Spanish America and about the powerful reasons that the metropolis has for recognizing its absolute independence]. Burdeaux: Imprenta de D. Pedro Beaume.
- Donghi, Tulio Halperín (1993). The Contemporary History of Latin America. Duke University Press. p. 49. ISBN 0-8223-1374-X.
- Carrera, Magali M. (2003). Imagining Identity in New Spain: Race, Lineage, and the Colonial Body in Portraiture and Casta Paintings (Joe R. and Teresa Lozano Long Series in Latin American and Latino Art and Culture). University of Texas Press. p. 12. ISBN 978-0-292-71245-4.
- Mike Duncan (12 June 2016). "Revolutions Podcast" (Podcast). Mike Duncan. Retrieved 28 August 2016.
- Peter A. Roberts (2006). "The odyssey of criollo". In Linda A. Thornburg; Janet M. Miller (eds.). Studies in Contact Linguistics: Essays in Honor of Glenn G. Gilbert. Peter Lang. p. 5. ISBN 978-0-8204-7934-7.
- Genealogical historical guide to Latin America – Page 52
- San Miguel, G. (November 2000). "Ser mestizo en la nueva España a fines del siglo XVIII: Acatzingo, 1792" [Being a mestizo in New Spain at the end of the 18th centurry: Acatzingo, 1792]. Cuadernos de la Facultad de Humanidades y Ciencias Sociales. Universidad Nacional de Jujuy (in Spanish) (13): 325–342.
- Sherburne Friend Cook; Woodrow Borah (1998). Ensayos sobre historia de la población. México y el Caribe 2. Siglo XXI. p. 223. ISBN 9789682301063. Retrieved September 12, 2017.
- Hardin, Monica Leagans (2006). Household and Family in Guadalajara, Mexico, 1811 1842: The Process of Short Term Mobility and Persistence (Thesis). p. 62.
- "POESÍA QUECHUA EN GUAMAN POMA DE AYALA Y BLAS VALERA". victormazzihuaycucho.blogspot.com. 14 April 2011.
- "CANTO DE CRIOLLOS CON GUITARRA (traducción al Español)".
- Bernand, Carmen (December 2009). "Músicas mestizas, músicas populares, músicas latinas: gestación colonial, identidades republicanas y globalización" [Mestizo music, popular music, Latin music: colonial gestation, republican identities and globalization]. Co-herencia (in Spanish). 6 (11): 87–106.
- Doudou Diène (2001). From Chains to Bonds: The Slave Trade Revisited. Paris: UNESCO. p. 387. ISBN 92-3-103439-1.
- Miguel Vega Carrasco (3 February 2015). "La "coartación" de esclavos en la Cuba colonial". descubrirlahistoria.es.
- Manuel Lucena Salmoral (1999). "El derecho de coartación del esclavo en la América española". Revista de Indias, Spanish National Research Council. Cite journal requires
- Gene A. Smith, Texas Christian University, Sanctuary in the Spanish Empire: An African American officer earns freedom in Florida, National Park Service
- "Fort Mose. America's Black Colonial Fortress of Freedom". Florida Museum of Natural History.
- Alejandro de la Fuente; Ariela J Gross (16 January 2020). Becoming Free, Becoming Black: Race, Freedom, and the Law in Cuba, Virginia, and Louisiana; Studies in Legal History. Cambridge University Press. p. 115. ISBN 978-1-108-48064-2.
- Proctor, III, Frank "Trey" (2006). Palmer, Colin A. (ed.). "Coartacion". Encyclopedia of African-American Culture and History. Detroit: Macmillan Reference USA. 2: pp= 490–493
- Federico Navarrete (12 October 2017). "Criollos, mestizos, mulatos o saltapatrás: cómo surgió la división de castas durante el dominio español en América". BBC.
- Carlos López Beltrán. "Sangre y Temperamento. Pureza y mestizajes en las sociedades de castas americanas" (PDF). National Autonomous University of Mexico.
- María Luisa Rivara de Tuesta (Juan Pablo Vizcardo y Guzmán). Ideólogos de la Emancipación peruana (PDF). National University of San Marcos. p. 39.
- Frigerio, José Óscar (30 June 1995). "La rebelión criolla de la Villa de Oruro. Principales causas y perspectivas". Anuario de Estudios Americanos. 52 (1): 57–90. doi:10.3989/aeamer.1995.v52.i1.465.
- "Portugal: Autarca proíbe funcionária de falar crioulo – Primeiro diário caboverdiano em linha". A Semana. Archived from the original on 2015-11-25. Retrieved 2015-11-24.
- "Racismo na controversa UnB – Opinião e Notícia". Opiniaoenoticia.com.br. Retrieved 2015-11-24.
- Paz, Octavio (1990). Mexico: Splendors of Thirty Centuries. Bulfinch Press. p. 26. ISBN 9780821217979.
- Lasso de la Vega, Luis (1998). Sousa, Lisa; Poole C.M., Stafford; Lockhart, James (eds.). The Story of Guadalupe: Luis Laso de la Vega's Huei tlamahuiçoltica of 1649. Stanford University Press. p. 2. ISBN 9780804734837.
- Campbell, Andrew (2002). Stacy, Lee (ed.). Mexico and the United States. Marshall Cavendish Corp. pp. 245–246. ISBN 9780761474036.
- Caistor, Nick (2000). Mexico City: A Cultural and Literary Companion. Interlink Pub Group Inc. pp. 20. ISBN 9781566563499.
- Himmel, Kelly F. (1999). The Conquest of the Karankawas and the Tonkawas: 1821–1859. Texas A&M University Press. p. 6. ISBN 9780890968673.
- Levinson, I (2002). Armed Diplomacy: Two Centuries of American Campaigning. DIANE. pp. 1–2.
- Sims, Harold (1990). The Expulsion of Mexico's Spaniards, 1821–1836. University of Pittsburgh Press. p. 18. ISBN 9780822985242.
- Will Fowler. Latin America, 1800–2000: Modern History for Modern Languages. Oxford University Press, 2000. ISBN 978-0-340-76351-3
- Carrera, Magali Marie (2003). Imagining Identity in New Spain: Race, Lineage, and the Colonial Body in Portraiture and Casta Paintings. Joe R. and Teresa Lozano Long Series in Latin American and Latino Art and Culture. Austin: University of Texas. ISBN 978-0-292-71245-4.
|Casta terms for interracial marriage in Spanish America| | <urn:uuid:cdce960d-1b39-4dd3-98ec-92e3a37a375a> | CC-MAIN-2021-21 | https://en.wikipedia.org/wiki/Criollo_(Mexico) | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988953.13/warc/CC-MAIN-20210509002206-20210509032206-00057.warc.gz | en | 0.927449 | 6,624 | 4.0625 | 4 |
Pros and Cons of Using Fluorite in Jewelry
Fluorite is a semi-precious earth mineral composed of calcium fluoride (CaF2) and is counted among the gemstone minerals. Although fluorite is a common stone that gemologists often consider unsuitable for jewelry making, it has some unique physical characteristics that make it appropriate for ornament making (Genis 3). Brilliant colors– one of the features of an ornament that most entices consumers is its color.
Fluorite is among the most attractively colored earth minerals, and this unique property makes it highly suitable for making attractive ornaments. According to Genis (2), fluorite comes in appealing colors such as green, purple, yellow, red, blue, and even black. Diaphaneity of fluorite– diaphaneity is the transparency or translucency of an earth mineral (Genis 2). Fluorite minerals are either translucent or transparent in appearance, and this feature makes them suitable for jewelry making.
Streak and luster
Ornaments are more appealing when they have a streaky or shiny appearance. Fluorite is an earth mineral with assorted colors and a strong streak, which makes it a fabulous stone for ornament making. Fluorite has a glowing appearance with an appealing, vitreous, sparkling luster, and ornaments are conspicuous items that need this sparkle to be attractive (Genis 1). The gravity of fluorite– the specific gravity of a gemstone is its density relative to that of water. Fluorite is a gemstone of comparatively low specific gravity, about 3.2 (Genis 5). Fluorite is a poor heat conductor– jewels are worn on exposed parts of the body, and a stone that does not conduct heat readily makes fluorite a comfortable choice for ornaments.
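Concretely, specific gravity is just a ratio of densities, so the 3.2 figure quoted above translates directly into an absolute density (a quick illustrative calculation, using water's standard density of 1.0 g/cm³):

```latex
\mathrm{SG} = \frac{\rho_{\text{gem}}}{\rho_{\text{water}}}
\quad\Longrightarrow\quad
\rho_{\text{fluorite}} \approx 3.2 \times 1.0\ \mathrm{g/cm^3} = 3.2\ \mathrm{g/cm^3}
```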
Precious earth minerals used for jewelry making must meet certain standards in their physical characteristics to produce enduring ornaments (Genis 1). Jewelers and gemologists consider toughness, color, brightness, cleavage, and gravity the important diagnostic properties of the precious minerals that make jewelry (Genis 4). The toughness of the fluorite gemstone– stones considered appropriate for jewelry must reach a certain hardness. Hard minerals resist simple scratching and breaking, and such resilience makes diamond the most appropriate stone for ornaments (Genis 3). Fluorite, by contrast, is soft: it rates only 4 on the Mohs scale, where it serves as the defining mineral for that value. The cleavage of fluorite– fluorite cleaves easily because of weaknesses in its crystal lattice. The stone tends to crack along definite octahedral cleavage planes, which makes it difficult for jewelers to cut and reshape.
Fluctuation in its colors– fluorite does not hold one permanent physical color, and this fluctuation makes gemologists and jewelers doubt its permanency. Furthermore, altering its color through the color-zoning process to suit particular designs is difficult (Genis 2). The chemical nature of fluorite– the fluorite gemstone contains a high concentration of fluorine, approximately 49% of its mass.
This chemical component is prone to weathering. According to Genis (6), weathered fluorite can release hydrogen fluoride gas, which becomes highly corrosive under sunlight and produces a toxic material that irritates the skin. Easy wear and tear– the softness of the fluorite gemstone weakens its structure, and this limits its durability. Gemologists and jewelers fear that using fluorite to make ornaments is futile because the gemstone wears and tears too easily.
Water Management as an ‘in demand’ job skill area
Water is a universal basic need for household consumption and for many other commercial purposes in industry. Water management is among the most demanding tasks across the world, given the constant climatic changes and human activities that negatively affect water resources (Somavia 2). The field requires high competency because it is a demanding sector.
Political problems, geographical issues, climatic interferences, metropolitan planning concerns, environmental concerns, water pollution issues, ethical dilemmas, and cultural problems are constantly arising (Somavia 3). Political problems in water management– a common concern that affects the effective management and distribution of water is political interference. Politicians often use the concepts of water supply and the protection of water catchment areas as political vehicles during campaigns (Somavia 4), making false promises concerning urban management and city planning.
The growing human population– water is in constant demand in urban zones and rural centers, and the rapidly increasing human population is a sensitive matter for the water management workforce. Since housing is a basic human necessity, the rapid increase in housing demand and the upsurge in human population are crucial issues that require critical decision making in water management and supply (Somavia 3).
Geographical issues– engineering advances brought the water piping technology that ensures efficient distribution and purification of the water supplied to residential centers for human consumption. Although piping is usually feasible and effective, water managers face challenges posed by land topography and terrain (Somavia 5). Climatic problems– water management is a demanding profession because of regular weather fluctuations.
Adjusting to weather changes and mitigating the effects of climate change are serious concerns in water management departments. Adverse climatic conditions sometimes bring drought and water shortage, while extreme rainfall brings floods and causes corrosion in the water pipes (Somavia 6). Environmental concerns– climate and weather differ slightly from the aspects of the physical environment.
Environmental challenges in water management include the supervision of soil issues, wetland issues, and dryland issues (Somavia 9). Metropolitan planning concerns– owing to rapid urbanization, the world is experiencing an increase in commercial buildings and residential properties within cities. Controlling unqualified engineers, dealing with wealthy city developers, ensuring correct city mapping, and safeguarding the public interest in water management and supply are issues that require proficient approaches.
Ethical concerns in water management– water management requires high proficiency because of ethical dilemmas that involve critical decision making. Water is a basic human need: wealthy residential and business owners need it just as much as the majority poor do (Somavia 7). Concerns related to the unequal distribution of water, responsible use of water, effective recycling of water, cultural issues around water, water ecosystems, and the protection of water catchment areas are constantly rising.
Legal and government regulations– water managers face tough challenges of compliance with local land management policies, environmental protection policies, stringent international environmental standards, and urban construction and planning regulations (Somavia 8). Water pollution issues– water management needs professionalism because dealing with water pollution is demanding: industrialization, an integral part of national growth, is also a persistent source of pollution.
The possibility of California Splitting
There is a persistent myth that California will eventually capsize into the ocean. With adverse changes in climatic conditions and weather patterns, the world is experiencing an increase in the occurrence of high-intensity earth tremors, earthquakes, and tsunamis (Jones and Benthien 2). The San Andreas Fault System is the origin of the theory that the American State of California will break off and collapse (Jones and Benthien 4).
Geographical science offers the best explanation of the likelihood of California collapsing into the ocean. California lies within the Pacific Coast zone of the United States, where earth tremors and tsunamis have recurred throughout history (Jones and Benthien 5). Seismologists have predicted that, within an estimated three decades, a powerful earthquake with a magnitude of 8.0 on the Richter scale will strike.
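To put a magnitude of 8.0 in perspective, the radiated seismic energy can be estimated from the standard Gutenberg–Richter energy relation (a textbook seismology formula, not taken from the sources cited here):

```latex
\log_{10} E = 1.5\,M + 4.8 \qquad (E\ \text{in joules})
```

For M = 8.0 this gives E ≈ 10^16.8 ≈ 6 × 10^16 joules, roughly thirty times the energy radiated by a magnitude 7.0 event.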
The American States located within the Pacific Coast, or west coast, regions are prone to powerful earthquakes and tsunamis, so there is a likelihood that a section of California State may split and plunge into the ocean. Geographically, the earth's surface consists of rigid layers of rock, formerly deposits of cooled lava laid down some 4.5 billion years ago (Jones and Benthien 2). When exposed to extreme conditions and a thrashing force such as a powerful earthquake, these rock layers can shear and drift apart to form two distinct land portions. The position of California makes it susceptible to splitting when the predicted earthquakes strike the Pacific Coast of America (Jones and Benthien 9). California lies at the frontier between two tectonic plates of the lithosphere, which can shear past one another because they ride on the ductile mantle beneath.
The earth's shell is made up of tectonic plates. According to Jones and Benthien (8), these plates comprise the oceanic lithosphere and the continental lithosphere. The lithosphere is the tough outermost segment of the earth that includes the uppermost mantle and the earth's crust; the Pacific plate and the Antarctic plate are among the largest tectonic plates. Jones and Benthien (7) state that powerful earth tremors along the Pacific Coast of America may cause a split between the North American plate and the Pacific plate owing to seismic forces.
For several decades, scientists have gradually documented the constant grinding between the North American plate and the Pacific plate. Push, shear, and split may occur along the present fault line between the two plates, which extends along the American west coast and beneath the Pacific Ocean.
The drifting of California into the Pacific Ocean may thus become a geographical fact. Scientists have probed the theory through three distinct lines of evidence (Jones and Benthien 4). The seismological evidence– drawing on sensitive seismographs and the Richter scale, scientists have monitored the prevalence and patterns of earthquakes and found that earthquakes occur frequently in California (Jones and Benthien 5).
Geodesy– scientists have closely tracked the steady movement of the North American and Pacific tectonic plates using precise Global Positioning System (GPS) measurements. Geology– geological evidence from field mapping and aerial photography has revealed the gradual splitting of earth faults around California. The Community Fault Model of the Southern California Earthquake Center (SCEC) cites the San Andreas fault, the San Jacinto fault, the Elsinore fault, and the Imperial fault as evidence of California's divide.
A geographic feature around New York
Geographical features are the natural or artificial landforms and ecosystems of the earth, and their arrangement varies between the different regions of New York State. A significant geographical feature of the state is Niagara Falls, a substantial tourist attraction (Berton 3). The falls lie within Niagara Falls State Park, among the most renowned parks of the United States, and Niagara is by volume the largest waterfall in North America (Berton 5). Geographically located on the border between the United States and Canada, the Niagara River and its waterfall have a geological history that is interesting to understand.
Formation of the Niagara Falls– the geological development of the Niagara waterfall occurred through glaciation some ten thousand years ago. According to Berton (12), glaciers are masses of moving ice that travel along a valley or a ridge between two volcanic mountains or hills. Glaciers form when the accumulation of frozen water surpasses melting and sublimation over many years.
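The condition sketched here is the standard glacier mass-balance relation (conventional glaciology notation, not drawn from Berton):

```latex
b = c - a
```

where b is the net specific mass balance, c the accumulation, and a the ablation (melting plus sublimation); ice persists and a glacier grows wherever b > 0 is sustained over many years.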
Under their own weight, glaciers slowly deform and begin moving in whatever direction allows the ice to flow. The Niagara waterfall formed through such glacial processes, which were common across the North American regions many millennia ago (Berton 14). As water froze in the northern polar regions of America during the winter seasons, huge ice blocks known as glaciers formed around the Niagara Escarpment.
Subsequently, as the North American climate slowly shifted from cold winter conditions to warm summer conditions, the glaciers melted and the water sought its way downward to form a huge freshwater stream (Berton 14). The glacial meltwater gradually widened the river beds and cleared debris through the forests to form a massive waterway leading down the Niagara Escarpment.
The melting glaciers carried down abraded rock and debris, depositing till and huge rock fragments that formed the moraines (Berton 25). The Niagara story spans four ancient stages of glaciation: the Nebraskan, Kansan, Illinoisan, and Wisconsin glacial periods were paramount in shaping the North American region.
However, the present-day Niagara waterfall is associated with the Wisconsin glaciation, the final and most recent period of glaciation. Also known as the ice age, this final glacial episode encompassed the activities of the Wisconsin ice sheets (Berton 31). The Niagara waterfall appeared some 10,000 years ago, during the closing Wisconsin era of glacial activity, at the end of the Pleistocene epoch.
The long period of cold temperatures across the North American regions resulted in glaciers with an average thickness of 2,000 meters. When the warm period approached and melting began, this 2,000-meter-thick ice mass deformed and moved westward from Lake Ontario (Berton 190). The Niagara Escarpment bore this strain, and the ice water flowing over the top of the escarpment gradually shaped and altered the landscape and the valleys.
US government and the ocean research
There is a growing debate over whether the government of the United States should stop spending hefty amounts of public money on outer space exploration or find a more suitable use for this investment (Levison 2). Although space exploration has brought enormous transformations in modern technology, including inventions popularly credited to the National Aeronautics and Space Administration (NASA) such as Velcro, it is arguably a waste of resources (Levison 5).
A debatable point is whether such expenditures are worthwhile, especially considering that NASA's recent focus is on investigating new planets that may support life or hold water (Levison 4). While plenty of the earth's own water remains underutilized, misused, or contaminated, the government of the United States needs to boost ocean research to meet human needs.
Current human survival relies on the ocean and other large water bodies for transport, food, economic growth, and environmental investigation (Levison 1). Humanity should be the first priority, as human security and human satisfaction determine the present development and the future of the world. The annual space exploration budget of 125 million US dollars largely goes to developing sophisticated space exploration gear and paying the astronomers, who appear to have brought only meager change since the 1950s (Levison 6).
Economic research on the American budget reveals that space expenditure has not proven sufficient to improve the weakening American economy, the dwindling education systems, or the rising marine problems. The weakest point of the space program is NASA's current focus on unknown waters and unknown outer-space life while the earth's own oceans remain under-studied.
The marine ecosystems and creatures that support human life are at stake because of adverse climatic changes, heavy contamination of ocean waters, and illegal human activities. Realistically, the world cannot dismiss the value of Google Maps, international communication, radio and TV media, aerospace technology, and the improved defense enabled by the Global Positioning System (Levison 8).
However, several sea and ocean issues need comprehensive financial and expert support before human needs can be met. The rising industrialization in European countries and Asian economies continues to be a source of marine contamination, although research on remedies remains minimal (Levison 6). Marine debris and marine toxins from industries and residential houses continue to exacerbate sea poisoning that destroys aquatic life.
The increasing occurrence of red tide algae, which causes massive mortalities of sea creatures, and several other marine problems are still receiving minimal attention from the government and scientists. Marine science has remained dormant even as cases of illegal fishing, illegal ocean expeditions, and fish poisoning continue to harm aquatic life (Levison 5).
Unstable marine fishing standards and policies that allow an increase in overfishing have driven down fishing returns, and some important fish species have become extinct through poisoning or uncontrolled overfishing (Levison 3). In short, the technology that outer space exploration brings is worthwhile, but the growing effort to uncover the unknown waters of Mars is a waste of finance; the American government should finance the two sectors equally.
Differences between Global warming and Ozone Depletion
Although global warming and ozone depletion are major concerns that preoccupy modern environmental scientists, the two concepts have distinct differences (Kovats, Menne, McMichael, Bertollini, and Soskolne 10). By definition, global warming is the steady rise in the average temperatures of the atmosphere, while ozone depletion is the thinning and destruction of the ozone layer, which lies in the upper atmospheric region known as the stratosphere.
The stratosphere is the layer of the atmosphere directly above the troposphere, and it contains the ozone layer that protects the earth from detrimental ultraviolet radiation, or ultraviolet rays (Kovats et al. 12). Global warming is a growing problem that results partly from the depletion of the stratospheric ozone layer. Scientific evidence of global warming includes the increase in sea temperatures, which indicates excess radiation reaching and remaining at the surface.
From the above definitions, the differences between the two earth problems are evident first in the parts of the earth system where they occur. Kovats et al. (17) treat ozone depletion as a weakening of the ozone layer high in the stratosphere, whereas Kovats et al. (11) describe global warming as a problem felt at the surface, affecting the oceans, lands, and seas. The greatest difference is that one acts as a causative agent while the other receives the effects. However, the problems of global warming and ozone depletion also have diverse, independent causes from a scientific perspective. The problem of ozone depletion begins with human activities such as industrialization and urbanization, which release destructive gases.
Causes of ozone depletion– human activities that release ozone-depleting substances, most notably chlorofluorocarbons (CFCs), are the principal cause of ozone depletion. Solar energy drives chemical reactions in which these highly reactive gases break apart ozone molecules (Kovats et al. 15). Natural photochemistry– the chemical reactions normal among atmospheric gases also deplete ozone, and these reactions become severe when the quantity of ozone gas in the stratosphere is already minimal.
These atmospheric gas reactions are severe when the quantity of the ozone gas in the stratosphere zone is minimal. Causes of global warming– Global warming occurs when the intensity of the rays of the sun exceeds beyond the normal limits and heats the earth’s surface exceedingly (Kovats et al. 13). The effects of the ultraviolet rays are pervasive to the atmospheric climate, which determines the coldness and hotness of the environment.
The differences between ozone depletion and global warming are also evident in how the two processes can be mitigated. Effective remedies can reverse global warming more readily than ozone depletion (Kovats et al. 14). Whereas various interventions can reduce global warming, the repercussions of ozone depletion are largely irreversible. According to Kovats et al. (66), "since the local emissions of greenhouse gases and ozone-destroying gases contribute to the processes of global atmospheric changes, preventive policies must be part of a coordinated international effort." This means that the repercussions of global warming will be minimal only if countries first mitigate the activities that cause ozone depletion; the reduction of ozone-depleting substances is the foremost factor.
Four-Day whether changes in New York
Sunny, cloudy, rainy or dry are the main variables of weather. The barometric pressure of the New York City for the past four days varied significantly. The barometric pressure readings for the first, second, third and the fourth day were 30.2, 30.5, 30.1, and 30.7 respectively. When the barometric pressure is high, it is an indication that the weather is cloudy. In the New York City, the third and the fourth days of the month of November 2014 recorded the highest barometric pressures. This means that the weather of New York was cloudy on average (The Weather Channel par. 1). This means that the barometric pressure of a given geographical zone affects the weather of that given region because it makes it cloudy.
Wind is a significant variable that determines the weather of a given place. For the first four days of the month of November 2014, the wind direction of the New York City varied significantly within certain durations. The wind directions in the New York City for the first four days of the month of November were South West, South South-West, South West, and West South-West respectively (The Weather Channel par. 2).
South West wind direction of the New York City often associates with a partial Sunny weather, the South South-West direction often associate with the partial cloudy weather. This means that the readings of the barometer that indicated high barometric pressure were accurate based on the wind direction of the New York. Similar weather conditions applied to the West South-West direction although there was more sun.
Humidity is a great variable that determines the weather of a given place. The percentage of relative humidity is what depicts the amount of water vapor or simply the water vapor content found in the atmosphere. The relative humidities of the New York City in the first four days of the month of November were 45%, 48%, 53%, and 47% respectively (The Weather Channel par. 6). High relative humidity indicates that the weather of a place is cloudy or rainy, or partially sunny. On average, the four days of the month of November recorded high relative humidities. The results match the recording of the wind direction and the barometric pressure, an indication that the overall weather of New York for the first four days of the month was cloudy and rainy.
The temperature of a place is an important variable that determines the weather of a region. Low temperatures indicate calm, cloudy and a rainy weather, while high temperatures indicate a sunny weather. The temperature recordings for the New York City in the first four days of the month were 56 degrees Fahrenheit, 53 degrees Fahrenheit, 57 degrees Fahrenheit, and 59 degrees Fahrenheit (The Weather Channel par. 3). This is an indication that the weather of New York City has had slightly low temperatures, which is an indication that the weather has been cool. Cool weather in relation to the recordings of the barometric pressure and humidity reflect a cloudy and a rainy weather.
Berton, Pierre. Niagara: A History of the Falls, New York, United States: SUNY Press, 2010. Print.
Genis, Robert. “Latest Burma News.”The gemstone forecaster 25.3 (2007): 1-9. Print.
Jones, Lucile, and Mark Benthien. Putting down roots in earthquake country, South California, United States: Southern California Earthquake Center, 2011. Print.
Kovats, Sari, Bettina Menne, Anthony McMichael, Roberto Bertollini, and Colin Soskolne. Climate change and stratospheric ozone depletion: Early effects on our health in Europe, Denmark: WHO Regional Publications, 2000. Print.
Levison, Lara. Federal Policy and Funding relating to ocean acidification. 2014. Web.
Somavia, Juan. A Skilled Workforce for Strong, Sustainable and Balanced Growth. 2014. Web.
The Weather Channel. New York Weather. 2014. Web. | <urn:uuid:6bca74c8-8626-444d-a9ea-152d14a5415d> | CC-MAIN-2021-21 | https://studycorgi.com/earth-science-and-geographical-features/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00214.warc.gz | en | 0.916723 | 5,120 | 3.03125 | 3 |
- Количество слайдов: 86
Arms and Disarmament
n The conventional logic underpinning normal practices of states – and of non-state forces resorting to use of force to achieve political aims u Peace is not always good, war is not always bad u “Just war” and “unjust peace” u Weapons are neutral, what matters is who uses them and for what purpose u You can’t obtain and secure peace and justice without resort to violence as the final argument u Use of force in politics will always be with us u The best we can do is limit it
n The antimilitarist position: The destructiveness of modern warfare u Weapons of mass destruction u In wars, most casualties are now civilian Use of force – both by states and by non-state forces - is often politically counterproductive u If we address root causes of conflict and work for just solutions by political means, weapons may not have to be used u Peace works - if it is based on justice
n n To make the world more peaceful, it is necessary to change the existing social conditions which breed conflict and violence How to change it? Various proposed solutions: u Facilitate replacement of authoritarian regimes by democracies u Promote social and economic development to eliminate poverty and suffering u Strive for equality and social justice u Replace capitalism with some form of socialism
n n While recognizing the need to address the root causes of conflict, antimilitarism focuses on the means of political struggle Arms buildups themselves make war more likely Military budgets are a burden on the economy The incidence of warfare can be reduced if states cut their armaments to a minimum
n n n The idea of disarmament Traditional: compelling a defeated state to disarm In the 20 th century: a new international practice - mutual arms control and disarmament by international treaties Natural reaction to the Era of Global Conflict, which threatens the very existence of humanity u Limit the scale of wars u Respond to public antiwar sentiment Opposition to arms buildups dates back to late 19 th century
n n n Lord Welby, British Secretary of the Treasury, March 1914: u “We are in the hands of an organization of crooks. They are politicians, generals, manufacturers of armaments and journalists. All of them are anxious for unlimited expenditure, and go on inventing scares to terrify the public. ” Sir Edward Grey, British Foreign Secretary: u “Great armaments lead inevitably to war. ” Quotes from David Cortright. Peace: A History of Movements and Ideas. Cambridge University Press, 2008, p. 98
n n n After WWI Covenant of the League of Nations, Article 8: u “The maintenance of peace requires the reduction of national armaments to the lowest point consistent with national safety. ” 1922: the Five Power Naval Limitation Treaty, extended and Conferences of 1922 and 1930 A historic precedent was set World Disarmament Conference of 1932 – no success, buildup of international tensions, new wars
n n n After WWII Demobilization everywhere; strong desire for peace Creation of the United Nations Organization But the Cold War generated a new arms race Its cutting edge were nuclear weapons And the conventional (non-nuclear) arms race continued
n The First Nuclear Age: 1945 -1991
Trinity, history’s first nuclear explosion, Alamogordo, NM, July 16, 1945
Robert Oppenheimer, father of the atomic bomb
http: //www. youtube. com/watch? v= n 8 H 7 Jibx-c 0&feature=related
World’s first nuclear weapon: The Little Boy, explosive yield 12 -15 kilotons (1/100 of B 83 bomb)
Hiroshima, August 6, 1945
Father of the Soviet bomb: Igor Kurchatov
Young Andrei Sakharov played a key role in the Soviet nuclear weapons program, later became a dissident
1961, Soviet Union: The biggest nuclear bomb ever built: “Tsar-bomba”, “Big Ivan”. Power – 57 megatons (40, 000 more than Little Boy of 1945)
The US-Russian nuclear arms race
USAF Gen. Curtis B. Le. May, Chief of the Strategic Air Command, advocated all-out nuclear war to destroy the Soviet Union and Red China
Chinese Communist leader Mao Zedong advocated waging nuclear war on the US “to free the world from imperialism”
n n n Late 1950 s: birth of the international movement for nuclear disarmament First diplomatic moves toward arms limitation 1961: US and Russian diplomats design a joint proposal for general and complete disarmament 1961: The Antarctic Treaty is signed banning the use of Antarctica for military purposes. See the full text: http: //www. nuclearfiles. org/menu/library/treaties/antarctic/t rty_antarctic_1961 -06 -23. htm
October 1962: the Cuban Missile Crisis, the turning point
n n The shock of the 1962 Cuban Missile Crisis compels 3 nuclear weapons states into joint measures to reduce the nuclear threat 1963: The first arms control treaty signed in Moscow. The Partial Test Ban Treaty banning nuclear tests on the ground, in atmosphere and in outer space. Underground tests remain legal. See the full text: http: //www. nuclearfiles. org/menu/library/treaties/partial-test -ban/trty_partial-test-ban_1963 -10 -10. htm
The paradox of the nuclear arms race n Nuclear weapons are unfit for warfighting n They can only serve as deterrents n But once deterrence becomes mutual, a new situation emerges n A powerful interest in mutual survival and security between the opposing sides n That becomes a basis for joint actions for stability, security, disarmament n On that basis, a global system of arms control has been erected
n Means of delivery: u Ballistic missiles (IC, I, SR) – ground-based, sea-based u SLBMs u Aerial bombs u Cruise missiles (air-, sea-, ground-launched) u A special category: human-delivered devices
US B 83 nuclear bomb, explosive yield – 1. 2 megatons
Launch of a Minuteman III ICBM (US)
Topol-M ICBM (Russia)
Tu-95 strategic bomber (Russia)
B-52 strategic bomber (US)
”The White Swan”: Tu-160 strategic bomber (Russia)
B-2 A strategic bomber (US)
Ballistic missile defence system, space-based (design)
A “suitcase bomb” W 54 Special Atomic Demolition Munition (SADM) was produced in the United States until 1988. The W 54 was a very small 0. 01 or 0. 02 -1 kiloton suitcase nuke with the entire unit weighing in at under 163 pounds
Destructive Effects n Nuclear explosions produce both immediate and delayed destructive effects. u Immediate n Blast, thermal radiation, prompt ionizing radiation are produced and cause significant destruction within seconds or minutes of a nuclear detonation. u Delayed n radioactive fallout and other possible environmental effects, inflict damage over an extended period ranging from hours to years u u u Hiroshima and Nagasaki, Aug. 1945: 0. 25 million lives Total destructive power of existing NWs: 150, 000 times the bombs which destroyed Hiroshima and Nagasaki 2, 000 times the firepower used in all of WWII including the nuclear bombs dropped on Japan
Main existing arms control treaties Partial Test Ban Treaty of 1963 INF, signed in 1987 START-I, signed in 1991 SORT, signed in 2002 CTR agreements The Outer Space Treaty NPT, signed in 1968, went into effect in 1970 CTBT, signed in 1996, still not fully in effect
n n n 1967: The Outer Space Treaty limits the use of outer space for military purposes http: //www. nuclearfiles. org/menu/library/treaties/weapons-in -space/trty_weapons-in-space_1967 -10 -10. htm 1970: The Nuclear Nonproliferation Treaty. States without nuclear weapons agree not to acquire them – in exchange for the commitment of nuclear-armed states to move towards full nuclear disarmament – http: //www. nuclearfiles. org/menu/library/treaties/nonproliferation-treaty/index. htm 1972: The Seabed Treaty prohibiting the emplacement of weapons of mass destruction on the seabed http: //www. nuclearfiles. org/menu/library/treaties/seabed/trty _seabed_1972 -05 -18. htm
n n 1972: US and USSR sign SALT-I agreements (the ABM Treaty and the Interim Agreement on Strategic Offensive Weapons). Ban on ballistic missile defenses and limitation of offensive nuclear arsenals – http: //www. fas. org/nuke/control/abmt/text/abm 2. htm http: //www. nuclearfiles. org/menu/library/treaties/usaussr/trty_us-ussr_interim-agreement-icbms_1972 -0526. htm
n 1979: US and USSR sign the SALT-II Treaty to strengthen and finalize the provisions of SALT-I. But the US Senate refuses to ratify the document. http: //www. fas. org/nuke/control/salt 2/index. html
n 1987: US and USSR sign the Intermediate Nuclear Forces Treaty banning all nuclear-armed ground-launched ballistic and cruise missiles with ranges between 500 and 5, 500 kilometers (about 300 to 3400 miles) and their infrastructure. The INF Treaty was the first nuclear arms control agreement to actually reduce nuclear arms, rather than establish ceilings that could not be exceeded. Under its provisions, about 2, 700 nuclear weapons were destroyed. http: //www. fas. org/nuke/control/inf/index. html
n n 1991: US and USSR sign the Strategic Arms Reductions Treaty (START-I), which leads to the reduction of the two sides’ strategic arsenals by 30 -40%. The Treaty expires in December 2009. http: //www. fas. org/nuke/control/start 1/index. html
n n 1993: US and Russia sign the second Strategic Arms Reductions Treaty (START-II), providing for further reductions in strategic offensive arsenals – down to 30003500 warheads. The Russian Parliament ratified the Treaty with a condition that the ABM Treaty of 1972 banning ballistic missile defenses must remain in force. In 2002, after President George Bush declared that the US was pulling out of the ABM Treaty in order to clear the way for the deployment of US ballistic missile defense systems, Russia withdrew from START-II. http: //www. fas. org/nuke/control/start 1/index. html
n 2002: US and Russia sign the Moscow Treaty on Strategic Offensive Reductions (SORT), which will reduce the numbers of operationally deployed strategic offensive weapons of the two sides to 1700 -2200 by the year 2012. The Treaty is currently in force. http: //www. fas. org/nuke/control/sort/fs-sort. html
n April 2009: Presidents Obama and Medvedev declared that the US and Russia will move toward complete elimination of nuclear weapons. Negotiations on a new US -Russian treaty to further reduce their strategic nuclear arms are in progress. http: //www. carnegieendowment. org/publications/index. cfm ? fa=view&id=24254
n The Nuclear Weapons Archive: http: //nuclearweaponarchive. org/
Results of international efforts to tame the nuclear threat n No nuclear weapon used since 1945 n Almost no testing (with a few exceptions) n The arsenals have been reduced by 2/3 n Most treaties work, compliance assured n Proliferation has been minimal n The Cold War is over – one of the causes being the nuclear arms race and the emergence of a sense of common interest in preventing it
n In the 21 st century, the Second Nuclear Age began…
The four threats n 1. Nuclear terrorism n 2. Nuclear proliferation n 3. Existing nuclear arsenals u u u n Their size and posture The NPT linkage Policies of US and Russia in the past decade 4. Climate change linkages u u u New interest in nuclear power generation and trade in nuclear fuels Climate change will undermine international security and raise the risks of nuclear power disasters Environmental impact of the use of nuclear weapons
Nuclear terrorism n The threat is real, the main source is Al Qaeda u u A radiological attack with or without a conventional explosion (use of chemical or biological agents also possible) A real nuclear weapon Steal or buy « Pakistan as the key state of concern « n Can a government knowingly provide terrorists with a nuclear weapon? u u n Highly unlikely: governments protect their power, a state caught doing this will be severely punished Rogue elements, organized crime networks Solutions: u u u Smart anti-terrorist policies Better security of storing nuclear weapons and materials Better security to forestall and prevent terrorist acts
Nuclear proliferation n 3 pillars of Non-Proliferation Treaty (NPT), which went into effect in 1970: n 1. NON-PROLIFERATION u n 2. DISARMAMENT u n Commitment of non-nuclear weapons states not to acquire NWs Commitment of nuclear weapons states to give up their nuclear weapons 3. RIGHT TO PEACEFUL USE u Every state has a right to use nuclear energy for peaceful purposes
How effective is the Treaty? n 189 of the world’s 193 countries are parties to NPT n Only 3 states acquired nuclear weapons after the treaty was signed: India, Pakistan, North Korea n Neither India nor Pakistan have signed the Treaty n Israel developed nuclear weapons secretly before the Treaty and never signed n North Korea did sign, but violated and withdrew in 2003 n Libya did sign, violated, but then came clean n South Africa canceled its program and signed n Ukraine, Belarus, Kazakhstan became de facto nuclear weapons states by default after the dissolution of the Soviet Union, but they gave up the Soviet weapons – and signed
The problems n 1. How effective is the monitoring? u n 2. How to prevent weapons programs evolving from peaceful programs? u u n Fairly effective, but can be made better International nuclear fuel bank Fissile materials ban 3. How to remove rationales for nuclearization? u u Responsibility of the main nuclear powers A renewed serious push for disarmament Reform of the international order to reduce potential for conflict No nation should have this kind of power
Threats from existing nuclear arsenals
The numbers – over 23, 000? n The US: the requirement for this many weapons arises from the Nuclear Weapons Employment Policy, signed by then–defense secretary Donald Rumsfeld in 2004, which states in part: n “U. S. nuclear forces must be capable of, and be seen to be capable of, destroying those critical war-making and war-supporting assets and capabilities that a potential enemy leadership values most and that it would rely on to achieve its own objectives in a postwar world. ” n Bulletin of the Atomic Scientists, March/April 2009, p. 60
Arguments against reductions – the US n The US needs a large arsenal to defend itself and its interests around the globe u u “Extended deterrence” Dreams of first-strike capability Main targets: Russia and China n For 2, 000 deployed warheads, US needs to have several times more in reserve Russian arguments n Russia cannot defend itself without nuclear weapons n Its defence spending is 1/10 of the US level, while its security challenges are much greater than those faced by US n
Operational status n Dr. Bruce Blair, former Minuteman ICBM Launch Control Officer and now President of the World Security Institute (Washington, DC): n U. S. standard operating procedures still envisage massive retaliation to a presumed strike in timeframes that allow only for rote, lightning-fast, checklist- based decision- making. Such decisions could starkly affect the survival of civilization. n “Both the United States and Russia today maintain about one-third of their total strategic arsenals on launch -ready alert. Hundreds of missiles armed with thousands of nuclear warheads-the equivalent of about 100, 000 Hiroshima bombs-can be launched within a very few minutes. ” n http: //www. reachingcriticalwill. org/legal/npt/prepcom 08/ngostateme nts/Op. Status. pdf
Modernization of weapons n Impact on strategic stability n New types n Small is usable? n Development of missile defence systems n High-accuracy conventional weapons n Space weapons
Ecological impact n The detonation of these weapons in conflict would likely kill most humans from the environmental consequences of their use. Ice Age weather conditions, massive destruction of the ozone layer, huge reductions in average global precipitation, would all combine to eliminate growing seasons for a decade or longer. . . resulting in global nuclear famine. Even a "regional" nuclear conflict, which detonates the equivalent of 1% of the explosive power in the operational US-Russian arsenals, could cause up to a billion people to die from famine (see http: //climate. envsci. rutgers. edu/pdf/Robock. Toon. Sci. Am Jan 2010. pdf and www. nucleardarkness. org )
n If India and Pakistan were to fight a nuclear war: http: //www. encyclopedia. com/video/ZH 6 I mz. Zurt. M-nuclear-war-between-indiapakistan. aspx
Solutions n The main responsibility lies on the US and Russia n Without their joint leadership, nothing can be done n This is why the Obama initiative is so important The new START treaty n Reductions by 30% n Verification n Resumption of serious arms control based on equal security Nuclear Security Conference – Washington, April 2010 NPT Review Conference – New York, May 2010
Further steps Deeper cuts to eliminate potential for first strike De-alerting the weapons Cooperative missile defence Etc.
Can nuclear weapons be prohibited? Yes, they can! n Negotiations toward prohibition of nuclear weapons will by necessity be protracted, but it should be remembered that the NPT was negotiated from 1959 to 1968. n Prohibition could either be negotiated through an analogous protracted international process, or it might alternatively be obtained by a covenant among the existing nuclear weapons states turning over their nuclear weapons to international management.
n n Obviously, this will be come possible only with fundamental changes in the international system – to reduce sources of conflict and promote peaceful ways of resolving differences Nuclear disarmament and reform of the international system must go hand in hand
n n The proposal for an International Nuclear Weapons Convention, to be signed by 2020 The NWC would prohibit development, testing, production, stockpiling, transfer, use and threat of use of nuclear weapons. States possessing nuclear weapons will be required to destroy their arsenals according to a series of phases The Convention would prohibit the production of weaponsusable fissile material and require delivery vehicles to be destroyed or converted to make them incapable of use with nuclear weapons.
10 reasons to ban nukes, by David Krieger n 1. Fulfill Existing Obligations. The nuclear weapons states have made solemn promises to the international community to negotiate in good faith to achieve nuclear disarmament. The United States, Russia, Britain, France and China accepted this obligation when they signed the Non. Proliferation Treaty (NPT), and extended their promises at the 1995 NPT Review and Extension Conference and again at the 2000 NPT Review Conference. India and Pakistan, which are not signatories of the NPT, have committed themselves to abolish their nuclear arsenals if the other nuclear weapons states agree to do so. The only nuclear weapons state that has not made this promise is Israel, and surely it could be convinced to do so if the other nuclear weapons states agreed to the elimination of their nuclear arsenals. The International Court of Justice, the world's highest court, unanimously highlighted the obligation to nuclear disarmament in its 1996 Opinion: "There exists an obligation to pursue in good faith and bring to a conclusion negotiations leading to nuclear disarmament in all its aspects under strict and effective international control. " This means an obligation to reduce the world's nuclear arsenals to zero.
10 reasons to ban nukes, by David Krieger n 2. Stop Nuclear Weapons Proliferation. The failure of the nuclear weapons states to act to eliminate their nuclear arsenals will likely result in the proliferation of nuclear weapons to other nations. If the nuclear weapons states continue to maintain the position that nuclear weapons preserve their security, it is only reasonable that other nations with less powerful military forces, such as North Korea, will decide that their security should also be maintained by nuclear arsenals. Without substantial progress toward nuclear disarmament, the Non-Proliferation Treaty will be in jeopardy when the parties to the treaty meet for the NPT Review Conference in the year 2005.
10 reasons to ban nukes, by David Krieger n 3. Prevent Nuclear Terrorism. The very existence of nuclear weapons and their production endanger our safety because they are susceptible to terrorist exploitation. Nuclear weapons and production sites all over the world are vulnerable to terrorist attack or to theft of weapons or weaponsgrade materials. Russia, due to the breakup of the former Soviet Union, has a weakened command control system, making their substantial arsenal especially vulnerable to terrorists. In addition, nuclear weapons are not helpful in defending against or responding to terrorism because nuclear weapons cannot target a group that is unlocatable.
10 reasons to ban nukes, by David Krieger n 4. Avoid Nuclear Accidents. The risk of accidental war through miscommunication, miscalculation or malfunction is especially dangerous given the thousands of nuclear warheads deployed and on high alert status. Given the short time periods available in which to make decisions about whether or not a state is under nuclear attack, and whether to launch a retaliatory response, the risk of miscalculation is high. In addition, the breakup of the former Soviet Union has weakened Russia's early warning system, since many parts of this system were located outside of Russia, and this increases the likelihood of a nuclear accident. Read more about nuclear accidents.
10 reasons to ban nukes, by David Krieger n n n 5. Cease the Immorality of Threatening Mass Murder. It is highly immoral to base the security of a nation on the threat to destroy cities and potentially murder millions of people. This immoral policy is named nuclear deterrence, and it is relied upon by all nuclear weapons states. Nuclear deterrence is a dangerous policy. Its implementation places humanity and most forms of life in jeopardy of annihilation. 6. Reverse Concentration of Power. Nuclear weapons undermine democracy by giving a few individuals the power to destroy the world as we know it. No one should have this much power. If these individuals make a mistake or misjudgment, everyone in the world will pay for it. 7. Promote Democratic Openness. Decisions about nuclear weapons have been made largely in secrecy with little involvement from the public. In the United States, for example, nuclear weapons policy is set forth in highly classified documents, which are not made available to the public and come to public attention only by leaks. On this most important of all issues facing humanity, there is no informed consent of the people.
10 reasons to ban nukes, by David Krieger n n 8. Halt the Drain on Resources. Nuclear weapons have drained resources, including scientific resources, from other more productive uses. A 1998 study by the Brookings Institution found that the United States alone had spent more than $5. 5 trillion on nuclear weapons programs between 1940 and 1996. The United States continues to spend some $25 -$35 billion annually on research, development and maintenance of its nuclear arsenal. All of these misspent resources represent lost opportunities for improving the health, education and welfare of the people of the world. 9. Heed Warnings by Distinguished Leaders. Distinguished leaders throughout the world, including generals, admirals, heads of state and government, scientists and Nobel Peace Laureates, have warned of the dangers inherent in relying upon nuclear weapons for security. These warnings have gone unheeded by the leaders of nuclear weapons states. Read more about the Nuclear Age Peace Foundation’s Appeal to End the Nuclear Weapons Threat to Humanity and All Life.
10 reasons to ban nukes, by David Krieger n 10. Meet Our Responsibility. We each have a responsibility to our children, grandchildren and future generations to end the threat that nuclear weapons pose to humanity and all life. This is a responsibility unique in human history. If we do not accept responsibility to speak out and act for a world free of nuclear weapons, who will? | <urn:uuid:8d61ace7-cacf-4138-90c3-dfa1e57450e4> | CC-MAIN-2021-21 | https://present5.com/arms-and-disarmament-n-the-conventional-logic/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.40/warc/CC-MAIN-20210514060536-20210514090536-00497.warc.gz | en | 0.906283 | 5,163 | 2.609375 | 3 |
In the very first letter issued by Geevarghese Bava to HH Patriarch Yakub III after the 1958 unity, he used the title of the Catholicos seated on the 'Throne of the East of Apostle St. Thomas'. (source: the Supreme Court majority judgement of 1995) But let's begin at the beginning. In this case, a chronological approach*, listing instances where the 'throne of St. Thomas' was referred to, would clarify things: a.) 1st Century CE. From the Gospel According to Matthew, Chapter 19, Verses 27-28: "Then answered Peter and said unto him, Behold, we have forsaken all, and followed thee; what shall we have therefore? And Jesus said unto them, verily I say unto you, that ye which have followed me, in the regeneration when the Son of man shall sit in the throne of his glory, ye also shall sit upon twelve thrones, judging the twelve tribes of Israel."
St. Thomas, being one of the 12 apostles, had his own throne according to the scheme of Our Lord Jesus.
b.) Gregorius Bar Hebraeus (1226-1286 CE). In the Hoodoyo Canon (Book of Directions, Paris, 1898), Chapter VII, Section I, Bar Hebraeus uses the term throne with respect to Patriarchs, Metropolitans and Bishops alike. He also refers to the ceremony of "enthronement" for bishops. Additionally, Bar Hebraeus regards St Thomas as the first bishop of the East. Implication: Even bishops had thrones, and since St Thomas was regarded as the first bishop of the East, the use of the term 'Throne of St Thomas' is appropriate.
c.) 1905 (most likely). The judgement of the Royal Court of Appeal, Cochin, used the expression "Melpattakaran of the throne in Malayalam" (probably the Arthat case).
d.) Early 20th Century. In the book, The Indian Church of St Thomas, by E M Philip (second edition, page 253), the following reference is found: "He upheld the contention of Mar Thomas Athanasius, and found that the Syrian Church was independent of the Patriarch of Antioch. Of course, the majority judgment prevailed, and Mar Dionysius V was established on the throne of St. Thomas."
It must be mentioned that E M Philip is an iconic figure for the Patriarchal faction.
e.) 1912. From the Bull issued on September 17, 1912**, by HH Abdul Masih Patriarch from St Mary's Church, Niranam: "...By virtue of the order of the office of the Shephard, entrusted to Simon Peter by our Lord Jesus Messiah, we are prompted to perpetuate for you Catholicos or Mapriyana to serve all spiritual requirements that are necessary for the conduct of the order of the holy true Church in accordance with its faith.... With Geevarghese Mar Dionysius Metropolitan, who is the head of the Metropolitans in Malankara and with other Metropolitans, Ascetics, Deacons and a large number of faithfuls, we have ordained in person our spiritually beloved Evanios in the name of Baselius as Mapriyana, i.e., as the Catholicos on the Throne of St.Thomas in the East, i.e., in India and other places, at the St.Mary's Church, Niranam on Sunday, 2nd Kanni, 1912 A.D. as per your request."
f.) 1934. Reference to the throne of St Thomas was made in the notice issued to vicars, priests, trustees, and parishioners to attend the MD Seminary meeting of the Malankara Association that year. This notice was presented as exhibit A-4 to the honourable Supreme Court of India during the course of the hearing leading to the 1995 judgement.
g.) 1958. After the Supreme Court gave its momentous judgement on September 12, 1958, rejecting the contention of the patriarchal faction that the followers of Vattasseril Thirumeni had become heretics, the efforts towards unity gathered pace. On December 9, 1958, Patriarch HH Yakub III issued a Bull to Geevarghese Bava, which among other things, included the following words: "..We also were longing for peace in the Malankara Church and the unity of the organs of the one body of the Church. We have expressed this desire of ours very clearly in the apostolic proclamation (reference is to the proclamation dated November 11, 1957) we issued to you soon after our ascension on the Throne. This desire of ours gained strength with all vigor day by day without in any way slackened and the Lord God has been pleased to end the dissension through us. Glory be to him. To bring forth the peace in the Malankara Church we hereby accept with pleasure Mar Baselious Gheevarghese as Catholicose. Therefore we send our hearty greetings"
Following many rounds of negotiations, the reply to this letter was exchanged with the Patriarchal representative Mar Julius Elias on December 16, 1958, at the Old Seminary in Kottayam by Geevarghese Bava. This letter (referred to earlier), presented as exhibit A 20 to the honorable Supreme Court of India, found the Catholicos describing himself in the following words: "the meek Baselious Catholicos named as Geevarghese II seated on the Throne of the East of Apostle St. Thomas". He went on to add, "We, for the sake of peace, in the Church, are pleased to accept Moran Mar Ignatius Yakub III as Patriarch of Antioch subject to the constitution passed by the Malankara Syrian Christian Association and now in force."
The Patriarch's reply to the letter came four months later, dated April 8, 1959 (exhibit A 23 with the Supreme Court), wherein he made his displeasure clear: "...Your use of the expression holiness with your name is not right. This expression can be used only by the Patriarchs. Your assertion that you are sitting at the Throne of St. Thomas is unacceptable. No one has ever heard of St. Thomas establishing a Throne. Similarly your assumption that yours is the Church of the East and that you are Catholicos of the East is equally untrue and unwarranted..."
From all the above, we can see that the term 'Throne of St Thomas' has a long and illustrious history and was not suddenly sprung from Devalokam in the 1970s. As early as 1959, the Patriarch had objected to its usage. This gives the lie to the urban legend that the reference to the throne of St Thomas was unheard of before the 1970s.
Another Jacobite urban legend that has now acquired the sanctity of truth through constant repetition is that the honourable Supreme Court of India in its 1995 majority judgement said that the title 'Throne of St Thomas' should be understood merely as an honorific. This is another instance of spin, and I intend to use this opportunity to expose it.
Fact 8 ii: by Georgy S. Thomas, Bangalore:
The fact that titles like 'His Holiness', 'Throne of St Thomas' etc. are honorifics was not in debate anywhere. That they were so was stated by Geevarghese Bava himself in one of his letters to HH Patriarch Yakub III which was presented as exhibit A 24*** before the honorable Supreme Court. For those who want more proof, I am quoting the relevant references: "...The propriety of using the title 'His Holiness' along with my name is questioned. Now I must bring to your notice the fact that customarily the same epithets have been attached to the Patriarch and the Catholicos in our church as evinced by our Holy writs and other books..." Here Geevarghese Bava describes the title 'His Holiness' as a mere epithet. He states further, "...The Throne of St. Thomas: Your Holiness says 'It is never heard that St. Thomas established a throne of the Catholicos or the Mapriano, either in India or in any other place'. I must, without presumption, ask your Holiness, whether for that matter, any apostle has established a throne anywhere. Is it not that such honors have been connected with them in latter times..." Here Geevarghese Bava describes the title 'Throne of St Thomas' as an honor or honorific. Therefore, we didn't want the Supreme Court to confirm that the titles are mere honorifics. We ourselves told the court it is so. Instead, what we expected the honorable court to do was to pronounce a verdict on whether it was appropriate for the Catholicos to use such titles. And I am happy to state here that the court ruled in our favor.
Why Did The Supreme Court Uphold The Usage Of Titles By The Catholicos?
Justice B P Jeevan Reddy, who wrote the majority judgement, delivered his verdict through five fundamental facts (findings) and 11 summary findings. I will cite the relevant summary finding, and use quotes from the related fundamental fact to provide clarity.
Here's the relevant summary finding no.7 in full:
"Though the Patriarch raised objections to the honorifics (e.g., use of 'Holiness' with the name of the Catholicos and his assertion that he was seated 'on the Throne of St.Thomas in the East') and to the qualification added by the Catholicos in his Kalpana Ex.A.20 (i.e., accepting the Patriarch subject to the Constitution), the Patriarch must be deemed to have given up and abandoned all those objections when he came to India, pursuant to a canonical invitation from the Malankara Synod and installed and consecrated the new Catholicos on May 22, 1964. It is also worth noticing that a day before such installation/consecration, the Patriarch took care to have the territorial jurisdiction of Catholicate duly defined and de-limited by excluding certain areas in the Middle-East from the jurisdiction of the Catholicos."
Here it is in black and white. What the Supreme Court has said is that though the Patriarch raised objection to the use of the honorifics, his actions indicate that he had given up and abandoned all those objections to the use of the titles 'Holiness' and 'seated on the Throne of St Thomas' with respect to the Catholicos.
What does the Supreme Court mean by its reference to the demarcation of jurisdiction? Let me explain. HH Ougen I was consecrated as the new Catholicos by HH Patriarch Yakub III on May 22, 1964. A day before the consecration, the Patriarch demanded that the territorial jurisdiction of the Catholicos should be demarcated. Accordingly, the then united synod of the Malankara Sabha resolved the following:
"Hereafter the jurisdiction of the said see shall not be extended to the Arabian countries or Persia and that the see includes only eastern countries situated on the east of them. But H.H., the Patriarch shall agree to continue the present system of sending priests to the Arabian gulf countries from Malankara for ministering to the spiritual needs of the Malayali Parishioners as long as Malayalis stay there." (source: Justice Jeevan Reddy's judgement)
Why did the Patriarch demand the demarcation? Well, when HH Patriarch Abdul Masih consecrated a Catholicos for Malankara in 1912, it was construed as the transfer of the Maphrianate from Tikrit to Malankara. Based on this, the possibility existed that the Malankara Catholicos could at some point raise a claim to all the ancient territories in the Middle-East administered by the Iraqi Maphrianate. For instance, areas under the present Syriac archbishoprics of Baghdad and Mosul were once under the Maphrianate. Therefore, HH Yakub III, in order to prevent the possibility of such a claim, demanded the demarcation. Is this my imagination running riot, or does it find support from the Supreme Court's judgement? Let's turn to the fundamental facts listed by Justice Jeevan Reddy. Here's a relevant quote from fundamental fact number c.):
"As stated supra, the Patriarch came to India pursuant to a canonical invitation from the Malankara Synod and consecrated and duly installed the new Catholicos (Mar Ougen), who was elected by the Malankara Association in accordance with the 1934 Constitution. Before he did so, the Patriarch took care to see that the respective territorial jurisdictions of the Patriarchate and the Catholicate are duly defined and demarcated. The Middle East which was supposed to be hitherto under the jurisdiction of the Catholicos was excluded from his jurisdiction confining his authority to India and East alone."
Please notice the reference to "the Middle East which was supposed to be hitherto under the jurisdiction of the Catholicos was excluded from his jurisdiction..."
Okay, this proves why the Patriarch demanded the demarcation of jurisdiction. But why did the Supreme Court link this event to the usage of titles? Well, what the Supreme Court seems to imply is that if the Patriarch was so fundamentally opposed to the use of the honorifics, he could have raised the condition that he wouldn't consecrate HH Ougen unless the Malankara Sabha stated in writing that it was ready to abandon the use of such titles. After all, did he not raise the demarcation issue as a condition? In the same manner, why didn't he raise the honorifics issue, the Supreme Court seems to be asking. Therefore, the court concluded that the Patriarch's actions indicate that "he must be deemed to have given up and abandoned all those objections when he came to India". It seems the Patriarch himself cared more about territory demarcation than about the use of such titles. It's only our Jacobite brothers who seem to be bothered. I hope they will review their position.
* Notes: In the chronology list, a, b, c, d and f are obtained from the minority judgement of Justice R M Sahai; e and g are from the majority judgement. Three judges - Justices B P Jeevan Reddy, S C Sen and R M Sahai - heard the suit. The same documents were made available to all three with the same numberings. Justice Sahai in his judgement has referred to the documents in detail. I found in them a treasure trove of information and have cited them. Justice Sahai's rulings are given vide seven findings and three declarations. Since it's the minority ruling, I have not used any of them.
** During the course of the hearing, the patriarchal faction alleged that this letter (exhibit A 13) was not authentic, and claimed that HH Abdul Masih had issued only one letter (exhibit A 14). The IOC claimed that without the first letter, there couldn't have been the second, since one logically follows the other. The Supreme Court in its majority view agreed with our contention. I quote from the relevant fundamental fact (finding): "Now what do the above facts signify? Do they not show that Patriarch had, by 1964, recognized and accepted the revival of the Catholicate, A.13, A.14 and the 1934 Constitution? Do they not show that the Patriarch had also given up his objections to the use of the words 'seated on the throne of St. Thomas in the East' and to the "qualification" added by Catholicos in A.20? We think, they do." The Supreme Court has clearly stated that the actions of the Patriarch showed that he had accepted both A 13 (first letter) and A14 (second letter) of HH Abdul Masih. Let nobody claim otherwise.
*** Quoted in Justice R M Sahai's verdict.
Georgy's 'Myth' series continues to explore the history and the authority of the 'St. Thomas' throne as claimed by the Catholicose faction. It is really funny to read his conclusion, which highlights the fallacy of the 'long' history of the controversial throne attributed to St. Thomas. First read his conclusion.
I quote, "From all the above, we can see that the term `Throne of St Thomas' has a long and illustrious history and was not suddenly sprung from Devalokam in the 1970s. As early as 1959, the Patriarch had objected to its usage. This gives the lie to the urban legend that the reference to the throne of St Thomas was unheard of before the 1970s."
We have to examine this statement at different levels on the grounds of Georgy's arguments. But before elaborating such a discussion, I would like to remind the readers that the Catholicose faction was received into the SO Church and the 'peaceful co-existence' came into effect only in 1958. Georgy agrees that in 1959 the Patriarch objected to it. Let us look into the time gap between these incidents. It was on Dec. 16 that the Catholicose was accepted by the Patriarch. The reply of the Catholicose was also of the same date, and it reached the Patriarchate; the reply from the Patriarch reached Kottayam in four months' time. The time gap between the acceptance and the questioning of the usage was only four months. Remembering the slow movement of mail in those times, we can be sure that the Patriarch's disapproval of this usage was comparatively immediate.
Even though Geevarghese Bava resisted the Patriarch's disapproval at the beginning, we can see that he too slowly withdrew from this usage, I believe for the sake of unity. We see that Augen Bava almost completely refrained from using this title till the time of the controversy. This being the fact, Georgy tries to 'spin', 'spin' and 'spin' to 'prove' that the St. Thomas throne is not a controversy that started in 1970 but one that existed from 1959 onwards! See the 'long illustrious history' of this throne! We will see further details below.
Now let us see the threads that he used to spin his myth.
1. He quotes the verse Matt. 19:27-28. Georgy says that St. Thomas has a throne because the Lord promised all the twelve apostles twelve thrones. The objection to Georgy's argument on the controversial St. Thomas throne is very evidently stated in that reply to Peter. There the Lord speaks of the reward the apostles will receive for following Him, forsaking everything. It is said there that "in the regeneration... when the Son of man shall sit on the throne of his glory..." the apostles also "shall sit upon twelve thrones, judging the twelve tribes of Israel". Georgy concludes this argument with the following words.
"St Thomas, being one of the 12 apostles, had his own throne according to the scheme of Our Lord Jesus."
It would be a correct statement if he changed it from the past tense to the future tense by changing 'had' to 'will have'. Then it will be 'St. Thomas ...will have his own throne ... ' to sit with the Lord to judge the twelve tribes. This will be the sharing of the glory by the apostles on the final judgment day. St. Thomas will have a throne there to sit on, to judge the twelve tribes of Israel, including the community and society in which Peter and his fellow apostles forsook everything to follow the Lord. This is the promise of the sharing of the eschatological glory. This throne has nothing to do with the apostolic throne of the succession of priesthood, nor with the controversial claim that recently originated in Malankara.
2. I quote fully the next argument raised by Georgy.
"Gregorius Bar Hebraeus (1226-1286 CE). In the Hoodoyo Canon (Book of Directions, Paris, 1898), Chapter VII, Section I, Bar Hebraeus uses the term throne with respect to Patriarchs, Metropolitans and Bishops alike. He also refers to the ceremony of ``enthronement'' for bishops. Additionally, Bar Hebraeus regards St Thomas as the first bishop of the East. Implication: Even bishops had thrones, and since St Thomas was regarded as the first bishop of the East, the use of the term 'Throne of St Thomas' is appropriate."
In the first statement he was saying that St. Thomas had a throne because he will have a throne on the final judgement day. When he quotes Bar Ebroyo and his Nomocanon, he justifies his wishful thinking that St. Thomas has a throne because all bishops have a throne. Bishops de facto have thrones by 'Sunthroneeso' (enthronement). By this argument, is Georgy 'degrading' the Apostle Thomas to the level of a bishop, or is he equating the throne of the Catholicose to the level of an episcopal throne?
He also affixes a note to the Nomocanon, known as the Hoodaya canon, in brackets: 'Book of Directions, Paris 1898'. This note is a big twist that distorts the authentic version of the canon. Paul Bedjan, a Roman Catholic Syriac scholar priest, printed an edited version of the Nomocanon in 1898, and it was published in Paris. Thus came the name Paris canon. He has written an introduction to this edition in French saying that he has edited (modified) the manuscript of Bar Ebroyo to suit Roman ecclesiology. Examples are the directions in this version authorizing all bishops to consecrate Holy Mooron and proclaiming the Patriarch of Rome as the general head of all Patriarchs (Reesh Patriarch). Upholding the Roman edition of the Nomocanon while repudiating the directions in it is another 'spin' and a twist of fact by the Catholicose faction. The whole 1934 constitution becomes a paradox if someone takes an affirmative stand that this Paris canon is to be followed seriously in IO administration.
Finally, coming back to Bar Ebroyo, I would like to transliterate the reference from him quoted by Georgy to state that St. Thomas was referred to as the first bishop of the east: 'Thooma Sleeho reesh kohne kadmoyo de madanho'. Reesh Kohne means high priest, kadmoyo = first, and de madanho = of the East. (St. Thomas the first high priest of the East.) The Payne Smith dictionary differentiates between 'Reesh Kohne' and 'Reesh Abohoso' (= Patriarch). This difference has a very important meaning in reference to our topic of discussion. Reesh Kohne is a bishop, not a Patriarch. What does it mean? There is no reference here that he consecrated his successor Adai, nor that he established a succession line. Bar Ebroyo himself says later in this book that the bishops of the east received investiture from Antioch. His Canon testifies that the Council of Nicaea confirmed the authority of the Patriarch of Antioch over the East. The Church in the east was included in the Patriarchate of Antioch, and hence there cannot be any duplication of authority in one church. This excludes all possibility of the existence of a throne in the church of the east. A throne, in the ecclesial meaning, is the apostolic priestly succession. It is not at all mentioned there. Surely, Bar Ebroyo gives no hint of the apostolic throne of St. Thomas. I have also noted that IO propagandists are interested only in this single sentence from Bar Ebroyo. They totally ignore all his other direct references on the relations between the Patriarchate and the Catholicate.
3. Georgy's references to the judgment of the Kochi Royal Court and to E.M. Philip are discussed together in this reply. In both cases the reference is to the episcopal throne. The 'throne in Malayalam' refers to the episcopate in Malankara. All episcopos are 'enthroned' to the episcopate, and no one has ever protested the use of 'throne' in this context. All bishops have the thrones of their episcopal sees. E.M. Philip also uses figurative language to refer to the church in India as the church of St. Thomas. His book is also titled 'The Indian Church of St. Thomas'. He refers by this usage only to the St. Thomas tradition, apostolic origin and antiquity of this Church, which the SOC upholds with high esteem. This honor is given to the Metropolitan in Malankara. It is never intended to refer to any equal status with the Patriarch or to any autocephalous church in Malankara. The Tablet at Rakkad Church gives this honor to none other than the delegate of the Patriarch. The 1972 declaration of the St. Thomas throne was not at all in this line. It was a declaration of independence from the Patriarch and the SOC. This is the core issue. You have to address this point in relation to autocephaly and the St. Thomas throne. Do any of your references on the St. Thomas throne prove the autocephaly of the Malankara Church or the equal status of the Malankara Metropolitan/Catholicose with the Patriarch? The St. Thomas throne was equated with the claim of autocephaly. It is in this context that all the judgments in the recent church case flatly denied the issue of autocephaly. Georgy, you yourself have stated that the Malankara Church is not autocephalous de jure.
4. Next is another controversial document, which has not been ratified by any other contemporary translation or even proper publication. H.G. Dr. Thomas Mor Athanasius has discussed this in detail in his book, 'Ithu Viswasathinte Karyam'. (See pages 40-41, or for the relevant quotation, pages 218-219 of my book, 'Perumpilly Thirumeni'.) Before going into the details of this, I would like to refer to the better-known Abdul Messiah document of Kumbhom 8, 1913. It admonishes all not to "slacken your Petrine faith". Note that he says 'Petrine', not the 'Thomite Apostolic faith'. For the unprejudiced understanding of my readers, I am quoting Georgy in full.
"From the letter issued on September 17, 1912**, by HH Abdul Masiah Patriarch from St Mary's Church, Niranam: '...By virtue of the order of the office of the Shepherd entrusted to Simon Peter by our Lord Jesus Messiah, we are prompted to perpetuate for you Catholicos or Mapriyana to serve all spiritual requirements that are necessary for the conduct of the order of the holy true Church in accordance with its faith.... With Geevarghese Mar Dionysius Metropolitan, who is the head of the Metropolitans in Malankara and with other Metropolitans, Ascetics, Deacons and a large number of faithfuls, we have ordained in person our spiritually beloved Evanios in the name of Baselius as Mapriyana, i.e., as the Catholicos on the Throne of St. Thomas in the East, i.e., in India and other places, at the St. Mary's Church, Niranam on Sunday, 2nd Kanni, 1912 A.D. as per your request.'
Here Georgy is attracted only to the mention of the St. Thomas throne in this document. Setting aside the apprehensions about the authenticity of this document, I am convinced that it actually argues entirely against Georgy's claims. See my points listed below.
1. The alleged author (!) of this letter writes it on his authority as the Shepherd, in virtue of the Petrine authority. Here it is evident that, even if we agree to all the 'rights and privileges' of this DEPOSED Patriarch, who acted without any knowledge of the Synod and the Church at large, he had no authority over the 'independent, autocephalous' Thomite Church in India as claimed recently by the IOC. Even if he refers to a St. Thomas throne, what he can do is nothing beyond his capacity as a (deposed) Patriarch of Antioch. He cannot transfer a St. Thomas throne from his Church, because there is no such throne there. He cannot consecrate in Malankara a Mafrian from the SOC without the knowledge of its synod and the ruling Patriarch. Nor can he de facto consecrate anyone to a Catholicate in the line of the Nestorian Church. Nor can he create a St. Thomas throne all by himself here in Malankara. All he could do, even though illegally and illicitly, was to act according to the whims and fancies dictated to him at Niranam, and the result was to create a 'moth-eaten' and illegitimate Mafrian with the title Catholicose. He did that, and at the same time strongly admonished the Church not to 'slacken' its bond with the Patriarchate.
2. Abdul Messiah in this Kalpana equates the Catholicose to a ('moth-eaten') Mafrian, contrary to the arguments made by Georgy earlier that the Malankara Catholicose is not in the line of the Mafrianate.
3. Abdul Messiah in the above quoted document instructs the Catholicose to perform his duties 'in accordance with the faith and the Malankara Metropolitan who is the head of the Metropolitans...' Here the cat is out! It clearly reveals the concept of the Metran faction of that time. Here it is evident that the Malankara Metropolitan is the actual head of the Church, and it makes very clear that the Catholicose at that time was only a titular position. The fact was that the Catholicose of the IO faction was above the Malankara Metropolitan on Sundays, and vice versa on all other days of the week.
4. Here again this document says that he has ordained "in person ...Mor Ivanios.. on the Throne of St. Thomas". Even if we agree to all of Georgy's claims, this document clearly says that it was the deposed Patriarch who "in person" ordained Mor Ivanios on the alleged throne. The 'apostolicity, long history and antiquity' of this 'illustrious' throne is well exposed in this document. Thank you, Georgy, for referring to this.
5. A few of the next citations are from the Notice Kalpana and the peace Kalpana from Geevarghese II Bava bearing the St. Thomas throne. His argument is that these were all exhibits in the Supreme Court of India and so they all have sanctity and legal appeal. The curse of even our intelligentsia is the fallacious notion that a document acquires approval and sanctity merely by being an exhibit in court. But here I am happy that Georgy has quoted the reference to the 1957 Kalpana of Yacob III Bava. It makes very clear that Yacob III intended peace, and that the peace initiative was not, as many think, a surrender after the 1958 judgment. I quote:
"Patriarch HH Yakub III issued a Bull to Geevarghese Bava, which among other things, included the following words: '...We also were longing for peace in the Malankara Church and the unity of the organs of the one body of the Church. We have expressed this desire of ours very clearly in the apostolic proclamation (reference is to the proclamation dated November 11, 1957) we issued to you soon after our ascension on the Throne. This desire of ours gained strength with all vigor day by day without in any way slackened and the Lord God has been pleased to end the dissension through us. Glory be to him. To bring forth the peace in the Malankara Church we hereby accept with pleasure Mar Baselious Gheevarghese as Catholicose. Therefore we send our hearty greetings...'
Following many rounds of negotiations, the reply to this letter was exchanged with the Patriarchal representative Mar Julius Elias on December 16, 1958, at the Old Seminary in Kottayam by Geevarghese Bava. This letter (referred to earlier), presented as exhibit A 20 to the honorable Supreme Court of India, found the Catholicos describing himself in the following words: 'the meek Baselious Catholicos named as Geevarghese II seated on the Throne of the East of Apostle St. Thomas'. He went on to add, 'We, for the sake of peace, in the Church, are pleased to accept Moran Mar Ignatius Yakub III as Patriarch of Antioch subject to the constitution passed by the Malankara Syrian Christian Association and now in force.'' The Patriarch's reply to the letter came four months later dated April 8, 1959, (exhibit A 23 with the Supreme Court), wherein he made clear his displeasure: '...Your use of the expression holiness with your name is not right. This expression can be used only by the Patriarchs. Your assertion that you are sitting at the Throne of St. Thomas is unacceptable. No one has ever heard of St. Thomas establishing a Throne. Similarly your assumption that yours is the Church of the East and that you are Catholicos of the East is equally untrue and unwarranted...'
The above quotation from Georgy proves the following:
1. The 1958 Kalpana of H.H. Yacoob III came to India much before the actual exchange took place on Dec.18,1958. It was signed on Dec.9 at Damascus. We all know that the negotiation here was on the draft of the "Kanthari" Kalpana. Even the exchange of letters was prolonged till midnight and several calls and drafts passed in between Chingavanam and Devalokam. It was for the 'sake of peace' that the Kanthari Kalpana* was received at the eleventh hour.
2. Patriarch was willing for an acceptance even before the 1958 SC judgment. See the 1957 Kalpana's full text in my book, Perumpilly Thirumeni Pages153-55.
3. The genuine approach for the sake of peace was subdued in the efforts to uphold the 'Kanthari' spirit of the reply by the Catholicose. The Two Kanthari were the Constitution and the the St. Thomas throne.
4. The controversial usage of the St. Thomas throne as well as the honorific 'His Holiness' were first used in the united church in this 'Kanthari' Kalpana and was challenged by the Patriarch on April 8, 1959, after a few months of time , leaving beside the delivery time of that time we can say it was challenged 'immediately'. Even though the attitude of Geevarghese II Bava was tough at the beginning but he also turned mild and we can see him using the heading paper without the controversial throne in the peace times. Then after Augen Bava ascended he was famous for not using this till the controversy took momentum.
1. Georgy himself proved that the St. Thomas throne was nothing beyond the honorific throne of the episcopate. But the claims in the controversy was that it was equal to the Apostolic Petrine office and the symbol of autocephaly.
2. The claim of the apostolicity of the St. Thomas throne is negated by its alleged founder Mor Abdul Messiah himself. He says that he established the Mafrian in his authority as the shepherd on the Petrine throne. (Actually he was deposed and hence had no authority)
3.In the Church we have thrones for the Patriarch, Catholicose and the Metropolitan/Episcopa. The sees of these thrones confine to their authority in the Church and its specific jurisdiction. Thrones of authority over the divisions of the universal Church were established by the Holy Ecumenical Synods. To our tradition it cannot be declared unilaterally.
4.Apostolic throne of St. Peter is upheld for the canonical validity of its succession and priesthood. The alleged Abdul Messiah document points to this concept. The Metran faction leaders of that time also held the same view.
5. The reference by Bar Ebraya says nothing about the apostolic succession in India or to the succession line of the St. Thomas Throne anywhere.
6.The documents cited by Georgy before the 1912 incident are merely in honor of the episcopate in Malankara and in the honor of the founder of the Malankara Church. It has nothing to do with the present day claims of the autocephalous throne in Malankara.
7. The alleged succession of the Persian Nestorian Catholicate has nothing to do with the 1912 Catholicate or the commonly accepted 1958/64 Catholicate in Malankara. The Malankara Catholicate is nothing but the resurgent 'moth eaten' (Sorry to repeat this quote from Georgy) Mafrianate in the Syrian Orthodox Church. The list of the Catholicoi succession published by the IOC is also of the Mafrianate line. During the period of the unity and peace this title was not used by Augen Bava. The working committee of the united church has resolved and requested Augen Bava not to use this title. The united church never agreed to the use of this title from 1958 onwards.
8. The Mafrianate has no apostolic succession from our Patron Saint St. Thomas. All the Mafrians in history were second in rank to the Patriarch of Antioch and were usually consecrated by the Patriarchs.
9.The biblical reference to the throne promised to St. Thomas and other apostles are the thrones to share the glory of the judgment of the 12 tribes of Israel.
10. Absence of any reference to the St. Thomas throne and autocephaly in the 1934 constitution itself points that these issues are not part of the original ideology of its factional founders. It only refers to the declaration that the primate of this church is the Patriarch of Antioch and also declares that the Church is a division of the Eastern Orthodox Church. The 1934 constitution itself speaks against the claims of autocephaly and St. Thomas throne.
'Kanthari' is the very hot but small chilly of Kerala. When Mor Julius jokingly
remarked that the Kalpana handed over to the Catholicose was 'heavy' and the one
given to him by the Catholicose was comparatively very 'light', the Catholicose
replied that it is like the Kanthari, small but very hot. This statement proves
even in the argument of Georgy.
Next: Myth 9: HH Patriarch Abdul Masih II didn't have the authority to consecrate a Catholicos for Malankara in 1912.
Previous: Myth 7: The Patriarch enjoys only spiritual overlordship and has no temporal authority over the Indian wing of the Syriac Church.
Faith Home | History | Inspirational Articles | Essays | Sermons | Library - Home | Baselios Church Home
A service of St. Basil's Syriac Orthodox Church, Ohio
Copyright © 2009-2020 - ICBS Group. All Rights Reserved. Disclaimer
Website designed, built, and hosted by International Cyber Business Services, Inc., Hudson, Ohio | <urn:uuid:06d91f1f-251e-4593-bb3a-27161bfe7201> | CC-MAIN-2021-21 | http://www.malankaraworld.com/Library/Faith/faith_myths-8.htm | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00455.warc.gz | en | 0.963616 | 8,153 | 2.71875 | 3 |
In its first 75 years, North Omaha was home to no fewer than four Jewish synagogues, at least 15 Catholic parishes and more than 150 Protestant congregations. These churches reflected the community’s diversity, including ethnic churches where only Italian, German, Norwegian, Danish and other languages were spoke. Within 25 years of Omaha’s founding, there were also several Black churches in the neighborhood north of downtown. This is a history of churches in North Omaha.
How They All Began
In order to see where we’re going, I think its important to understand where churches have been.
The first church in Omaha was the Methodist Church opened by a circuit rider from Council Bluffs, Iowa, named Peter Cooper in 1854, the year the city was founded. As the town grew north in the next decades, churches moved that way, too. Downtown Omaha was the original Omaha, and houses and churches were originally there.
A lot of churches opened in North Omaha between the 1860s and 1900. They moved into the community because more homes were being built there. As houses and apartments were being built for working class, middle class and upper class people throughout North Omaha, churches were built to serve people from different ethnic groups and races, and later different social classes.
Ethnic Churches in North Omaha
European ethnic groups started moving into North Omaha in the 1860s. First, Irish people built their homes on the north side of Omaha; then Italians and Scandinavians moved in, along with Black people moving from the South. When each of these groups moved into the community, they brought their languages, histories, cultural practices and religious heritage.
One example of an ethnic group in North Omaha were the Swedes, who moved in en masse starting in the 1860s. Coming straight from Sweden, they originally only spoke their home language, worked where other Swedes worked, and often spent their money where their fellow countrymen owned businesses. They also started their own churches. Danes built their own churches, too. The Norwegian-Danish Evangelical Lutheran Church was built at North 26th and Hamilton Streets in the early 1880s. The Danish Methodist Church was located in the Near North Side neighborhood at 1713 North 25th Street, and the First Danish Baptist Church was at 2511 Decatur in 1888. St. Mark’s Evangelical Lutheran was a Danish congregation organized in 1886 that built a church at North 21st and Burdette Streets in 1887. In 1907, they built a new church at North 20th and Burdette Streets, the church’s home for another four decades. Pella Lutheran Church was a Danish congregation started in 1886, eventually building a church at North 30th and Corby Streets in 1894, where they remained until the 1930s. The first Swedish Methodist Church met in Omaha starting in 1869, and in 1894, the first Western Swedish Conference met in Omaha. This was not a separate church, but part of the mainline Methodist Episcopal Church.
Founded to serve the surrounding Irish neighborhood in 1883, North Omaha’s Holy Family Catholic Church is the oldest existing Catholic church building in Omaha. Becoming an Italian congregation after that, and then serving the entire community for the last 75 years, today Holy Family continues to stand strong at N. 18th and Izard Streets.
Germans opened several churches in North Omaha. One denomination that doesn’t exist anymore was called the Evangelical Association, and they opened two German churches: Zion’s Church, which built a structure at North 25th and Caldwell Streets in 1888; and Salem Church at North 18th and Cuming Streets in 1904. The German Immanuel Baptist Church opened at 26th and Seward in 1888 and later moved to 24th and Miami. The Church of the Brethren (Dunkard Society) built a church at 2123 Miami Street in 1915. In 1887, St. Paul’s German-English Lutheran opened at north 28th and Parker Streets. It was demolished by the Easter Sunday Tornado of 1913. The First German Presbyterian Church opened at North 18th and Cuming Streets in 1882. By 1910 they had built another church at 20th and Willis; it became known as Bethany Presbyterian Church and remained there for several decades.
North Omaha was also home to the First Danish Baptist Church starting in 1884. They eventually built a church at 2511 Decatur Street in the Long School neighborhood, and the congregation kept operating into the 1910s. Located on the site of the present-day Blackburn High School, the German Baptist Church was at North 26th and Seward Street from 1886 through the 1910s.
Black Churches in North Omaha
The first Black man in the Omaha area was a slave named York. He was owned by Meriwether Lewis on the 1804-05 Corps of Discovery Expedition. Blacks moved to Omaha from the South starting in the 1860s, and today Black churches are a shining beacon of hope, positivity and empowerment throughout the community.
St. John African Methodist Episcopal Church was organized in 1867. Five people first met at a house at 9th and Capitol, and then built a church at 18th and Webster in 1867, where they stayed for almost 50 years. It continues as a powerful institution in the Near North Side at 2402 N. 22nd Street in a beautiful building. St. John’s grew three other A.M.E. congregations in Omaha: Bethel AME at North 25th and Franklin Streets; Allen Chapel in South Omaha; and Primm Chapel, formerly at North 18th and Emmet Streets.
The largest African American church in Nebraska for decades, Zion Baptist Church in North Omaha, was founded in 1884. Located at 2215 Grant Street, its current home was designed by North Omaha’s African American architect, Clarence Wigington. Founded in 1887, the Mt. Moriah Baptist Church moved several times before 1927, when they moved to 2602 North 24th Street.
Hillside Presbyterian Church was founded by Harrison J. Pinkett in 1918. After building a church that burnt down in the 1920s and the congregation struggling for 20 years, in 1946 the Omaha Presbytery was going to close it. However, members rallied and a new building was constructed at North 30th and Ohio Streets. When that burnt down, members built a new building but outgrew it by the time it was done. Members eventually merged their congregation with Bethany Presbyterian Church and moved to North 24th Street to integrate, and they took over the old North Presbyterian Church to become Calvin Memorial Presbyterian Church. That congregation folded in the 1990s.
The Peoples’ Church was founded in downtown Omaha in 1892. It moved to 1708 N. 26th Street in North Omaha by the 1920s, and stayed open for several years afterwards. The Tabernacle Church of Christ Holiness opened in the 1950s at 1521 North 21st Street in the former synagogue of Beth Hamedrosh Adas Jeshuran. Founded in 1922, St. Benedict the Moor Catholic Church is Nebraska’s only Black Catholic congregation when it was opened at 2423 Grant Street in the Near North Side neighborhood. Beginning in 1913, Clair Methodist Episcopal Church served the Near North Side for several decades. Originally called Grove Methodist Church, they built at North 22nd and Seward Streets. In 1927, the congregation was renamed in honor of a local Methodist bishop, and opened a new building at North 22nd and Miami. They had purchased the former First Church of the Brethren, a German church built in 1915. Clair stayed there for 30 years until moving to the former St. Paul’s Evangelical Lutheran Church at 2443 Evans Street. The church moved to 5544 Ames Avenue in 1983 and has been there since.
St. Phillip the Deacon was an Episcopal church built in the early 1890s, and was located at 1119 N. 21st Street. As Omaha’s segregated Black Episcopal church, St. Philip the Deacon grew and built a new structure on Binney Street in Kountze Place in 1949. In 1986, they joined St. John’s Episcopal Church to form an integrated congregation called the Episcopal Church of the Resurrection.
Along with many historic Black churches, North Omaha is also home to many newer African American congregations, too.
Growing Churches in Growing Neighborhoods
Before 1900, almost all of the mainline denominations had congregations in North Omaha. The Lutherans were the largest denomination; Methodists, Episcopalians, Baptists and Congregationalists each had churches in North Omaha. In the century since then, many other congregations and non-denominational churches have emerged across the community. Here is some of the history of growing churches in North Omaha’s history.
Trinity Methodist Episcopal Church built a new building in Kountze Place in the 1890s, rebuilt it after the 1913 Easter Tornado, and moved to another North Omaha neighborhood in the 1940s. Plymouth Congregational Church, built in 1915 at 1802 Emmet Street, was sold to Primm Chapel African Methodist Episcopal Church in 1961. Primm Chapel closed at some point in the 1980s, and is now home to the Second Baptist Church.
Trinity Lutheran Church was started as a “child church” of the city’s Immanuel Lutheran Church in 1915, and was specifically called Trinity English Church because that was the only language allowed. Located at 6340 N. 30th St, it celebrated its 100 year anniversary this year! Near to Trinity is the Episcopal Church of the Resurrection, which was opened as St. John Episcopal Church in 1927. Another neighbor was built in 1923. Miller Park Presbyterian Church was located at North 30th and Huntington Avenue, next door to Trinity. Today, it is home to the World Fellowship Christian Center.
One of the strongest Black churches in Omaha today is Salem Baptist Church, which has become vital for all of North Omaha. Salem was founded in 1922 as an offshoot of an Interdenominational Church that was located near 26th and Franklin Streets in the Near Northside Neighborhood. In 2000, the congregation finished building a beautiful new church where the Hilltop Housing Projects were located. Omaha’s Second Presbyterian Church was originally opened at North 24th and Nicholas Streets.
Churches were among the first establishments founded in Florence in 1854. St. Philip Neri Church, located at 8200 North 30th Street, has a long history in the Florence neighborhood of North Omaha. Established at N. 31st and Grebe in 1904, the parish opened a school in 1922.
Established to serve several neighborhoods in what was regarded as west Omaha at the time, the first Saint Cecilia Parish church was constructed on a high ridge to the west of the Walnut Hill neighborhood, at present-day North 40th and Burt Streets. A tiny wooden building was finished in 1888 and served for several decades. It was demolished in a windstorm in 1917.
In 1902, in a small chapel on N. 36th and Charles Streets that is still located there, a new congregation called Zion Lutheran Church started. For a decade, all of the services were held in German. The church built a new huge new building at N. 36th and Lafayette in 1919. However, in 1936 it was forced to merge with Trinity Lutheran church because of the Great Depression.
Becoming Augustana Lutheran Church, today the congregation is housed in a 1951 building in the Walnut Hill neighborhood. In 1966, a documentary about a Augustana was nominated for an Oscar award. Called A Time for Burning, it featured a young Ernie Chambers speaking plainly about race and racism in Omaha. In 2005 the film included in the National Film Registry of the United States Library of Congress.
Catholic Parishes in North Omaha
There have been a LOT of Catholic parishes in the history of North Omaha.These are the current and former Catholic high schools in North Omaha. They include have included St. John’s parish; Holy Name; Blessed Sacrament; St. Cecilia Cathedral, and; many others.
The Notre Dame Academy and Convent was built in North Omaha’s Florence neighborhood in the 1920s. Its nuns were Czechs who were intended to serve Omaha’s large Czech community. After identifying their need to serve Omaha, the Sisters of Notre Dame bought Father Flanagan’s Seven Oaks Farm, and hired architects to design a large, E-shaped building to serve as a school. The Notre Dame Academy closed in the 1970s, and today the building serves as housing for the elderly.
Opening in 1919, the Holy Name Catholic Church is located at 2901 Fontenelle Boulevard in North Omaha. In addition to their church, they host a school that serves students from across the city. St. Bernard Catholic Church began as a white frame church at 61st and Miami Streets in 1905, with a parish consisting of “town-folk” from Benson and its surrounding farmers. Today it is located at 3601 N. 65th Street, and supports a school also.
Blessed Sacrament Catholic Church was founded in 1919 on the northwest corner of North 30th and Curtis Avenue. The first church on the site was a wooden building that served as a church at Fort Omaha. Moved from there to the new site, the church built its first permanent structure in 1921. After operating a school, convent and outreach programs for years, the church closed in 2014.
There have been literally dozens of Catholic schools located in North Omaha through the years.
In 1897, Herman Kountze donated land to the Sacred Heart Catholic Church to relocate their church to Kountze Place. They quickly moved their old church from N. 26th and Sprague to N. 24th and Binney, but their old building stood on the site a few years. In 1902, popular Omaha architects Fischer and Lawrie designed a grand gothic, traditionally-laid out building. The church also hosts a school across the street, and a rectory next door.
Former Churches in North Omaha
So many churches have started, thrived, emptied out and closed throughout North Omaha that I can’t possibly include all of them here. However, here are some of the ones I’ve found. If you know of a former North Omaha church that should be here, please share the information with me in the comments section below.
On the corner of North 24th and Ogden Avenue sits the former Pearl Memorial United Methodist Church. This building opened in 1906, and closed in the 1990s.
Other historic churches in North Omaha included Immanuel Baptist Church in Kountze Place at North 24th and Pinkney Streets. Our Savior Lutheran Church was at 1001 North 30th Street. Today, that building is home to St. Matthew’s Mission Baptist Church. The integrated congregation of Hope Lutheran Church bought Pella Lutheran Church’s building at 2723 North 30th Street in 1946, and stands there today. Asbury United Methodist Church was at 2319 Ogden Street for more than 50 years.
One denomination went above all others in its commitment to North Omaha. In the early 1900s, the Omaha Presbyterian Theological Seminary was opened in Kountze Place. Its goal was to educate Presbyterian ministers for growing rural populations in Nebraska, Iowa, South Dakota, North Dakota and Kansas. It closed permanently in the 1940s, and the building was demolished in the 1970s.
Another Black congregation in North Omaha was St. Paul’s Presbyterian Church at North 26th and Seward Streets. Organized in 1920 by community leader Rev. Russell Taylor (1871-1933), it was an important location for the Omaha Civil Rights movement in the 1920s. St. Paul’s was burnt down in 1930 and not reorganized.
With a beautiful building constructed in 1919, the St. Therese of the Child Jesus Catholic Church at 5314 N. 14th Avenue was a bastion of East Omaha for more than 75 years. It closed and merged with Sacred Heart.
St. John’s Episcopal Church was founded in 1885 at North 26th and Franklin Streets. A white-only congregation, they moved to North 25th and Browne by 1900. In 1927, their new building opened at 3004 Belevedre Boulevard. After floundering for a decade, St. John’s merged with St. Phillip the Deacon Episcopal Church, a segregated Black church, in 1987 to form the Episcopal Church of the Resurrection.
The First Universalist Church was started in a social hall and built a large, fine church at 1823 Lothrop Street in the Kountze Place neighborhood in the 1890s. In 1906, the Hartford Memorial Church of the Bretheren bought the building, and in the 1950s they sold it to the congregation that became Rising Star Baptist Church. They are there today.
Finally, the Holy Angels Catholic Church and School was located on the northeast corner of N. 27th and Fowler Avenue in North Omaha. A larger church was built by 1920, but because of white flight both the church and school dwindled steadily in numbers from the mid-1960s through the 1970s. It also merged with Sacred Heart parish, and the entire complex was demolished in 1980. Today, the site of the church abuts the North Freeway / Sorenson Parkway interchange.
White flight drove many churches away from North Omaha. Either by following their flocks or because of dwindling numbers of congregants, several churches established in the Near North Side and Kountze Place neighborhoods moved westward to follow their congregants.
One such church is Covenant Presbyterian Church which began as Bedford Place Presbyterian Church in 1893. In 1904 the name changed to Church of the Covenant, and in 1906 the church moved to North 27th and Pratt Street, and in 1957, it moved to North 51st and Ames Avenue. They eventually moved to west Omaha. Another example is St. Paul Lutheran Church. In 1887, St. Paul German Lutheran Church was started at North 26th and Hamilton Streets. Just five years later, in 1892, the church moved to a new building at North 28th and Parker Streets. When that church was demolished by the Easter Sunday Tornado of 1913, the congregation built a new church at North 25th Avenue and Evans Street in the Kountze Place neighborhood. After adding a school in 1930, the church remained here until 1958. They moved to North 50th and Grand Avenue in 1959, and built a new building there in 1966, where they’ve remained since.
Official Omaha Landmarks
Several churches in North Omaha feature notable architecture, and eleven of the community’s churches are designated as official Omaha Landmarks or listed on the National Register of Historic Places, or NRHP.
Originally called North Presbyterian Church, then Calvin Memorial Presbyterian Church, and now home to the Church of Jesus Christ Whole Truth, the building at 3105 N. 24th Street has been noted as, “architecturally significant to Omaha as a fine example of the Neo-Classical Revival Style of architecture.” The building is listed on the NRHP and is designated as an Omaha Landmark.
The St. John’s AME Church built a proud Prairie style building designed by notable L.A. architect Frederick Stott. Its two previous buildings were located nearer to present-day downtown Omaha, with the second one designed by an African American architect in North Omaha named Clarence Wigington. The building is designated as an Omaha Landmark and listed on the NRHP.
Holy Family Church is listed on the NRHP and is designated as an Omaha Landmark. Holy Family Church is the oldest existing Catholic church building in Omaha and the oldest remaining brick church structure in the city.
Built in 1902 in Kountze Place, the Sacred Heart Catholic Church was originally an upper class celebration of Catholic influence and growth. As the neighborhood around it changed, the church morphed to serve local needs and today continues supporting a neighborhood school and several other ministries. It is both an Omaha Landmark and listed on the NRHP.
The site of the Robinson Memorial Church of God in Christ at 2318 N. 26th Street celebrates one of the strongest legacies of any church leader in Omaha history. For more than 20 years, Lizzie Robinson traveled the country on behalf of the denomination to establish new congregations. Her legacy continues today as the churches keep flourishing in their second century. The site has been designated an official Omaha Landmark.
In 1905, the Roman Catholic Diocese of Omaha broke ground on a new Saint Cecilia’s Cathedral, located at N. 40th and Burt Streets in North Omaha. Ranked as one of the United States’ ten largest cathedrals, it was designed by Thomas Rogers Kimball in the Second Spanish Colonial style. It was built on the edge of the Walnut Hill neighborhood, and took 50 years to complete construction. It is listed on the NRHP, and is designated as an Omaha Landmark.
The most recent addition to the list of North Omaha churches on the National Register of Historic Places is also the newest building. St. Richard’s Catholic School and Rectory was constructed in 1961 at 4318 Fort Street. Designed in the Mid-Century Modern style to meet its once-suburban neighborhood’s needs, the parish closed in the 2000s. Today, it serves as a senior home, youth center and social services facility.
Changing with Neighborhoods
In the late 1980s, my neighborhood grocery store became a church. Phil’s Foodway once had a store at N. 24th and Fort where my family shopped regularly after we moved to the Miller Park neighborhood. At some point, all the kids in the neighborhood started talking about the store’s closing, and sure enough, one day everyone knew they could get ice cream there cheap! I bought four half-gallon boxes for $.50 apiece and hauled them home, and Phil’s was closed after that. Within a few years, the Tabernacle of Faith Church of God In Christ opened in the old supermarket at 2404 Fort Street.
There are many newer churches serving North Omaha. Many congregations of the Church of God in Christ serve North Omaha, including Second Advent COGIC on N. 30th; Power House COGIC on Browne Street and N. 25th Avenue; and the St. Timothy COGIC at N. 24th and Himebaugh Avenue.
The Christ-Love Unity Church is at N. 29th and Ellison Avenue, and the Mount Carmel Baptist Church is located at N. 27th and Camden Avenue. Mount Olive Evangelical Lutheran Church was open in 1949, and continues serving the Minne Lusa and Florence neighborhoods at 7301 N. 28th Street today.
The church building at 2502 North 51st Street in the Benson neighborhood has an interesting and transitional history. Opened in 1929 as the First Church of the Brethren, it closed in 1965. In 1978, it became the God’s Missionary Baptist Church, and then in 2005 it opened as the Saint Vincent of Lerins Antiochian Orthodox Church.
Forever strong in their faith, North Omaha’s Christian community has many faces, names, denominations, congregations and groups. Hopefully, they’ll learn how to work together to support each other and build the community as a whole. Towards that goal, I am sharing the following directory of Christian congregations. Please let me know if you have any corrections or additions in the comments section.
North Omaha Church Directory
These are active churches in North Omaha today. Please share any corrections with me using the comments section!
Other Churches in North Omaha
- St. Vincent of Lerins Western Rite Orthodox at 2502 N 51st St
- Christ-Love Unity Church at 2903 Ellison Avenue
- Faith Deliverance Church at 2901 North 30th Street
- Episcopal Church of the Resurrection at 3004 Belvedere Blvd
- Cleaves Temple CME at 2431 Decatur Street
African Methodist Episcopal Churches in North Omaha
- Allen Chapel African Methodist Episcopal at 2842 Monroe Street
- St. John’s African Methodist Episcopal at 2402 N. 22nd Street
Apostolic Churches in North Omaha
- Apostolic Oblates at 6762 Western Avenue
- Bethlehem Apostolic at 6910 Maple Street
- Grace Apostolic at 2216 Military Ave
Assembly of God in North Omaha
- Freedom Assembly of God at 4224 N 24th Street
- Royal Assembly of God at 2864 State St
Baptist Churches in North Omaha
- Community Baptist at 8019 N. 31st Street
- Cross Road Baptist at 6068 Ames Avenue
- Jehovah Shammah Baptist at 2537 N. 62nd Street
- Karen Street Baptist at 6109 Karen Street
- Mt Moriah Baptist Church at 2602 North 24th Street
- St. Mark Baptist Church at 3616 Spaulding Street
- Pilgrim Baptist at 2501 Hamilton Street
- Salem Baptist at 3131 Lake Street
- Mount Nebo Missionary Baptist at 5501 North 50th Street
- Second Baptist at 1802 Emmet Street
- Rising Star Baptist Church at 1823 Lothrop Street
Catholic Churches in North Omaha
- Blessed Sacrament Catholic at 3020 Curtis Street (closed)
- Holy Family Catholic at 1715 Izard Street
- Holy Name Catholic at 3014 N. 45th Street
- Holy Angels Catholic at 2720 Fowler Avenue (closed)
- Mother of Perpetual Help Catholic at 5215 Seward Street
- Sacred Heart Catholic at 2218 Binney Street
- St. Benedict the Moor Catholic at 2423 Grant Street
- St. Bernard Catholic at 3601 N. 65th Street
- St. Cecilia Catholic at 701 N. 40th Street
- St. John’s Parish Catholic at 2500 California Plaza
- St. Philip Neri Blessed Sacrament Parish at 8201 North 30th Street
- St. Richard Catholic at 4320 Fort Street (closed
- St. Therese of the Child Jesus Catholic at 5314 N. 14th Avenue (closed)
Christian Churches in North Omaha
- Benson Christian at 2704 N. 58th Street
- Christian Discipleship Christian at1823 Lake Street
- City Church Christian at 6051 Maple Street
- Florence Alliance Christian at 8702 N. 30th Street
- Florence Christian at 7300 Northridge Drive
- Fort Street Christian at 5116 Terrace Drive
- Freedom Christian at 4606 N. 56th Street
- Northside Family Christian at 4102 Florence Boulevard
- Pilgrim Christian at 2818 N. 70th Street
- Shiloh Christian at 1501 N. 33rd Street
- Sonrise Christian at 4623 N. 54th Circle
- Benson Christian at 2704 N. 58th Street
- Christ Temple Christian at 2124 N. 26th Street
Church of Christ in North Omaha
- Church Of Christ at 5922 Fort Street
- Church Of Christ at 5118 Hartman Avenue
- Church of Christ at 4628 Grand Avenue
- Faith Temple Church of Christ at 3049 Curtis Avenue
- Friends Of Christ Evangelical Church of Christ at 3208 Corby Street
- Jesus Christ Church of Christ at 1517 N. 30th Street
- New Life Church of Christ at 1712 N. 24th Street
- Tabernacle Church of Christ at 1521 N. 25th Street
- Antioch Church of Christ at 3654 Miami Street
Church of God in Christ in North Omaha
- Cathedral of Love Church of God in Christ at 2816 Ames Avenue
- Church of God in Christ at 2025 N. 24th Street
- International Church of God in Christ at 4628 Grand Avenue
- Church Of The Living God Church of God in Christ at 2029 Binney Street
- Church Of The Living God Church of God in Christ at 3805 Bedford Avenue
- Faith Temple Church of God in Christ at 3049 Curtis Avenue
- Faith Temple Church of God in Christ 2108 Emmet Street
- Freedom Church Assembly Church of God in Christ at 4430 Florence Blvd
- Gethsemane Church of God in Christ at 5720 N. 24th Street
- New Bethel Church of God in Christ at 1710 N. 25th Street
- New Life Church of God in Christ at 1712 N. 24th Street
- Power House Church of God in Christ at 2553 Browne Street
Church of Jesus Christ of Latter-Day Saints in North Omaha
- Church of Jesus Christ of Latter-day Saints – Florence Ward at 5217 North 54th Street
- Winter Quarters Nebraska Temple Church Of Jesus Christ Of Latter-day Saints at 8283 N. 34th Street
Lutheran Churches in North Omaha
- American Lutheran at 4140 N. 42nd Street
- Augustana Lutheran at 3647 Lafayette Avenue
- Bethany Lutheran at 5151 Northwest Radial Highway
- Deaf Bethlehem Lutheran at 5074 Lake Street
- Garden-Gethsemane Wisconsin Evangelical Lutheran at 4543 Camden Avenue
- Hope Lutheran at 2723 N. 30th Street
- Immanuel Lutheran at 2725 N. 60th Avenue
- Lutheran Metropolitan Ministries at 4205 Boyd Street
- Mount Olive Lutheran at 7301 N. 28th Avenue
- Northside Community Lutheran at 1511 N. 20th Street
- Lutheran Church of Our Redeemer at 4757 N. 24th Street
- St. John’s Lutheran Church at 11120 Calhoun Road
- Shepherd Of The Hills Lutheran at 6201 N. 60th Street
- St Paul Lutheran at 5020 Grand Avenue
- Trinity Lutheran Church at 6340 North 30th Street
United Methodist Churches in North Omaha
- Asbury United Methodist at 5226 N. 15th Street (closed)
- Clair Memorial United Methodist at 5544 Ames Avenue
- Olive Crest United Methodist Church at 7180 North 60th Street
- Pearl Memorial United Methodist, originally at 757 N. 24th Street then at 2319 Ogden Street (closed)
- Trinity United Methodist at 6001 Fontenelle Boulevard
- Florence Methodist Church, Bluff Street (closed)
Presbyterian Churches in North Omaha
- Benson Presbyterian at 5612 Corby Street
- Clifton Hill Presbyterian, N. 45th and Grant Streets (closed, demolished)
- Covenant Presbyterian Church at N. 27th and Pratt Streets (closed, demolished)
- Florence Presbyterian at 8314 N. 31st Street
- Harvest Community Presbyterian at 4932 Ohio Street (closed)
- Lowe Avenue Presbyterian at 1023 N. 40th Street (closed)
- Miller Park Presbyterian at 3020 Huntington Avenue (closed)
- Mount View Presbyterian at 5308 Hartman Avenue
- New Life Presbyterian at 4060 Pratt Street
- St. Paul Presbyterian at 2531 Seward Street (closed)
- Calvin Memorial Presbyterian Church (closed)
- Hillside Presbyterian Church in North Omaha (closed)
You Might Like…
- Directory of Historic Churches in North Omaha
- History of Black Churches in Omaha
- History of Churches in Florence, Nebraska
- Pearl Memorial United Methodist Church
- Mount Moriah Baptist Church
- Holy Family Catholic Parish
- Calvin Memorial Presbyterian Church
- St. Phillip the Deacon Episcopal Church
- St. John’s AME Church
- Hillside Presbyterian Church
- Bethel AME Church
- Cleaves Temple CME Church
- New Bethel Church of God in Christ
- Mt. Calvary Community Church
- St. Benedict the Moor Catholic Parish and the Bryant Resource Center
- Hope Lutheran Church
- St. John’s Catholic Parish
- Zion Baptist Church
- St. Clare’s Monastery / Starlight Chateau
- Omaha Presbyterian Theological Seminary
- A History of Covenant Presbyterian Church
- Immanuel Baptist Church
- History of North Omaha’s Jewish Community
- History of Catholic Schools in North Omaha | <urn:uuid:2ed1ff17-2319-4557-8219-a0cd6c62eb0e> | CC-MAIN-2021-21 | https://northomahahistory.com/2015/12/21/a-history-of-churches-in-north-omaha/?like_comment=3404&_wpnonce=9fbf875937 | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988858.72/warc/CC-MAIN-20210508091446-20210508121446-00256.warc.gz | en | 0.970707 | 6,608 | 3.140625 | 3 |
adaptation Actions taken to help communities and ecosystems better cope with potential negative effects of climate change or take advantage of potential opportunities.
adaptive capacity The inherent ability of a system (e.g., ecosystem or social system) to adapt to a changing environment; for example, a plant species that can survive a broader range of temperatures may have greater adaptive capacity compared to a plant that can only tolerate a narrow range of temperatures.
agribusiness An industry engaged in the production operations of a farm, the manufacture and distribution of farm equipment and supplies, and the processing, storage, and distribution of farm commodities.
agronomy The science of crop production and soil management.
annual streamflow The cumulative quantity of water for a period of record, in this case a calendar year.
anthropogenic Originating in human activity.
aquifer A body of permeable rock that can contain or transmit groundwater.
atmospheric carbon dioxide (CO2) The amount of CO2 in Earth’s atmosphere. Although CO2 makes up only a small proportion of Earth’s atmosphere, it is a potent greenhouse gas and directly related to the burning of fossil fuels. Atmospheric CO2 is now at its highest level in an estimated 3 million years, and rising concentrations are projected to increase global average temperatures through the greenhouse effect.
attribution Identifies a source or cause of something.
basis The difference between the futures market price and the local price for an agricultural commodity, measured in dollars per bushel.
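Because basis is simple arithmetic, a minimal sketch (in Python) may help; the sign convention follows the glossary’s own wording of futures price minus local price, and the prices shown are hypothetical, not market data.

```python
def basis(futures_price: float, local_cash_price: float) -> float:
    """Basis = futures market price minus local cash price ($/bushel),
    per the glossary's wording; some references use the opposite sign."""
    return futures_price - local_cash_price

print(basis(6.40, 6.10))  # ~0.30 $/bushel for these hypothetical prices
```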
base flow The portion of streamflow that is not runoff and results from seepage of water from the ground into a channel slowly over time. The primary source of running water in a stream during dry weather.
basin A drainage basin or catchment basin is an area of land where all surface water from rain, melting snow, or ice converges to a single point at a lower elevation, usually the exit of the basin, where the waters join another body of water, such as a river, lake, reservoir, estuary, wetland, sea, or ocean.
biocontrol Short for biological control; the reduction in numbers or elimination of pest organisms by interference with their ecology (as by the introduction of parasites or disease).
biodiversity The variety of all native living organisms and their various forms and interrelationships.
biomass The total amount of organic matter present in an organism, population, ecosystem, or given area.
bushel A unit for measuring an amount of fruit and grain that is equal to about 35.2 liters in the US.
C3 and C4 plants Plants use different photosynthetic pathways (termed C3 photosynthesis or C4 photosynthesis). C4 plants evolved as an adaptation to high-temperature, high-light conditions. Growth rates of C4 plants increase more under hot, high-CO2 conditions than those of C3 plants, and C4 plants exhibit less water loss.
climate versus weather The difference between weather and climate is a measure of time. Weather is what conditions of the atmosphere are over a short period of time, and climate is how the atmosphere behaves over relatively long periods of time (i.e., multiple decades).
climate change Changes in average weather conditions that persist over multiple decades or longer. Climate change encompasses both increases and decreases in temperature, as well as shifts in precipitation, changing risk of certain types of severe weather events, and changes to other features of the climate system.
climate oscillation See teleconnections.
commercial crops Agricultural crops that are grown for sale to return a profit, purchased by parties separate from a farm (note: not all commercial crops are commodity crops).
commodity crops Crops that are traded, and typically include crops that are non-perishable, easily storable, and undifferentiated.
commodity futures Buying or selling of a set amount of a commodity at a predetermined price and date.
confined aquifer A confined aquifer is an aquifer below the land surface that is saturated with water. Layers of impermeable material are both above and below the aquifer, causing it to be under pressure so that when the aquifer is penetrated by a well, the water will rise above the top of the aquifer.
cow-calf operations Livestock operations in which a base breeding herd of mother cows and bulls are maintained. Each year’s calves are sold between the ages of 6 and 12 months, along with culled cows and bulls, except for some breeding herd replacements.
crop rotation System of cultivation where different crops are planted in consecutive growing seasons to maintain soil fertility.
cultivar A contraction of cultivated variety. It refers to a plant type within a particular cultivated species that is distinguished by one or more characteristics.
direct effect A primary impact to a system from shifts in climate conditions (e.g., temperature and precipitation), such as direct mortality to species from increased heat extremes.
direct runoff The runoff entering stream channels promptly after rainfall, exclusive of base flow. Direct runoff equals the volume of rainfall excess (i.e., total precipitation minus losses).
disturbance regime The frequency, severity, and pattern of events that disrupt an ecosystem or community; for example, a forest’s fire disturbance regime may be the historical pattern of frequent, low-intensity fires versus infrequent, high-severity fires.
drought For this report, drought is categorized in four ways: 1) meteorological drought, defined as a deficit in precipitation; 2) hydrological drought, characterized by reduced water levels in streams, lakes, and aquifers; 3) ecological drought, defined as a prolonged period over which an ecosystem’s demand for water exceeds the supply; and 4) agricultural drought, commonly understood as a deficit in soil moisture.
dryland farming A system of producing crops in semiarid regions (usually with less than 20 inches [0.5 m] of annual rainfall) without the use of irrigation.
El Niño-Southern Oscillation (ENSO) A periodic variation in wind and sea-surface temperature patterns that affects global weather; El Niño (warming phase where sea-surface temperatures in the eastern Pacific Ocean warm) generally means warmer (and sometimes slightly drier) winter conditions in Montana. In contrast, La Niña (cooling phase) generally means cooler (and sometimes wetter) winters for Montanans. Each phase lasts approximately 6-18 months, and the system oscillates between the two phases approximately every 3-4 years.
ensemble of general circulation models (GCMs) Succinctly: the use of many different forecast models to generate a projection, with the outputs synthesized into a single score or average. This type of forecast significantly reduces errors in model output and enables a level of certainty to be placed on the projections. More broadly: Rather than relying on the outcome of a single climate model, scientists run ensembles of many models. Each model in the ensemble plausibly represents the real world, but because the models differ somewhat they produce different outcomes. Scientists analyze the outputs (e.g., projected average daily temperature at mid century) over the entire ensemble. Those analyses provide both the projection of the future resulting from the ensemble of models and define the level of certainty that should be placed on that projection.
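As an illustration of the idea, the following minimal sketch (Python with NumPy) averages projections from three placeholder models and uses the across-model standard deviation as a rough measure of agreement; the model names and values are invented for illustration only.

```python
import numpy as np

# Hypothetical mid-century temperature projections (degrees F) from three
# models, one value per future period; names and values are placeholders.
ensemble = {
    "model_a": np.array([48.2, 48.9, 49.5]),
    "model_b": np.array([47.8, 48.4, 49.0]),
    "model_c": np.array([49.1, 49.8, 50.6]),
}

stacked = np.vstack(list(ensemble.values()))  # shape: (n_models, n_periods)
ensemble_mean = stacked.mean(axis=0)          # the synthesized central projection
ensemble_spread = stacked.std(axis=0)         # across-model spread (a certainty gauge)

print(ensemble_mean, ensemble_spread)
```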
ephemeral stream A stream that flows only briefly during and following a period of rainfall in the immediate locality.
evaporation The change of a liquid into a vapor at a temperature below the boiling point. Evaporation takes place at the surface of a liquid, where molecules with the highest kinetic energy are able to escape. When this happens, the average kinetic energy of the liquid is lowered and its temperature decreases.
evapotranspiration The combined effect of evaporation and transpiration (by plants) of water, which is one of the most important processes driving the hydrologic cycle. Evapotranspiration is often analyzed in two ways, as potential evapotranspiration, which is a measure of demand for water from the atmosphere regardless of how much water is available, and actual evapotranspiration, which is how much water is actually used by plants and evaporated from water surfaces. Generally, actual evapotranspiration is driven by water availability, solar radiation, and plant type, but also affected by wind and vapor pressure. Transpiration is affected by vegetation-related factors such as leaf area and stomatal conductance, the exchange of CO2 and water vapor between leaves and the air.
fallow Cultivated land that is allowed to lie idle during the growing season; or to plow, harrow, and break up (land) without seeding to destroy weeds and conserve soil moisture.
feeder cattle Growing beef cattle between the calf stage and sale to finishing operations.
fire behavior The manner in which wildfire ignites and spreads, and characterizing the burning conditions within a single fire.
fire regime The frequency, severity, and pattern of wildfire.
fire risk The likelihood of a fire ignition.
fire severity The magnitude of effects from a fire, usually measured by the level of vegetation or biomass mortality or the area burned.
flood An overflowing of a large amount of water beyond its normal confines, especially over what is normally dry land.
flood plain An area of low-lying ground adjacent to a river, formed mainly of river sediments and subject to flooding.
frost days The annual count of days where daily minimum temperature drops below 32°F (0°C).
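Frost days are a straightforward threshold count; a minimal sketch (Python with NumPy), assuming a year of daily minimum temperatures in degrees F (the five placeholder values stand in for a full 365-day record):

```python
import numpy as np

# Daily minimum temperatures (degrees F) for one calendar year; placeholders here.
daily_tmin_f = np.array([28.0, 31.5, 33.0, 40.2, 30.9])

frost_days = int(np.sum(daily_tmin_f < 32.0))  # days with Tmin below freezing
print(frost_days)  # 3 for these placeholder values
```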
futures trading An agreement between two parties, one who sells and agrees to deliver, and one who buys and agrees to accept delivery of, a certain kind, quality, and quantity of product during a specified delivery month at a specified price. More simply, a contract to buy specific quantities of a commodity at a specified price, with delivery set at a specified time in the future.
general circulation models (GCMs) Numerical models representing physical processes in the atmosphere, ocean, cryosphere, and land surface. They are the most advanced tools currently available for simulating the response of the global climate system to increasing greenhouse gas concentrations.
grain filling The period of wheat development from pollination to seed production.
greenhouse gas A gas in Earth’s atmosphere that absorbs and then re-radiates heat from the Earth and thereby raises global average temperatures. The primary greenhouse gases in Earth’s atmosphere are water vapor, carbon dioxide, methane, nitrous oxide, and ozone. Earth relies on the warming effect of greenhouse gases to sustain life, but increases in greenhouse gases, particularly carbon dioxide from the burning of fossil fuels, can increase average global temperatures over historical norms.
greenhouse gas emissions The discharge of greenhouse gases, such as carbon dioxide, methane, nitrous oxide and various halogenated hydrocarbons, into the atmosphere. Combustion of fossil fuels, agricultural activities, and industrial practices contribute to the emissions of greenhouse gases.
green manure Crops grown to be incorporated into the soil to increase soil quality, fertility and structure.
global warming The increase in Earth’s surface air temperatures, on average, across the globe and over decades. Because climate systems are complex, increases in global average temperatures do not mean increased temperatures everywhere on Earth, nor that temperatures in a given year will be warmer than the year before (which represents weather, not climate). More simply: global warming describes a gradual increase in the average temperature of Earth’s atmosphere and oceans, a change that is altering Earth’s climate over the long term.
groundwater Water held underground in the soil or in pores and crevices in rock.
growing degree-days A weather-based indicator for assessing crop development. It is a measure of heat accumulation used by crop producers to predict plant and pest development rates, such as the date that a crop reaches maturity.
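The glossary does not specify a formula, so the sketch below (Python with NumPy) uses one common formulation: the daily mean of Tmax and Tmin minus a base temperature, accumulated over days, with Tmax capped at an upper threshold. The 50°F base and 86°F cap are assumptions often used for corn; other crops use other thresholds, and some variants also floor Tmin at the base.

```python
import numpy as np

def growing_degree_days(tmax_f, tmin_f, base_f=50.0, cap_f=86.0):
    """Accumulate daily GDD: mean of (capped) Tmax and Tmin, minus the base.
    Base/cap defaults are common corn values, assumed for illustration."""
    tmax = np.minimum(np.asarray(tmax_f), cap_f)   # cap very hot days
    daily = (tmax + np.asarray(tmin_f)) / 2.0 - base_f
    return float(np.sum(np.maximum(daily, 0.0)))   # cool days contribute zero

print(growing_degree_days([88.0, 75.0, 60.0], [60.0, 52.0, 41.0]))  # 37.0
```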
hardiness zone A geographically-defined zone in which a specific category of plant life is capable of growing, as defined by temperature hardiness, or ability to withstand the minimum temperatures of the zone. The zones are based on the average annual extreme minimum temperature during a 30-yr period in the past, not the lowest temperature that has ever occurred in the past or might occur in the future.
human agency The capacity possessed by people to act of their own volition.
hydrograph A hydrograph is a graph showing the rate of flow (discharge) versus time past a specific point in a river, or other channel or conduit carrying flow. The rate of flow is typically expressed as cubic feet per second, CFS, or ft3/s (the metric unit is m3/s).
hydrologic cycle The sequence of conditions through which water passes from vapor in the atmosphere through precipitation upon land or water surfaces and ultimately back into the atmosphere as a result of evaporation and transpiration.
hydrology The study of water. Hydrology generally focuses on the distribution of water and interaction with the land surface and underlying soils and rocks.
indirect effect A secondary impact to a system from a change that was caused by shifting climate conditions, such as increased fire frequency, which is a result of drier conditions caused by an increase in temperature.
infiltration The movement of water from the land surface into the soil.
interception The capture of precipitation above the ground surface, for example, by vegetation or buildings.
IPCC SRES Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios.
irrigation Application of water to soil for the purpose of plant production.
legume Any of a large family (Leguminosae, syn. Fabaceae, the legume family) of dicotyledonous herbs, shrubs, and trees having fruits that are legumes or loments, bearing nodules on the roots that contain nitrogen-fixing bacteria, and including important food and forage plants (such as peas, beans, or clovers).
metrics Quantifiable measures of observed or projected climate conditions, including both primary metrics (for example, temperature and precipitation) and derived metrics (e.g., projected days over 90°F [32°C ] or number of consecutive dry days).
microclimate The local climate of a given site or habitat varying in size from a tiny crevice to a large land area. Microclimate is usually, however, characterized by considerable uniformity of climate over the site involved and relatively local when compared to its enveloping macroclimate. The differences generally stem from local climate factors such as elevation and exposure.
mitigation Efforts to reduce greenhouse gas emissions to, or increase carbon storage from, the atmosphere as a means to reduce the magnitude and speed of onset of climate change.
model A physical or mathematical representation of a process that can be used to predict some aspect of the process.
organic A crop that is produced without antibiotics, growth hormones, most conventional pesticides, petroleum-based or sewage sludge-based fertilizers, bioengineering, or ionizing radiation. USDA certification is required before a product can be labeled organic.
oscillation A recurring cyclical pattern in global or regional climate that often occurs on decadal to sub-decadal timescales. Climate oscillations that have a particularly strong influence on Montana’s climate are the El Niño-Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO).
Pacific Decadal Oscillation (PDO) A periodic variation in sea-surface temperatures that is similar to El Niño-Southern Oscillation, but has a much longer duration (approximately 20-30 yr). When the PDO is in the same phase as El Niño-Southern Oscillation, weather effects are more pronounced. For example, when both are in the warming phase, Montanans may experience an extremely warm winter, whereas if PDO is in a cooling phase, a warm phase El Niño-Southern Oscillation may have a reduced impact.
Palmer Drought Severity Index (PDSI) A measurement of dryness based on recent precipitation and temperature. The Palmer Drought Severity Index is based on a supply-and-demand model of soil moisture.
Palmer Z Drought Index One of the Palmer Drought Indices; it measures short-term drought on a monthly scale. The Z-value is also referenced to the specific location, climate, and time of year.
parameter A variable, in a general model, whose value is adjusted to make the model specific to a given situation.
pathogen Microorganisms, viruses, and parasites that can cause disease.
peak flow The point of the hydrograph that has the highest flow.
permeability A measure of the ability of a porous material (often, a rock or an unconsolidated material) to allow fluids to pass through it.
phenology The study of periodic biological phenomena with relation to climate (particularly seasonal changes). These phenomena can be used to interpret local seasons and the climate zones.
physiography The subfield of geography that studies physical patterns and processes of the Earth. It aims to understand the forces that produce and change rocks, oceans, weather, and global flora and fauna patterns.
primary productivity The total quantity of fixed carbon (organic matter) per unit area over time produced by photosynthesis in an ecosystem.
pulse crop Annual leguminous crops yielding from 1-12 grains or seeds of variable size, shape, and color within a pod. Limited to crops harvested solely for dry grain, thereby excluding crops harvested green for food, oil extraction, and those that are used exclusively for sowing purposes.
radiative forcing The difference between the amount of sunlight absorbed by the Earth versus the energy radiated back to space. Greenhouse gases in the atmosphere, particularly carbon dioxide, increase the amount of radiative forcing, which is measured in units of watts/m2. The laws of physics require that average global temperatures increase with increased radiative forcing.
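The glossary gives no formula, but a widely cited simplified expression for the CO2 contribution (Myhre et al. 1998, used in IPCC reports) is ΔF = 5.35 ln(C/C0) watts/m2. The sketch below applies it in Python; treating 278 ppm as the preindustrial baseline is a common convention, not a value from this text.

```python
import math

def co2_radiative_forcing(c_ppm: float, c0_ppm: float = 278.0) -> float:
    """Approximate CO2 radiative forcing (watts/m2) relative to a
    preindustrial baseline; 5.35 is the Myhre et al. (1998) coefficient."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_radiative_forcing(410.0), 2))  # ~2.08 watts/m2 at 410 ppm
```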
rangeland Land on which the historical climax plant community is predominantly grasses, grasslike plants, forbs, or shrubs. This includes lands re-vegetated naturally or artificially when routine management of the vegetation is accomplished through manipulation of grazing. Rangelands include natural grasslands, savannas, shrublands, most deserts, tundra, alpine communities, coastal marshes, and wet meadows.
RCP (representative concentration pathways) Imagined plausible trends in greenhouse gas emissions and resulting concentrations in the atmosphere used in climate projection models. This analysis uses the relatively moderate and more severe scenarios of RCP4.5 and 8.5. These scenarios represent a future with an increase in radiative forcing of 4.5 or 8.5 watts/m2, respectively. The RCP4.5 scenario assumes greenhouse gas emissions peak mid century, and then decline, while the RCP8.5 scenario assumes continued high greenhouse gas emissions through the end of the century.
resilience In ecology, the capacity of an ecosystem to respond to a disturbance or perturbation by resisting damage and recovering quickly.
resistance In ecology, the property of populations or communities to remain essentially unchanged when subject to disturbance. Sensitivity is the inverse of resistance.
resistance gene A gene involved in the process of resistance to a disease or pathogen; especially a gene involved in the process of antibiotic resistance in a bacterium or other pathogenic microorganism.
ruminants Mammals that have a four-chambered stomach and even-toed hooves.
runoff Surface runoff (also known as overland flow) is the flow of water that occurs when excess stormwater, meltwater, or other sources flows over the Earth’s surface.
scenario Climate change scenarios are based on projections of future greenhouse gas (particularly carbon dioxide) emissions and resulting atmospheric concentrations given various plausible but imagined combinations of how governments, societies, economies, and technologies will change in the future. This analysis considers two plausible greenhouse gas concentration scenarios: a moderate (stabilized) and more severe (business-as-usual) scenario, referred to as RCP4.5 and RCP8.5, respectively.
shallow aquifer Typically (but not always) the shallowest aquifer at a given location is unconfined, meaning it does not have a confining layer (an aquitard or aquiclude) between it and the surface. The term perched refers to ground water accumulating above a low-permeability unit or strata, such as a clay layer.
silage Any crop that is harvested green and preserved in a succulent condition by partial fermentation in a nearly airtight container such as a silo.
specialty crop Fruits and vegetables, tree nuts, dried fruits and horticulture and nursery crops, including floriculture.
spring wheat A general term for wheat sown in the early spring and harvested in the late summer or early autumn of the same year.
Snow Water Equivalent (SWE) A common snowpack measurement that is the amount of water contained within the snowpack. It can be thought of as the depth of water that would theoretically result if you melted the entire snowpack instantaneously.
soil moisture A measure of the quantity of water contained in soil. Soil moisture is a key variable in controlling the exchange of water and energy between the land surface and the atmosphere through evaporation and plant transpiration.
storage The volume of water contained in natural depressions in the land surface, such as a snowpack, glaciers, drainage basins, aquifers, soil zones, lakes, reservoirs, or irrigation projects.
streamflow (also known as channel runoff) The flow of water in streams, rivers, and other channels. It is a major element of the water cycle.
teleconnection A connection between meteorological events that occur a long distance apart, such as sea-surface temperatures in the Pacific Ocean affecting winter temperatures in Montana. Also referred to as climate oscillations or patterns of climate variability.
test weight A measure of grain bulk density, often used as a general indicator of overall quality and as a gage of endosperm hardness to alkaline cookers and dry millers.
tillage The traditional method of farming in which soil is prepared for planting by completely inverting it with a plow. Subsequent working of the soil with other implements is usually performed to smooth the soil surface. Bare soil is exposed to the weather for some varying length of time depending on soil and climate conditions.
transpiration The passage of water through a plant from the roots through the vascular system to the atmosphere.
unconfined aquifer A groundwater aquifer is said to be unconfined when its upper surface (water table) is open to the atmosphere through permeable material.
velocity The rate of climate changes occurring across space and time.
warm days Percentage of time when daily maximum temperature >90th percentile.
warm nights Percentage of time when daily minimum temperature >90th percentile.
water quality The chemical, physical, biological, and radiological characteristics of water. It is a measure of the condition of water relative to the requirements of one or more biotic species and/or to any human need or purpose.
watershed An area characterized by all direct runoff being conveyed to the same outlet. Similar terms include basin, subwatershed, drainage basin, catchment, and catch basin.
water year A time period of 12 months (generally October 1 of a given year through September 30 of the following year) for which precipitation totals are measured.
weather versus climate See climate versus weather.
winter wheat A general term for wheat sown in the fall, persisting through the winter winter as seedlings, and harvested the following spring or summer after it reaches full maturity. | <urn:uuid:13ee98e3-0eac-4a4d-a68f-a8afbd37fb83> | CC-MAIN-2021-21 | https://www.montanaclimate.org/chapter/glossary | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988758.74/warc/CC-MAIN-20210506144716-20210506174716-00217.warc.gz | en | 0.920288 | 4,908 | 3.421875 | 3 |
The collective processes of production, consumption, and diffusion of information on social media are starting to reveal a significant portion of human social life, yet scientists struggle to get access to data about it. Recent research has shown that social media can perform as ‘sensors’ for collective activity at multiple scales (Lazer et al., 2009). As a consequence, data extracted from social media platforms are increasingly used side-by-side with—and sometimes even replacing—traditional methods to investigate hard-pressing questions in the social, behavioral, and economic (SBE) sciences (King, 2011; Moran et al., 2014; Einav & Levin, 2014). For example, interpersonal connections from Facebook have been used to replicate the famous experiment by Travers & Milgram (1969) on a global scale (Backstrom et al., 2012); the emotional content of social media streams has been used to estimate macroeconomic quantities in country-wide economies (Bollen, Mao & Zeng, 2011; Choi & Varian, 2012; Antenucci et al., 2014); and imagery from Instagram has been mined (De Choudhury et al., 2013; Andalibi, Ozturk & Forte, 2015) to understand the spread of depression among teenagers (Link et al., 1999).
A significant amount of work about information production, consumption, and diffusion has been thus aimed at modeling these processes and empirically discriminating among models of mechanisms driving the spread of memes on social media networks such as Twitter (Guille et al., 2013). A set of research questions relate to how social network structure, user interests, competition for finite attention, and other factors affect the manner in which information is disseminated and why some ideas cause viral explosions while others are quickly forgotten. Such questions have been addressed both in an empirical and in more theoretical terms.
Examples of empirical works concerned with these questions include geographic and temporal patterns in social movements (Conover et al., 2013b; Conover et al., 2013a; Varol et al., 2014), the polarization of online political discourse (Conover et al., 2011b; Conover et al., 2011a; Conover et al., 2012), the use of social media data to predict election outcomes (DiGrazia et al., 2013) and stock market movements (Bollen, Mao & Zeng, 2011), the geographic diffusion of trending topics (Ferrara et al., 2013), and the lifecycle of information in the attention economy (Ciampaglia, Flammini & Menczer, 2015).
On the more theoretical side, agent-based models have been proposed to explain how limited individual attention affects what information we propagate (Weng et al., 2012), what social connections we make (Weng et al., 2013), and how the structure of social and topical networks can help predict which memes are likely to become viral (Weng, Menczer & Ahn, 2013; Weng, Menczer & Ahn, 2014; Nematzadeh et al., 2014; Weng & Menczer, 2015).
Broad access by the research community to social media platforms is, however, limited by a host of factors. One obvious limitation is due to the commercial nature of these services. On these platforms, data are collected as part of normal operations, but this is seldom done keeping in mind the needs of researchers. In some cases researchers have been allowed to harvest data through programmatic interfaces, or APIs. However, the information that a single researcher can gather through an API typically offers only a limited view of the phenomena under study; access to historical data is often restricted or unavailable (Zimmer, 2015). Moreover, these samples are often collected using ad-hoc procedures, and the statistical biases introduced by these practices are only starting to be understood (Morstatter et al., 2013; Ruths & Pfeffer, 2014; Hargittai, 2015).
A second limitation is related to the ease of use of APIs, which are usually meant for software developers, not researchers. While researchers in the SBE sciences are increasingly acquiring software development skills (Terna, 1998; Raento, Oulasvirta & Eagle, 2009; Healy & Moody, 2014), and intuitive user interfaces are becoming more ubiquitous, many tasks remain challenging enough to hinder research advances. This is especially true for those tasks related to the application of fast visualization techniques.
A third, important limitation is related to user privacy. Unfettered access to sensitive, private data about the choices, behaviors, and preferences of individuals is happening at an increasing rate (Tene & Polonetsky, 2012). Coupled with the possibility to manipulate the environment presented to users (Kramer, Guillory & Hancock, 2014), this has raised in more than one occasion deep ethical concerns in both the public and the scientific community (Kahn, Vayena & Mastroianni, 2014; Fiske & Hauser, 2014; Harriman & Patel, 2014; Vayena et al., 2015).
These limitations point to a critical need for opening social media platforms to researchers in ways that are both respectful of user privacy requirements and aware of the needs of SBE researchers. In the absence of such systems, SBE researchers will have to increasingly rely on closed or opaque data sources, making it more difficult to reproduce and replicate findings—a practice of increasing concern given recent findings about replicability in the SBE sciences (Open Science Collaboration, 2015).
Our long-term goal is to enable SBE researchers and the general public to openly access relevant social media data. As a concrete milestone of our project, here we present an Observatory on Social Media—an open infrastructure for sharing public data about information that is spread and collected through online social networks. Our initial focus has been on Twitter as a source of public microblogging posts. The infrastructure takes care of storing, indexing, and analyzing public collections and historical archives of big social data; it does so in an easy-to-use way, enabling broad access from scientists and other stakeholders, like journalists and the general public. We envision that data and analytics from social media will be integrated within a nation-wide network of social observatories. These data centers would allow access to a broad range of data about social, behavioral, and economic phenomena nationwide (King, 2011; Moran et al., 2014; Difranzo et al., 2014).
Our team has been working toward this vision since 2010, when we started collecting public tweets to visualize, analyze, and model meme diffusion networks. The IUNI Observatory on Social Media (OSoMe) presented here was launched in early May 2016. It was developed through a collaboration between the Indiana University Network Science Institute (IUNI, iuni.iu.edu), the IU School of Informatics and Computing (SoIC, soic.indiana.edu), and the Center for Complex Networks and Systems Research (CNetS, cnets.indiana.edu). It is available at osome.iuni.iu.edu.1
Social media data possess unique characteristics. Besides rich textual content, explicit information about the originating social context is generally available. Information often includes timestamps, geolocations, and interpersonal ties. The Twitter dataset is a prototypical example (McKelvey & Menczer, 2013b; McKelvey & Menczer, 2013a). The Observatory on Social Media is built around a Terabyte-scale historical (and ongoing) collection of tweets. The data source is a large sample of public posts made available by Twitter through elevated access to their streaming API, granted to a number of academic labs. All of the tweets from this sample are stored, resulting in a corpus of approximately 70 billion public tweets dating back to mid-2010.2
An important caveat about the use of these data for research is that possible sampling biases are unknown. When Twitter first made this stream available to the research community, it indicated that the stream contains a random 10% sample of all public tweets. However, no further information about the sampling method was disclosed. Other streaming APIs have been shown to provide a non-uniform sample (Morstatter et al., 2013). Even assuming that tweets are randomly sampled, it should be noted that the collection does not automatically translate into a representative sample of the underlying population of Twitter users, or of the topics discussed. This is because the distribution of activity is highly skewed across users and topics (Weng et al., 2012) and, as a result, active users and popular topics are better represented in the sample. Sampling biases may also have evolved over time. For example, the fraction of public tweets with exact geolocation coordinates has decreased from approximately 3% in the past to approximately 0.3% due to the recent change of location privacy settings in Twitter mobile clients from “opt-out” to “opt-in.” This change was motivated by public privacy concerns about location tracking. This and other platform changes may significantly affect the composition of our sample in ways that we are unable to assess.
The high-speed stream from which the data originates has a rate that ranges in the order of 106 − 108 tweets/day. Figure 1 illustrates the growth of the Twitter collection over time.
Performing analytics at this scale presents specific challenges. The most obvious has to do with the design of a suitable architecture for processing such a large volume of data. This requires a scalable storage substrate and efficient query mechanisms.
The core of our system is based on a distributed storage cluster composed of 10 compute nodes, each with 12 × 3TB disk drives, 2 × 146 GB RAID-1 drives for the operative system, 64 GB RAM, and 2× Xeon 2650 CPUs with 8 cores each (16 total per node). Access to the nodes is provided via two head nodes, each equipped with 64 GB RAM, and 4× Xeon 2650 CPUs with four cores each (24 total per node), using 1GB ethernet infiniband.
The software architecture the Observatory builds upon the Apache Big Data Stack (ABDS) framework (Jha et al., 2014; Qiu et al., 2014; Fox et al., 2014). Development has been driven over the years by the need for increasingly demanding social media analytics applications (Gao, Nachankar & Qiu, 2011; Gao & Qiu, 2013; Gao & Qiu, 2014; Gao et al., 2014; Gao, Ferrara & Qiu, 2015; Wu et al., in press). A key idea behind our enhancement of the ABDS architecture is the shift from standalone systems to modules; multiple modules can be used within existing software ecosystems. In particular, we have focused our efforts on enhancing two well-known Apache modules, Hadoop (The Apache Software Foundation, 2016b) and HBase (The Apache Software Foundation, 2016a).
The architecture is illustrated in Fig. 2. The data collection system receives data from the Twitter Streaming API. Data are first stored on a temporary location and then loaded into the distributed storage layer on a daily basis. At the same time, long-term backups are stored on tape to allow recovery in case of data loss or catastrophic events.
The design of the NoSQL distributed DB module was guided by the observation that queries of social media data often involve unique constraints on the textual and social context such as temporal or network information. To address this issue, we leveraged the HBase system as the storage substrate and extended it with a flexible indexing framework. The resulting IndexedHBase module allows one to define fully customizable text index structures that are not supported by current state-of-the-art text indexing systems, such as Solr (The Apache Software Foundation, 2016c). The custom index structures can embed contextual information necessary for efficient query evaluation. The IndexedHBase software is publicly available (Wiggins, Gao & Qiu, 2016).
The pipelines commonly used for social media data analysis consist of multiple algorithms with varying computation and communication patterns. For example, building the network of retweets of a given hashtag will take more time and computational resources than just counting the number of posts containing the hashtag. Moreover, the temporal resolution and aggregation windows of the data could vary dramatically, from seconds to years. A number of different processing frameworks could be needed to perform such a wide range of tasks. To design the analytics module of the Observatory we choose Hadoop, a standard framework for Big Data analytics. We use YARN (The Apache Software Foundation, 2016d) to achieve efficient execution of the whole pipeline, and integrate it with IndexedHBase. An advantage deriving from this choice is that the overall software stack can dynamically adopt different processing frameworks to complete heterogeneous tasks of variable size.
A distributed task queue, and an in-memory key/value store implement the middleware layer needed to submit queries to the backend of the Observatory. We use Celery (Solem & Contributors, 2016) and Redis (Sanfilippo, 2016) to implement such layer. The task queue limits the number of concurrent jobs processed by the system according to the task type (index-only vs. map/reduce) to prevent extreme degradation of performance due to very high load.
The Observatory user interface follows a modular architecture too, and is based on a number of apps, which we describe in greater detail in the following section. Three of the apps (Timeline, Network visualization, and Geographic maps) are directly accessible within OSoMe through Web interfaces. We rely on the popular video-sharing service YouTube for the fourth app, which generates meme diffusion movies (Videos). The movies are rendered using a fast dynamic visualization algorithm that we specifically designed for temporal networks. The algorithm captures only the most persistent trends in the temporal evolution, at the expense of high-frequency churn (Grabowicz, Aiello & Menczer, 2014). The software is freely available (Grabowicz & Aiello, 2013). Finally, the Observatory provides access to raw data via a programmatic interface (API).
Storing and indexing tens of billions of tweets is of course pointless without a way to make sense of such a huge trove of information. The Observatory lowers the barrier of entry to social media analysis by providing users with several ready-to-use, Web-based data visualization tools. Visualization techniques allow users to make sense of complex data and patterns (Card, 2009), and let them explore the data and try different visualization parameters (Rafaeli, 1988). In the following, we give a brief overview of the available tools.
It is important to note that, in compliance with the Twitter terms of service (Twitter, Inc., 2016), OSoMe does not provide access to the content of tweets, nor of Twitter user objects. However, researchers can obtain numeric object identifiers of tweets in response to their queries. This information can then be used to retrieve tweet content via the official Twitter API. (There is one exception, described below.) Another necessary step to comply with the Twitter terms is to remove deleted tweets from the database. Using a Redis queue, we collect deletion notices from the public Twitter stream, and then feed them to a backend task for deletion.
The Trends tool produces time series plots of the number of tweets including one or more given hashtags; it can be compared to the service provided by Google Trends, which allows users to examine the interest toward a topic reflected by the volume of search queries submitted to Google over time.
Users may specify multiple terms in one query, in which case all tweets containing any of the terms will be computed; and they can perform multiple queries, to allow comparisons between different topics. For example, let us compare the relative tweet volumes about the World Series and the Superbowl. We want our Super Bowl timeline to count tweets containing any of #SuperBowl, #SuperBowl50, or #SB50. Since hashtags are case-insensitive and we allow trailing wildcards, this query would be “ #superbowl*, #sb50.” Adding a timeline for the “ #worldseries” query results in the plot seen in Fig. 3. Each query on the Trends tool takes 5–10 s; this makes the tool especially suitable for interactive exploration of Twitter conversation topics.
Diffusion and co-occurrence networks
In a diffusion network, nodes represent users and an edge drawn between any two nodes indicates an exchange of information between those two users. For example, a user could rebroadcast (retweet) the status of another user to her followers, or she could address another user in one of her statuses by including a mention to their user handle (mention). Edges have a weight to represent the number of messages connecting two nodes. They may also have an intrinsic direction to represent the flow of information. For example, in the retweet network for the hashtag #IceBucketChallenge, an edge from user i to user j indicates that j retweeted tweets by i containing the hashtag #IceBucketChallenge. Similarly, in a mention network, an edge from i to j indicates that i mentioned j in tweets containing the hashtag. Information diffusion network, sometimes also called information cascades, have been the subject of intense study in recent years (Gruhl et al., 2004; Weng et al., 2012; Bakshy et al., 2012; Weng et al., 2013; Weng, Menczer & Ahn, 2013; Romero, Meeder & Kleinberg, 2011).
Another type of network visualizes how hashtags co-occur with each other. Co-occurrence networks are also weighted, but undirected: nodes represent hashtags, and the weight of an edge between two nodes is the number of tweets containing both of those hashtags.
OSoMe provides two tools that allow users to explore diffusion and and co-occurrence networks.
Interactive network visualization
The Networks tool enables the visualization of how a given hashtag spreads through the social network via retweets and mentions (Fig. 4) or what hashtags co-occur with a given hashtag. The resulting network diagrams, created using a force-directed layout (Kamada & Kawai, 1989), can reveal topological patterns such as influential or highly-connected users and tightly-knit communities. Users can click on the nodes and edges to find out more information about the entities displayed—users, tweets, retweets, and mentions—directly from Twitter. Network are cached to enable fast access to previously-created visualizations.
For visualization purposes, the size of large networks is reduced by extracting their k-core (Alvarez-Hamelin et al., 2005) with k sufficiently large to display 1,000 nodes or less (k = 5 in the example of Fig. 4). The use of this type of filter implies a bias toward densely connected portions of diffusion networks, where most of the activity occurs, and toward the most influential participants.
The tool allows access to Twitter content, such as hashtags and user screen names. This content is available both through the interactive interface itself, and as a downloadable JSON file. To comply with the Twitter terms, the k-core filter also limits the number of edges (tweets).
Because tweet data are time resolved, the evolution of a diffusion or co-occurrence network can be also visualized over time. Currently the Networks tool visualizes only static networks aggregated over the entire search period specified by the user; we aim to add the ability to observe the network evolution over time, but in the meantime we also provide the Movies tool, an alternative service that lets users generate animations of such processes (Fig. 5). We have successfully experimented with fast visualization techniques in the past, and have found that edge filtering is the best approach for efficiently visualizing networks that undergo a rapid churn of both edges and nodes. We have therefore deployed a fast filtering algorithm developed by our team (Grabowicz, Aiello & Menczer, 2014). The user-generated videos are uploaded to YouTube, and we cache the videos in case multiple users try to visualize the same network.
Online social networks are implicitly embedded in space, and the spatial patterns of information spread have started to be investigated in recent years (Ferrara et al., 2013; Conover et al., 2013a). The Maps tool enables the exploration of information diffusion through geographic space and time. A subset of tweets contain exact latitude/longitude coordinates in their metadata. By aggregating these coordinates into a heatmap layer superimposed on a world map, one can observe the geographic signature of the attention being paid to a given meme. Figure 6 shows an example. Our online tool goes one step further, allowing the user to explore how this geographic signature evolves over a specified time period, via a slider widget.
It takes at least 30 s to prepare one of these visualizations ex novo. We hope to reduce this lead time with some backend indexing improvements. To enable exploration, we cache all created heatmaps for a period of one week. While cached, the heatmaps can be retrieved instantly, enabling other users to browse and interact with these previously-created visualizations. In the future we hope to experiment with overlaying diffusion networks on top of geographical maps, for example using multi-scale backbone extraction (Serrano, Boguná & Vespignani, 2009) and edge bundling techniques (Selassie, Heller & Heer, 2011).
An important caveat for the use of the maps tool is that it is based on the very small percentage of tweets that contain exact geolocation coordinates. Furthermore, as already discussed, this percentage has changed over time.
We expect that the majority of users of the Observatory will interact with its data primarily through the tools described above. However, since more advanced data needs are to be expected, we also provide a way to export the data for those who wish to create their own visualizations and develop custom analyses. This is possible either within the tools, via export buttons, and through a read-only HTTP API.
The OSoMe API is deployed via the Mashape management service. Four public methods are currently available. Each takes as input a time interval and a list of tokens (hashtags and/or usernames):
tweet-id: returns a list of tweet IDs mentioning at least one of the inputs in the given interval;
counts: returns a count of the number of tweets mentioning each input token in the given interval;
time-series: for each day in the given time interval, returns a count of tweets matching any of the input tokens;
user-post-count: returns a list of user IDs mentioning any of the tokens in the given time frame, along with a count of matching tweets produced by each user.
In the first several weeks since launch, the OSoMe infrastructure has served a large number of requests, as shown in Fig. 7. The spike corresponds to May 6, 2016, the date of a press release about the launch. Most of these requests complete successfully, with no particular deterioration for increasing loads (Fig. 8).
To evaluate the scalability of the Hadoop-based analytics tools with increasing data size, we plot in Fig. 9 the run time of queries submitted by users through OSoMe interactive tools, as a function of the number of tweets matching the query parameters. We observe a sublinear growth, suggesting that the system scales well with job size. A job may take from approximately 30 s to several hours depending on many factors such as system load and number of tweets processed. However, even different queries that process the same number of tweets may perform differently, depending on the width of the query time window. This is partly due to “hotspotting”: the temporal locality of our data layout across the nodes of the storage cluster causes decreases in performance when different Hadoop mappers access the same disks. A query spanning a short period of time runs slower than one matching the same number of tweets over a longer period. These results suggest that our data layout design may need to be reconsidered in future development. An alternative approach to improve performance of queries is to entirely remove the bottleneck of Hadoop processing by indexing additional data. For example, in the Networks and Movies tools, we could index the retweet and mention edges. The resulting queries would utilize the indices only, resulting in response times comparable to those of the Trends tool.
Finally, we tested the scalability of queries using the HBase index with the load. Figure 10 shows that the total run time is not strongly affected by the number of concurrent jobs, up to the size of the task queue (32). For larger loads, run time scales linearly as expected.
The IUNI Observatory on Social Media is the culmination of a large collaborative effort at Indiana University that took place over the course of six years. We hope that it will facilitate computational social science and make big social data easier to analyze by a broad community of researchers, reporters, and the general public. The lessons learned during the development of the infrastructure may be helpful for future endeavors to foster data-intensive research in the social, behavioral, and economic sciences.
We welcome feedback from researchers and other end users about usability and usefulness of the tools presented here. In the future, we plan to carry out user studies and tutorial workshops to gain feedback on effectiveness of the user interfaces, efficiency of the tools, and desirable extensions.
We encourage the research community to create new social media analytic tools by building upon our system. As an illustration, we created a mashup of the OSoMe API with the BotOrNot API (Davis et al., 2016), also developed by our team, to evaluate the extent to which Twitter campaigns are sustained by social bots. The software is freely available online (Davis, 2016).
The opportunities that arise from the Observatory, and from computational social science in general, could have broad societal impact. Systematic attempts to mislead the public on a large scale through “astroturf” campaigns and social bots have been uncovered using big social data analytics, inspiring the development of machine learning methods to detect these abuses (Ratkiewicz et al., 2011a; Ferrara et al., 2016; Subrahmanian et al., 2016). Allowing citizens to observe how memes spread online may help raise public awareness of the potential dangers of social media manipulation. | <urn:uuid:c93e440d-41a8-4b30-a9f8-738a6612007b> | CC-MAIN-2021-21 | https://peerj.com/articles/cs-87/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00577.warc.gz | en | 0.893315 | 5,422 | 2.59375 | 3 |
Over the last few decades, there have been numerous advancements in cancer treatments and technologies. Historically, cancer has been treated through three modalities:
used to excise visible tumors
used to prevent the proliferation of tumors using drugs
used to stop the dividing of cells through radiation
To address the fact that these procedures are relatively nonspecific, a new field of cancer treatment called immunotherapy has emerged. Immunotherapy harnesses the immune system by helping it to recognize and destroy tumor cells without attacking normal cells. Immunotherapy accomplishes these tasks by stimulating the body’s own immune cells to work smarter and eliminating some of the damage that other treatment modalities may incur.
From a theoretical perspective, developing immunotherapy tools by utilizing T cells has been of interest because:
- T cell responses are specific in their ability to distinguish between healthy and cancerous tissues.
- T cells have robust responses and upon activation can proliferate dramatically to launch a strong immune response.
- T cells have the ability to travel to remote sites, which is a crucial feature in cases of metastases and cancers that cannot be detected with current imaging technology.
- T cells can confer memory with the help of B cells that can maintain the therapeutic effect for many years after the initial treatment.
Based on the specific anti-tumor responses, many immunotherapy approaches have been proposed and techniques to isolate tumor-specific lymphocytes have been developed.
Adoptive Cell Therapy (ACT)
Adoptive Cell Therapy (ACT) is an immunotherapy approach in which anti-tumor lymphocytes (most of which are T cells)Adoptive Cell Therapy (ACT) is an immunotherapy approach in which anti-tumor lymphocytes (most of which are T cells)are harvested from the cancer patient, expanded in vitro, and then re-infused back into the patient. This procedure is often performed in conjunction with vaccines, growth factors that can enhance the in vivo impact of the transferred cells, and lymphodepleting chemotherapy in order to ensure that the body tolerates the tumor targeting cells. Since adoptive therapies physically separate the emerging anti-tumor cells from their host, it is possible to manipulate the cells and their response mechanisms in ways that are clinically relevant. In order for T cells to mount a specific response to tumor cells, they need to be able to recognize and target antigens on the tumor that are non-existent or poorly expressed in healthy tissue. Tumor-associated antigens (TAAs) were identified in the 1990s and provided definitive proof that immune cells can distinguish between cancerous and noncancerous tissue.There are several main cell sources that are able to recognize these tumor-specific antigens or exhibit anti-tumor activity that can be used for adoptive immunotherapy:
1. Tumor-Infiltrating lymphocytes (TILs). Naturally occurring T cells located within the tumors, TILs can be isolated and grown in vitro to be reinfused to the patient after sufficient growth.
2. In a patient who lacks culturable TILs, or in which growing TILs is difficult, one can genetically engineer T cells that can perform the same anti-tumor function by introducing tumor-targeting receptors or other attributes that can help the cells to efficiently eliminate cancerous cells. Since this process occurs in vitro, it is possible to engineer qualities that do not occur naturally in these cells. Thus, there are two types of receptors that are used to redirect T cells: physiological T cell receptors (TCRs) and synthetic receptors, known as chimeric antigen receptors (CARs).
3. Dendritic cells, which exhibit an extremely potent ability to present antigens to T cells, have been used as a potential therapy via a vaccine, as they can independently mount a robust immune response.
4. A subset of the population of natural killer (NK) cells known as cytokine-induced killer cells has also been discovered and targeted as a potential immunotherapy method because the cells can be readily grown in vitro and show major histocompatibility complex (MHC)-unrestricted activity against tumors.
5. NK cells have recently advanced as a model for immunotherapy because of their ability to induce antibody-dependent cellular cytotoxicity (ADCC), manipulate receptor-mediated activation, and function as a form of an adoptive immunotherapy with CAR modifications.
Tumor-Infiltrating Lymphocytes (TILs)
Tumor-infiltrating lymphocytes are the most natural source of cells that can provide the desired immune response for cancer therapies. These cells were among the first cells utilized for ACT in the 1980s, at which point it was demonstrated that TILs, cultured with IL-2, a lymphotrophic cytokine, exhibited cytotoxic activity against cancer cells in vitro. Since these cells were originally located within the tumors, they were found to be more potent than other lymphocytes in the body.
The technologies used to isolate and manipulate TILs are mostly geared toward preparing the patient’s own TILs, growing them, and reintroducing them into the patient to kill cancer cells.The adoptive cell transfer of autologous TILs has been shown to effectively mediate tumor regression in the majority of patients with metastatic melanoma, for example, and thus shows promise in terms of being able to achieve complete regression, which has been observed in a subset of patients with epithelial tumors.
In order to produce therapeutic TILs, resected tumor tissue is cultured in media containing IL-2 for approximately four to six weeks. Once enough TILs are grown from these cultures, they undergo rapid expansion during a two-to-four-week period with the aid of feeder cells, a higher concentration of IL-2, and soluble anti-CD3 antibody for two to four weeks. Next, these TILs are incubated with autologous tumor in order to select for the ones that actually react to the tumor. Once this is done, the levels of interferon (IFN)-γ secreted into the media can be measured using an IFN-γ enzyme-linked immunosorbent assay (ELISA).
Sometimes, the autologous tumor is unavailable or difficult to grow in culture, posing a problem in t selection process. To mitigate this issue, researchers have developed new methods that utilize deep-sequencing technology to identify neoantigens that are presented by the tumor, which can then be synthesized as short peptides and used to identify tumor-reactive TILs.
T Cell Receptors (TCRs)
Since not all tumors yield readily available TILs, the use of TCRs that exhibit tumor- antigen peptides-recognizing properties has emerged as an important strategy for adoptive T cell therapies. Most of the clinical efforts in this realm to date have focused on self-peptides that are upregulated in some cancers, such as the WT1 antigen, differentiation antigens such as gp100 and MART-1, and cancer/testis antigens such as NY-ESO and MAGE-A3. Most of these human “tumor-associated” antigens that are targeted by TCR-engineered T cell therapy are also expressed in normal tissues, albeit at a lower density than on the surface of cancer cells. Therefore, there is a challenge to determine what TCR affinity is necessary to confer therapeutic activity without posing a threat to normal or unrelated tissues, which is hard to anticipate. Many investigational efforts are focused on developing methods to capture neoantigen-reactive TCR genes from the patient’s peripheral blood or other samples.
Though this particular approach allows for the generation of tumor-specific T cells without the need to isolate TILs from tumors, it has a few limitations. The major drawback of this approach is HLA-restriction, where a given T cell will only recognize and respond to an antigen when it is bound to a particular MHC molecule. Another issue is the competition for pairing with endogenous TCR chains, which can lead to lower levels of tumor-specific TCRs or possible off-target reactivities of mispaired TCRs that can result in graft-versus-host reactions. To combat mispairing, scientists have started to use cysteines in exogenous TCR-constant domains that promote preferential pairing or gene editing strategies that limit the expression of endogenous TCR chains. Though this concern exists in theory, there have been no reported adverse events related to mispaired TCR formation in clinical trials.
Chimeric Antigen Receptors (CARs)
One alternative to obtaining T cells with anti-tumor reactivity while avoiding the complications that can arise from HLA-restriction is to genetically engineer T cells to express chimeric antigen receptors (CARs). CARs are receptors that have been engineered to give T cells the ability to target a specific protein by combining antigen-binding and T cell- activating functions into a single receptor. More specifically, CARs are hybrid receptors formed by the fusion of an extracellular tumor antigen-binding domain, typically a single-chain variable fragment (scFv) of an antibody, fused with intracellular T cell signaling and costimulatory domains.
CARs were originally generated by Zelig Eshhar and colleagues in the late 1980s in order to study TCR signalling. Due to their chimeric construction, CARs can provide non-MHC restricted recognition of cancer cell antigens, which ultimately results in targeted T cell activation. By incorporating chimeric molecules that recognize tumor antigens as well as actively promoting a cascade of signals that could induce further damage to tumor cells, CAR therapy can give patients an alternative that breaks the acquired tolerance of immune cells and bypasses the restrictions of HLA-mediated antigen recognition that are present with TLR-based therapies.
Typically, in order to generate CAR T cells, activated leukocytes are first removed from the patient and then processed in order to isolate the autologous peripheral blood mononuclear cells (PBMCs). In order to activate T cells that can effectively fight against cancer, the cells are incubated with IL-2, anti-CD3, and anti-CD28. Subsequently, the T cells are transfected with CAR genes through integration of a gamma retrovirus or lentiviral vectors and expanded using cytokines such as IL-7, IL-15 and IL-21. Since these CAR T cells are further divided into CD4+ and CD8+ subsets, these markers can be used to select these cells; the optimal ratio of CD4+ to CD8+ CAR T cells is of interest for maximum efficacy of this line of treatment. Prior to the introduction of the engineered T cells, the patient often undergoes lymphodepletion chemotherapy. Lymphodepletion serves the purpose of depleting endogenous T cells, including Tregs, which promotes the expansion and survival of the CAR T cells once they have been reinfused.
Due to the success of CAR T cells targeted at CD19 in patients with B cell hematologic malignancies, the U.S. Food and Drug Administration recently approved two CAR T cell therapies. Tisagenlecleucel (Kymriah™ by Novartis), is indicated for the treatment of advanced leukemia in children and young adults up to 25 years of age who have large B cell acute precursor lymphoblastic leukemia (ALL) that has either relapsed or failed to respond to previous conventional treatment. The other, axicabtageneciloleucel (Yescarta™ by Kite Pharma), is approved for treating adults who have either relapsed or refractory cancer that has not responded to previous conventional treatment(s), high-grade lymphoma, diffuse large B cell lymphoma (DLBCL), or DLBCL resulting from follicular lymphoma.
Despite these breakthroughs in the treatment of hematological malignancies, it has been difficult to use CAR T-cell therapy against solid tumors. The poor specificity and efficacy of CAR T against these tumors can be at least partially attributed to the lack of specific targetable antigens. In addition, it is difficult for CAR T cells to navigate in the hostile microenvironment of solid tumors, so future efforts are focused on alleviating these problems so that solid tumors can be better treated with this form of immunotherapy.
Recently, researchers have utilized the same CAR technology to equip other immune cells, such as NK cells and even macrophages to recognize tumors. Although these cells are probably not going to replace CAR T-cell therapy, these alternative approaches to fighting cancer could add to the arsenal of therapies that are currently being developed. NK cells, which belong to the innate immune system, act as a first line of defense against cancer cells, scanning the other cells in the body and destroying those that are defective or infected, such as tumor cells.
Preliminary studies conducted on chimeric antigen receptor-natural killer cells (CAR-NK cells) have shown that they perform as well as CAR T cells against ovarian tumors and substantially better than unaltered NK cells. In addition, CAR-NK cells have shown less toxicity compared to CAR T cells, which is a significant benefit for this new therapy.
Additionally, scientists have observed that NK cells harvested from a donor, engineered with CARs, and then administered to patients do not appear to cause the fatal immune complication of graft-versus-host disease. This phenomenon opens up the possibility to eliminate some of the expenses associated with therapies that rely on the extraction of immune cells from the patient’s blood in favour of approaches that can harvest these cells from umbilical cord blood donations, for example. Thus, one batch of human NK cells derived from induced pluripotent stem cells (iPSCs) could potentially be used to treat thousands of patients, while preventing the need to create a new product for each patient.
Learn more about CAR T-Cell Immunotherapy
Dendritic Cell Vaccinations
Dendritic cells (DCs) are leukocytes that are uniquely potent in their ability to present antigens to T c serving as a bridge between the innate and
adaptive immune systems. Due to this property, dendritic cells have been selected as a potential target for therapeutic cancer vaccines.
Dendritic cells were originally described in the 1970s by Steinman and Cohn, and they are often referred to as “nature’s adjuvant” because of the fact that they are the most potent antigen-presenting cells (APCs) and are capable of activating both naïve and memory immune responses. Since DCs
are able to independently mount a comprehen immune response, they are of particular interest in the formation of vaccines.
In order to form these vaccines, immature DCs are generated from immune cells that are removed from the patient’s blood, using IL-4 and GM-CSF, loaded with tumor antigen ex vivo, and matured. Once the dendritic cells are grown, the loaded DCs are then reinfused into the patient in order to induce protective and therapeutic anti-tumor response by allowing the vaccine DCs to present to T cells in the body.
Pilot clinical trials for patients with non-Hodgkin’s lymphoma and melanoma have shown an induction of anti-tumor immune responses and subsequent tumor regression.
Currently, there are more trials underway for DC vaccination for several other human cancers and some groups are exploring methods for in vivo targeting of tumor antigens to DCs. In addition, it has been shown that pre-conditioning the vaccine site with a potent recall antigen, such as tetanus/diphtheria toxoid, can significantly improve the efficacy of DC vaccines.
Thus, by utilizing the antigen-presenting mechanism of DCs, there are several opportunities to develop effective cancer immunotherapy. In fact, Sipuleucel-T (APC8015, trade name Provenge), developed by Dendreon Corporation, was the first DC-based cancer vaccine approved by the Food and Drug Administration in 2010 for the treatment of asymptomatic or minimally symptomatic metastatic castration-resistant prostate cancer.
Although this vaccine has been shown to be effective, as it improves median survival by 4.1 months, it is still an expensive mode of treatment due to its personalized nature. Additionally, none of the phase III clinical trials found a significant difference in the time to disease progression. These circumstances indicate the need for a significant improvement of this mode of cancer immunotherapy to become widespread.
Natural Killer (NK) Cell Immunotherapy
NK cells are a part of the innate immune system and are characterized by their lack of CD3/TCR molecules and by the surface expression of CD16 and CD56. As such, they have the distinct ability to mediate cytotoxicity in response to stimulating a target cell. In addition, NK cells interact with other cells of the immune system in several ways: For example, by producing cytokines, such as tumor necrosis factor (TNF)-α and interferon (IFN)-γ, they mediate downstream adaptive immune responses by influencing the magnitude of T cell responses. On the other hand, NK cells themselves are regulated by cytokines, such as IL-2, IL-12, IL-15, IL-18, and IL-21, and by interactions with other cells, such as dendritic cells and macrophages.
Initially, studies of adoptive NK cell therapy were oriented toward enhancing the anti- tumor activity of the NK cells. Doing so involved using CD56+ beads to select for NK cells and infusing the autologous CD56+ cells into patients, followed by the administration of cytokines IL-2 or IL-15 to encourage additional in vivo stimulation and support their expansion, but this method was found to be ineffective.
NK cell-based immunotherapy can potentially be used as a therapeutic option for solid tumors, which is more difficult for other immunotherapies; however, challenges exist such as trafficking to sites of tumors and penetrating the tumor capsule in order to exert their function. Some strategies to mitigate these setbacks are to target regulatory T cells in order to target the immunosuppressive tumor microenvironments, which could potentially help to treat solid tumors.
The most recent approaches have used a methodology in which NK cells are isolated from a patient and treated with a number of cytokines, initially IL-2, and more recently, other cytokines such as IL-12, IL-15, and IL-18. Once these NK cells have been expanded and activated ex vivo, they are infused into the patient. Studies in experimental models have shown that these cytokine- induced, memory-like (CIML) NK cells have significant activity against tumors once they are infused. Expansion of NK cells isolated from PBMCs generally includes using feeder cells in order to provide the NK cells with a stimulatory signal, however, it was also shown that NK cells isolated from cord blood could be efficiently expanded by a feeder-free system.
Cytokine-Induced Killer (CIK) Cells
Cytokine-induced killer (CIK) cells are a heterogeneous population of effector CD3+ CD56+ NK cells that can be used in a similar fashion to other immunotherapy methods because they can be easily expanded in vitro from PBMCs. They are an ideal candidate for immunotherapy approaches since they exhibit MHC-unrestricted anti-tumor activity that is both safe and effective.
CIK cells were first developed in 1991 by growing PBMCs in the presence of IFN-γ, an anti-CD3 monoclonal antibody, and IL-2. Subsequent studies showed that besides using IL-2, CIK cells could also be generated by using exogenous IL-7 or IL-12. Studies in which DCs were co-cultured with CIK cells showed that they interact with one another which resulted in changes to the surface molecule expression of both cell types and led to an increase in IL-12 expression. From a treatment perspective, combining DCs and CIK cells can be more effective than either one alone in therapies.
In order to generate CIK cells, PBMCs are first separated from the blood through centri- fugation and then treated with IFN-γ to activate macrophages. This step promotes the IL-12- and CD58/LFA-3-mediated signaling, both of which enhance the cytotoxicity of CIK cells. After one day, the anti-CD3 antibody and IL-2 are added to the cells. Every 2 days, fresh IL-2 is added to the media; after three to four weeks of culture, the generated CIK cells can be infused back into the patient.
In the last few years, treatment prospects for CIK cells have improved, and a number of therapies have been developed in order to increase cytolytic activity and safety.
Numerous CIK clinical trials are ongoing or completed, and overall the results of these studies seem promising. CIKs can be combined with additional cytokines such as IL-6 and IL-7, DCs, immune checkpoint inhibitors such as CTLA-4 and PD-1, antibodies such as anti-CD20 or anti-CD30, and CARs, which can all improve the efficiency of CIK therapy. However, there are still many avenues of CIK therapy, especially in conjunction with other technologies, which need to be explored.
Immune Checkpoint Inhibitors
Inherently, the immune system has a system of inhibitory and stimulatory pathways that adjust their response to inflict the most damage on the pathogenic targets, while preventing collateral tissue damage and autoimmunity. Immune checkpoints are often manipulated by tumors in order to escape the protective immune response. Since many of these checkpoint molecules are mediated through ligand-receptor interactions, they can be easily targeted by antibodies or recombinant proteins. By inhibiting the inhibitory checkpoint, one can amplify antigen-specific T cell responses, which ultimately generates a more robust immune response.
Cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and programmed cell death protein 1 (PD-1) are two of the many inhibitory receptors that have been used for clinical benefit. CTLA-4, which counteracts CD28 activity, was the first receptor of this nature to be targeted for clinical use. CD28 and CTLA-4 share identical ligands CD80 and CD86, but have opposite effects; when CTLA-4 is present, it competes with CD28 to bind to these ligands, thus dampening T cell activation, which is detrimental to the immune response. Therefore, it seems reasonable that blocking CTLA-4 could result in an increased immune response.
Preliminary studies with CTLA-4 antibodies showed that mice with partially immunogenic tumors demonstrated significant anti-tumor responses when treated with CTLA-4, and eventually led to the production of clinical agents. Out of several CTLA-4 antibodies tested in clinical trials, ipilimumab was the first therapy to demonstrate a survival benefit for patients, especially with regards to long-term survival with metastatic melanoma and was approved by the U.S. Food and Drug Administration in 2010.
As an immune-checkpoint receptor, PD-1 is another promising target for immune checkpoint therapy. PD-1 limits the activity of T cells peripheral tissues during inflammatory responses to infection. This particular mechanism is exploited by many tumors that express the ligand, PD-L1, in order to evade an effective immune response by binding the PD-1 that is expressed on TILs from many cancers.
Thus, therapies targeting checkpoint molecules are promising. Currently, more than 2000 clinical trials are underway for therapies that block this pathway or combine it with some of the other aforementioned therapies.
Immunotherapy Against Other Diseases
Although the majority of articles discuss immunotherapy approaches against cancer, it is important to note that a wide variety of other diseases can be addressed with immunotherapy. Immunotherapy offers the potential to treat numerous conditions in addition to cancer because many diseases invoke immune responses and can manifest in many ways, such as through inflammation. For example, HIV-1 peptide-loaded DCs have been shown to be safe and to induce immunogenicity in individuals with HIV-1.
Regarding diseases caused by heightened inflammatory responses to otherwise harmless allergens, such as asthma and allergies, allergen immunotherapy has proven effective in controlling symptoms. By repeatedly exposing an individual to the relevant allergen, one can suppress and sometimes resolve the inflammatory response to the offending allergens. Through similar mechanisms to those discussed, the tau protein implicated in Alzheimer’s disease can be targeted and eliminated through the use of antibodies and may therefore potentially improve cognition in those who exhibit signs of dementia. All in all, there are numerous applications of immunotherapy that build on the same principles that have driven the development of immunotherapy with regards to cancer.
Immunotherapy harnesses the immune system to fight a variety of diseases by suppression or activation of the immune response. The ubiquity of cancer has made the disease a target for innovations in this field, including the development of vaccines and unique therapies that harness the exceptional abilities of immune cells.
Not only can immunotherapy treat cancers, but it can also address several chronic diseases, autoimmune disorders, and allergies. Such therapies generally provide long-term protection, have fewer side effects, and are more targeted than conventional therapies.
However, several challenges still exist for employing immunotherapy treatments. Major challenges include safety issues, developing personalized combination therapies, dose refinement, cost reduction, target specificity, treatment duration, and disease management. Further research and advances may overcome many of these challenges and the future seems to be very bright for the field of immunotherapy. | <urn:uuid:57f9ece7-d4cf-4290-82de-b015edd12003> | CC-MAIN-2021-21 | https://lab-a-porter.com/2021/01/immunotherapy-awakens-the-immune-system-to-fight-disease/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992516.56/warc/CC-MAIN-20210516075201-20210516105201-00336.warc.gz | en | 0.947898 | 5,674 | 3.359375 | 3 |
HOPE IN A BOX
Hope in a Box aims to help rural public schools cultivate empathy and combat prejudice by representing LGBTQ people, stories, and history in school libraries and classrooms.
How? They donate “Hope in a Box” — literature featuring LGBTQ protagonists and themes, bespoke curricula for these materials, and ongoing coaching for educators on LGBTQ themes and terminology. Hope in a Box works with rural public schools to ensure every student feels welcome, included, and safe at school -- regardless of gender identity or sexual orientation. Not only do they provide curated boxes of books with LGBTQ characters, they include detailed curriculum for these books and mentorship and training for educators. In 18 months, they've grown from a small pilot into a national program supporting hundreds of schools in 45 states
FOR MORE INFORMATION
I've always believed the genius words of Rudine Sims Bishop, a researcher and university
professor who wrote that books should serve as mirrors, windows, and sliding glass doors
highlighting the importance of books to reflect our world and our identity. Today's podcast
features Hope in a Box, a brilliant organization that aims for every student to feel safe, welcome, and included in school regardless of their sexual orientation or gender identity.
Welcome to End Book Deserts, the podcast, featuring the innovative people and programs who work to provide book access to our nation's under-resourced areas or overlooked populations. I'm Dr. Molly Ness, lifelong reader, book nerd, teacher, educator. I've created the End Book Deserts podcasts so that all children have access to books and reading culture. The End Book Deserts podcast, a part of the education podcast network, just like the show you're listening to now. Shows on the network are individually owned and opinions expressed may not reflect others. Find other interesting education podcasts at edupodcastnetwork.com. Before I begin my chat with Joe English, founder and executive director of Hope in a Box. Let me give a bit of context. In schools, LGBTQ narratives or elements of narratives are rarely discussed. This lack of representation has proven consequences for our LGBTQ youth. At school, 85% of LGBTQ students experience discrimination. One in three skip school, at least one day of school per month because of bullying. Hope in a Box, a program that provides high-quality, positive LGBTQ literature to students in rural communities aims to improve the emotional wellbeing of LGBTQ students and to better support teachers in using this inclusive literature.
I'm joined today by Joe English, who is the founder and executive director of Hope in a Box. Joe, thanks for joining me today
Thanks for having me.
So, tell us what is Hope in a Box? What is your mission and how did you guys get started?
Yeah, so we have a very simple vision, uh, which is that every single student deserves to feel safe and welcome and included at school regardless of their gender identity or sexual orientation. Um, it should be that simple. Um, and we see literature really is a key piece of that puzzle where in the hands of a caring teacher, books and stories can really open hearts and minds, right? It can cultivate empathy. So, our program, we donate something called Hope in a Box, which is really three things. The first is curated set of books that have LGBTQ characters. Um, we have a box that's designed for middle schools and boxes designed for high schools. The second is a whole set of detailed curriculum for those books to make it as easy as possible for teachers to actually use the books in their classroom. And these are written by experienced, uh, English educators that are tied to common core standards and as of this fall, they're adapted for both in-person and remote learning environments. And the third piece is peer to peer mentorship, community building, training for educators that helps them really work through all of the questions and scenarios that come up in the classroom as they work with these materials for the first time. Um, it is very important to us as an organization that we're not just dropping materials on the front door and leaving, but rather like building a relationship with our educators in our schools over time and giving them the support that they- they want to meet. So that's a little bit of the mission
of what we do. Um, we started the program, uh, with a pilot of about 30 schools last fall, um, to see, okay, what is resonating with educators and students? What do they want more of, um, how could we improve our program? And since then, we've really deepened and updated our program. And as of this fall, we're working about 300 schools across 45 States. The ultimate goal of working with all 7,000 rural school districts in the United States.
So, let me pause for a second because, um, first of all, you're totally speaking my language in terms of the importance of getting books into the hands of kids who need to recognize
themselves who need, um, to honor their identities through the power of literature. We always hear, um, in teacher education and work around books about books as the metaphor of sliding glass doors, windows, and mirrors. And so as somebody who is a reader themselves, and truly believes in the importance of literature, explain to us, you're not a typical, you're not sort of serving the typical book desert in the sense of necessarily access, but you are serving a unique population of kids who need books that represent themselves. So why is that so important and why also is it so important to do the second component, which is teacher support? I'm a former classroom teacher. When I talk with teachers, um, these are often tough topics for people to, to, to bring up in their classroom, um, and people need support in that. So talk to us about why it's so important, these books for these kids, and the teacher education component.
Absolutely. I, let me actually answer that with the story of why we started this in the first place. Um, I mean, I grew up in a 1900 person farm town, way, way, upstate New York, um, kind of a classic small town in the United States. And when I was growing up, I never watched or saw or read anything that really spoke to me as a gay person. And for me, it would have meant the absolute world to have even one book, one gay character that would have been assigned to me in school to say, hey, you're not alone. This isn't an illness. You know, you're not doing anything wrong. This is something that you can be proud of and accept rather than, you know, shrink away from and be scared up. Um, but I, I didn't have that. I ended up leaving high school feeling pretty frustrated, pretty, uh, alone. And I didn't go back to my high school for a number of years until I decided after college, hey, I should go back and just have an open conversation with some of my teachers just to share that experience and ask, you know, now what would it take to bring some of that positive representation of LGBTQ people to school? Um, so the next generation of kids would have a healthier relationship with their identity and see themselves in their literature. And it was interesting. That conversation really was a revelation for me because my teacher said, hey, we hear you and we absolutely want to do this. We act, we care about this a lot, but we don't
really know where to start. You know, a lot of my teachers have grown up in that same school. They hadn't read a lot of this literature themselves. It was very little if there was actually no teacher training on how to speak to these topics. Um, so there's, uh, you know, there's uncertainty around them. And then the second issue, which I think addresses a lot of rural schools, not just mines that our school was chronically underfunded, so to go and ask, okay, let's buy 50 LGBTQ inclusive books and do all this training. Our roof is falling in. There was no money for even the basics. So the idea behind Hope in a Box is let's do some of the, a little bit of the legwork curating these materials, um, running some of the teacher training, doing the peer to peer mentorship, and also do it at very low cost to make sure that it's accessible and working hand in hand with educators the whole way, right. Not, not being top down, but making it a really collaborative experience, developing the materials that are right for any given context. Um, so that's a little bit of that, that backstory that I think touches on your points are why is it important, but also why, why is it necessary to also be helping educators along the way and not just dropping the materials in the front door?
So, um, first of all, I want to say kudos to the teachers that, um, that received you with open arms and also kudos to you for having the courage to go back and say, this was a gap in my
educational experience, and it was isolating for me. And so what can we do to improve this
experience, future kids? Um, I'm, I'm interested in how you have been received generally
speaking. I mean, your growth just in one year, going from 30 schools to 300, obviously speaks about, um, speaks volumes about just how much need there is for this, but how have you been received and what has, um, what has that process been like?
The reception has been very warm, which is reassuring, uh, and exciting. Um, you know, even in very, uh, traditional or homogenous communities that typically don't have a lot of LGBTQ representation in general, educators are very, uh, supportive. Like there's no educator I think that ever wants to see any student bullied. There's, there's no one that wants to see a kid ostracized. And unfortunately, a lot of teachers, they see that in their classroom and these kids are trying to understand their identity. So when we're bringing a resource to them, trying to be as open as possible and making it low cost, generally, they've been very receptive, very excited. And we see actually for any given school that we support, um, we get, you know, three to five more referrals to the program. Right, which is really exciting. So, and often you see that in like neighboring towns and counties of the central schools. So, um, I think it really does speak volumes. I think the other interesting piece in here is educators. Um, they realize that this isn't just about the LGBTQ students. Um, it's not all students. Um, yes, it helps with the school climate and making sure LGBTQ kids feel safer and more included in their schools, but for all other students, it also means that you know, you're building empathy and you're giving them a richer and more nuanced understanding of the culture and the world that they live in. So, in that sense too, it's, I think it's easier to make an argument that, hey, we should have an inclusive curriculum. Um, it's actually, you know, the strange thing should be excluding this identity from the classroom, not including it. And again, educators always get that. Um, so they're excited generally to see our program.
Yeah, I'm a teacher/educator and one of my favorite classes at my university to teach is the
children and young adult literature course. And, um, initially some of my students will say to me, I teach in a pretty upper-middle class homogenous- homogenous town, small community. So I don't need these books about diversity because my population isn't diverse. And my argument is actually you need those books more than anybody because kids are going to go out into a world that doesn't look like their little pocket or bubble, or what have you. And, um, need that, those, that understanding and that empathy, not just of, um, gender identity and sexual orientation, but race, ethnicity, religion, socioeconomic status, all of those things of diversity. So, talk to me a little bit more, I'm curious about, um, how you compile these, these, the actual book box or the Hope and the Box itself. What's it look like? What are some of the titles that you are featuring? I imagine that you're working with some pretty great authors, um, of middle grade and young adult. Um, so explain the box a little.
Yeah. Um, pulling together the box is so much fun and it's, it's an ongoing process, right? So we created the first version of this list. Um, last fall we worked with about 50, uh, educators, English teachers, librarians, and also university professors to understand, okay if we were to pull together a list of 50 of the most excellent LGBTQ inclusive books for middle and high school audiences, what would you choose? Um, it's actually very hard to boil it down to just 50, but, um, we were able to do that. Again, we have, we then take a subset of those titles. We do 25 books that are for high school and 25 for middle school from that list. And that's what we actually send them to schools. There's obviously new literature that comes out every year and we take the feedback from teachers and students on what they like, what they want to see more of. So in the year, following that first version, um, we actually refreshed that list, I think by probably 40, 50% of the titles and released a brand new version of that list, a 2.0 this fall, uh, which is actually on our website, hopeinabox.org/books and in designing the list um, diversity was very important to us and I mean, I mean that in a number of senses. So, diversity within the LGBTQ umbrella, right? Often a lot of literature just focuses on like the gay and the lesbian piece, but we wanted to make sure there was also representation of trans identities, non-binary identities, and other sexual orientations, um, diversity in terms of race, race, ethnicity, religion, nationality. It's also
important. But also in terms of the books themselves, we wanted to give a range of time periods and formats. So we have, you know, the Virginia Woolfs, the Oscar Wildes, that's on the list, but also much more contemporary work um, by say, Jacqueline Woodson, uh, Read at the Bone is a very popular title. It just came out last year. So really giving educators choice in which books, which formats, but what really works in their own contexts and for different students. Um, and that's been very exciting, I'll say are the most popular book, maybe this is not a surprise to some people that, uh, James Baldwins Giovanni's Room is always just such a hit, is beautifully written. James Baldwin himself is just an incredible author and, and an icon really in the culture. So that's always one of the most popular ones. Um, but there are others. I mean The Picture of Dorian Gray also is popularly taught in schools, but often the LGBTQ themes and undercurrents aren't taught. Right? So as we work with educators, that's often also a very interesting book, uh, to start highlighting some of the LGBT themes. Um, there, I, I, there's so many books that are exciting and resonate. I can talk about them for hours, but, um, that, that list, I believe we can like get the on the website, so we share that with folks cause it's a good list.
And how are teachers using these books? Are these whole class read alouds? Are they just books that go in the class library and kids can do independent reading of them? I would imagine there's a mix of the applications.
Yeah. It's a great question. And it varies quite a bit school to school. So, um, typically it will start by being like a classroom library or a library builder, um, and students and educators can have on a one-off basis, we'll say, hey, this, this could be an interesting book for you to read or students will check them out. Um, but over time we see that a lot of the educators we work with will then, you know, assign one of the books is an option for free reading. It's like three options that their class can choose from. Um, and then, you know, over time in a number of our schools, then they will request a class set from us. We can provide for a given book and then work that into their curriculum more formally. Um, so it's really up to the educator, but we support them at all in all cases.
So you guys are a new organization and have grown leaps and bounds in just a year or two. Um, where do you see yourself going down the road? And I'm curious if, um, I know you're working at the middle-grade level and the, um, young adult level. I'm curious if there is thoughts for picture books, um, early elementary, early childhood, and then just where you see yourself going in the down- in down the road.
Yeah. Um, so this spring actually, uh, you're reading my mind. Um, this spring, we're going to be really saying, uh, K-12, uh, offering as well because there's just, there's been so much excitement among elementary level teacher, but we just, we don't have something for them yet. So it'll be a parallel offering, will be the book list. It'll be the library builder. And then a set of curriculum and training as well. So you've heard it here first, um, keep an eye out. That'll be the spring. Um, the longer-term aspiration I think is really a combination of breadth and depth. Um, as I mentioned earlier, our hope really is to support all 7,000 rural school districts in the United States. Um, that's probably something that's not going to happen in the next year or two. It's probably more in like the five to 10 year, um, range, but it is our hope to make LGBTQ inclusive education, the norm, not the exception. Um, so scale is really important to us. And I think the other piece is breadth or sorry, depth. Um, so much of the classroom experience and expectations for teachers and students are in flux right now. Um, as you know so making sure that we are responsive to what educators need and are feeling in their classrooms and updating our curriculum, the type of training we offer maybe even the materials that we provide, whether that's, you know, a combination of e-books and physical books, um, but making sure that we are being responsive, uh, as we grow the organization is very, very important.
Well, um, I know that I, um, both in my work, in my community itself, um, there has been a
recent organization that has grown sort of from parents and community members meant to
support issues of gender identity and sexual orientation. I know I'm going to refer them to your booklist as well and share out in my work with pre-service and early career teachers. Um, I will link all of those resources on the End Book Deserts, um, website. So I'm, I'm looking forward to seeing that. And I wanted to wrap up by asking you the question that I ask all of my guests. In the spirit of, um, building, reading culture, and embracing our reading selves and our reading lives, I ask every guest on the podcast to talk to me about a book that has had a profound impact on them. It can be a book from your past or present. Um, it is a little bit different than your favorite book. It's just that book that sort of has shaped you in some capacity. And I would imagine like me, you're a lifelong reader, so it's hard to narrow it down to one, but what is that book that just really has, has influenced you?
Yeah, it's an almost impossible question because there's, there's so many, but I'll actually go back to the one that I mentioned earlier, which is Giovanni’s Room by James Baldwin. Um, it was the first book that I read with a gay character. Um, and this is when I was 22. Um, it's crazy to think, you know, you, you go through your entire childhood and adolescence relating to stories and narratives that you can, you can kind of understand and relate to a little bit, but there's never really sole guidance first really into someone's story and understanding them. It's such a fundamental level and it's, it's an editor for people who read it. It's a tragic story, but I think beautifully talks about the joys of relationship that have the pressure of not being the norm in their society, but also the challenges that are thrown at you by society, but also thrown at you by yourself and all of the internalized fears that you've accumulated over many years. And it was just it's, um, being able to relate to a character in a story that deeply was profound. Um, so that's one that's important to me.
Well, um, for obviously, um, listeners out there, can't see, I'm looking at Joe right now and you look like you're like a day over 22. So, you are not somebody who went through education that long ago. And the fact that you didn't meet a character like you, that you could identify with until 22 is really pretty, pretty profound and staggering. And the work of Hope in a Box is, um, making sure that no more children have that experience. So, we're so grateful for the work that you do and to have connected with you. Um, I will be sharing out your resources on my website, and, um, thank you for making the time to speak with us today.
Excellent. Thanks for having me.
It's now time for the portion of today's episode called Related Reading, where I feature a book from my personal or professional shelf that relates to the topic at hand. A few weeks ago, we lost a pioneer inequality, Ruth Bader Ginsburg. I'm still reeling from her death, particularly in 2020 a year where the hits just keep on coming. In my house, we've been celebrating her life and legacy with a picture book by Debbie Levy called I Dissent. Ruth Bader Ginsburg makes her mark and the follow-up graphic novel titled Becoming RBG: Ruth Bader Ginsburg's Journey to Justice. We cannot overestimate Ginsburg's impact on LGBTQ rights as demonstrated by her 2015 writing, “the constitution promises liberty to all within its reach and same-sex couples ask for equal dignity in the eyes of the law. The constitution grants them that right.” Debbie Levy's picture book and graphic novel make the complexity of our legal system approachable and inspiring for young readers. My daughter and I have returned to these books many times in the past few weeks to understand Ruth’s legacy, to celebrate her steadfast work towards equality, and to appreciate the power of a petite woman armed with a pen and legal pad. That wraps it up for this episode of End Book Deserts. If you know of a person or program doing innovative work to get books into the hands of young readers, email me at email@example.com. For more about my work and for more about the program featured on this episode, check out our webpage www.endbookdeserts.com. Follow me on social media at End Book Deserts and share out your stories and reactions with the hashtag end book deserts. Thanks to Dwayne Wheatcroft for graphics and copy and to Benjamin Johnson for sound editing. Until the next episode, happy reading. | <urn:uuid:b741864f-aab6-49ea-9bb2-ca7730038800> | CC-MAIN-2021-21 | https://www.endbookdeserts.com/hopeinabox | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989690.55/warc/CC-MAIN-20210516044552-20210516074552-00454.warc.gz | en | 0.980024 | 4,984 | 2.59375 | 3 |
When recorded with specialized equipment, giraffes can be observed moving their long necks and listening to each other as these infrasonic sounds are created. This borders the frequencies often referred to as infra-sound. But, did you know that giraffes hum, and only at night? They go Hee-Haw. Adult giraffes squeeze air up their long tracheas, and through their larynx (voice box). They’re also known to produce a mild humming sound during the night, perhaps … And of course, the bigger the male, the bigger the throat, and the more deep and impressive the resonating coughs. A similar system exists in human feet and lower legs (which is why you don't have blood rush to your feet when you are sitting there, but if you turn upside down you do have blood rush to your head). Giraffes have the same number of neck vertebra as humans; seven. How do baby giraffes sleep? That same data collected from zookeepers attributes as many as 12 different sounds to young giraffes. The world's population of wild giraffes are on the African continent, in its deserts, savannas, and grasslands. The responses were hilarious, but no one knew what a giraffe sounded like! That is a big grunt or belch. It is fascinating - thanks for the comments, everyone! Giraffes bleat, brrr. Zoo managers and giraffe keepers say they had never heard this humming until the researchers played the audio recordings, so they can't be certain it isn't just a version of giraffe snoring! The following video shows aggressive male behavior that ends in the defeat of one of the males. However, young giraffes are a different matter. New research in bioacoustics shows that adult giraffes use infrasound: a sound that is too low for human ears to detect. A group of giraffes exhibiting the alarm soun. I didn’t know about the low pitched vocal sounds that are too low a frequency for humans to detect. The louder, and more raucous the cough, the more ardent the desire. this is pretty cool i didn't even know they made noises haha. One of my favorite parts of this video is listening to the cute squawking sounds the baby giraffe makes. You probably didn't expect giraffes to sound like this Scientists have recorded giraffes making low-pitched humming sounds at night. Unlike an antelope's horns, however, the giraffe's ossicones are formed from ossified cartilage and entirely covered in skin and fur. @arunav-sarkar: so agree! Did you know that giraffes make sounds? That depends on the age of the giraffe. A giraffe's neck alone is 6 feet (1.8 meters) long and weighs about 600 lbs. And if the kid needs a scolding? However, as they age, so does their neck, hence becoming less audible. It’s interesting how various animals evolve with different physical makeups that are necessary for survival. Newser staff (NEWSER) – What does the giraffe say? Another fact of interest is that giraffes have prehensile lips. But despite what some sources might say, giraffes do have a well developed larynx, which is located up at the head end of the neck. Susan Hazelton from Sunny Florida on March 18, 2011: Now I know why I never heard a giraffe make a sound. I finally realized I was hearing it when their stomach pumped their cud up that long neck so it could be chewed again. voice box), but rarely use it. The human ear can typically hear sounds from about 125 Hz to about 8000 Hz (young children can hear higher frequencies than this). These are all sounds that may commonly be heard during the day. Giraffes usually only have a single baby, born after a 15-month gestation period. 
The study did not address the empirical research that has concluded that it is the young giraffes that make the most sounds humans can hear. Giraffes make many sounds and noises. Biologists have long been curious to know whether giraffes produce any substantial sounds. The giraffe's chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its distinctive coat patterns.It is classified under the family Giraffidae, along with its closest extant relative, the okapi.Its scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. 2018-10-15T15:57:36Z Comment by Nikki Murphy 6. For example, the horse's voice sounds like a whinny, the cow's voice sounds like a moo, the lambs' and ram's voice sounds like a bleat, and the pig's voice sounds like an oink. Young giraffes make all types of sounds, including grunts, moans, snores, bellows, snorts, coughs, bleats, mews, hissing, whistle-like cries, and flute-like sounds. Part of the reason I'd never heard what a giraffe sounds like is that they usually don't make any sound. I guess there's too much facts about giraffes and we never learn enough of them. Young Giraffe Sounds. Dogs bark, sheep bleat and mice squeak, but what sounds do giraffes make? All giraffes have vocal cords and make sound, but what sounds do they make? Scientists finally know what sound a giraffe makes. But viewers are often left wondering if the mum-to-be is asleep or awake. Giraffes make an audible sound when they are young. Giraffe, (genus Giraffa), any of four species in the genus Giraffa of long-necked cud-chewing hoofed mammals of Africa, with long legs and a coat pattern of irregular brown patches on a light background. How many different sounds do giraffes make? But because they’re able to breed all year round, giraffes don’t need to ‘resynchronise’ with the seasons each time they give birth. Despite their long neck, giraffes have seven cervical vertebrae - the same number of neck bones that all mammals share. I was curious about giraffes too! 0 0. Researchers from the University of Vienna, Austria, reviewed nearly 1,000 hours of sound recordings from three European zoos. I was even told that giraffe sound like a hiccuping horse crossed with a wallowing pig. burst, cough, growl, grunt, low, moan, moo, sneeze, snore and snort. Create the best meme sounds and soundboards using Blerp. phoenixarizona from Australia on March 28, 2012: I cannot thank you enough for answering my question! … Here I talk about what sounds giraffes make. Yes, male giraffes do fight, and yes, it is usually over mating. This is a list of animal sounds.This list contains words used in the English language to represent the noises and vocalizations of particular animals, especially noises used by animals for communication.The words which are used on the list are in the form of verbs, though many can also be used as nouns or interjections, and many of them are also specifically onomatopoeias (labelled "OP"). Nor could they positively conclude that mature adult giraffe sounds are more limited in the sounds they make. The sound, if it were audible to human ears, would probably be a whooshing "PSSHHH!" Elephants do the same thing - they often vocalize, and we can't hear most of their vocalizations. The babies lay down and tuck their legs beneath their bodies, using their rumps to rest their heads. 
The resident expert consulted with a zoo vet and learned that although giraffes are generally quiet animals, they can make a bleating sound, similar to that of a young calf or sheep. 1 decade ago. They are the tallest animals in the world with bluish tongues and cute faces. A giraffe's tongue can be 18 to 20 inches long. There are some forested giraffe habitats in Kenya, but these are the exception rather than the rule. A giraffe's heart can be up to two feet long and weigh 25 pounds. I was on a church trip and heard the deepest noise I've ever heard. She's now nearly eleven! A giraffe's feet are cloven but shaped like a dinner plate and up to 12 inches across. SunnyDays. WOW this was an awesome hub and I VOTED IT AS SUCH (Contest or no contest). Leah Lefler (author) from Western New York on February 25, 2011: I started wondering after we went to a Disney World show, and they asked "What sound does a giraffe make?" Contrary to the belief of many that giraffes don’t make a sound or noise, giraffes do hiss, snort, make “whistle-like cries,” as well as create low-frequency noises that can’t be heard by the human ear. Spots: Each giraffe has a pattern of spots that is completely unique to them, much like a humans’ fingerprint. Jens Meyer/AP Photo/File. This one sounds for me like a sheep. As giraffes are rarely heard, many people think they are mute. Lv 5. Retrieved from Treknature.com. I have never heard one, other than a baby giraffe at a wildlife park once. Sounds of Silence. Now, with audio evidence from researchers and first-hand knowledge from zookeepers and giraffe managers, we can finally attempt to answer that question. Giraffes are the tallest living animals in the world, according to the Smithsonian National Zoological Park. "What does a GIRAFFE sound like?" The most popular color? Calves (baby giraffes) bleat similar to that of a goat, coughing, snorting, and other sounds have been reported. Young giraffes generally only mew or bleat when they are under a year old. Giraffes indeed do make noise. Giraffe don't have vocal chords, that was on a programe the other day I saw. However it is useless to just go to a zoo in an attempt to hear them, because giraffes don’t tend to make noise. Low-frequency noises are common in many animals, including the big cats. Check out the audio recording below! You've left out a sound I heard giraffes make repeatedly in Africa. Giraffes can run almost 35 mph for a brief time. I heard a giraffe make a fart sound it was so funny that I almost peed my pants lol (:. GA Anderson is a freelance writer for private and commercial publishing platforms. Recorded evidence suggests that as they mature, their vocabulary begins to consist primarily of infra-sound "whooshes" of air, or that nighttime "humming" discovered by researchers. They are generally quiet, but will vocalize by emitting moans, hisses, snores, hisses, coughs, grunts, moos, snorts, bleats (similar to that of a young calf or sheep), low notes, low, fluttering sounds or flute-like sounds or whistles. Adult giraffes do not often make audible noise to human ears, though they certainly have the vocal cords to do so. Why is it that giraffes have the shortest sleep cycle? What do you think of the answers? Is it sound or is it sounds? Leah Lefler (author) from Western New York on April 18, 2018: I agree, Glenn. Giraffes indeed do make noise. It’s absolutely adorable. The knob-like appendages on a giraffe's head are called ossicones and are similar to an antelope's horns. 
This makes one wonder if there are lifeforms we don’t even know about since they may not be perceived by us. My daughter asked me when she was three what sound a giraffe made and I could not find out! … Giraffes Caught Humming in the Midnight Hour. The giraffe is the only animal born with horns. Thanks for the interesting info. I've actually heard a giraffe before. Do they have a vocabulary? Leah Lefler (author) from Western New York on September 30, 2018: Giraffes are fascinating animals. Denise Handlon from North Carolina on March 07, 2011: It's one of my favorite animals also. In addition, giraffes will grunt, snort, hiss, or make strange flute-like sounds. They hum, but only during the nighttime, a … mythbuster from Utopia, Oz, You Decide on March 08, 2011: I've always wondered... now I know. Discover your favorite sound bites, sonic branding, and voice clips here on blerp. Giraffes will communicate alarm or danger by stamping their feet and emitting loud snorts or grunts. I have seen pictures of the Giraffe Manor in Kenya, Zia, and it looks amazing! Giraffes have muscles in their arterial vascular system that act like "check valves" that keep them from getting dizzy or blacking out when they raise or lower their heads, which could be a distance of 15 to 20 feet. Adult Giraffes Whoosh. Yet there are so many prey animals in the wildlife ecosystem. That same data collected from zookeepers attributes as many as 12 different sounds to young giraffes. You guessed it: white. Ancient people believed the giraffe resembled a camel with leopard spots. Giraffes look like VERY TALL horses! I have not personally heard an adult giraffe make a sound, as the vocalizations are below my range of hearing. The most common giraffe sheep material is cotton. Well you're in luck, because here they come. 2019-09-20T06:03:05Z Comment by toeiii. Late at night, when humans are rarely around, giraffes do something that until now has never been documented. As for the way our blood vessels have valves in our legs for the same reason as discussed, it's interesting how our bodies have evolved to live on a planet with the gravity that we have. Although it isn't really a mystery, the question "what sound does a giraffe make?" Researchers attempting to document the sounds made by giraffes recorded almost 1,000 hours of audio at three different European zoos, even leaving their recording equipment in the enclosures at night. All of this information is great, but the question remains: what sound does a giraffe make? The giraffe lives in the savannas of Africa, with a range extending from Chad to South Africa. Giraffes do have a larynx (voice box), but perhaps they couldn't produce sufficient airflow through their 13-foot long (4 meter) trachea to vibrate their vocal folds and make noises.. Infrasound is able to travel long distances, across the savannas the giraffes must travel in search of food. Giraffes are the tallest of all land animals; males (bulls) may exceed 5.5 metres (18 feet) in height, and the tallest females (cows) are about 4.5 metres. Calves will bleat, moo, or make mewing sounds. And we're not stretching the truth! Nice article. The preferred food is the Acacia tree, which the giraffe reaches with a long neck and a prehensile tongue. Leah Lefler (author) from Western New York on December 12, 2013: You can hear young giraffes, Jeff, but by the time the animals have matured, their vocal cords produce sounds outside of our hearing range. Do giraffes make sounds? Sign in. 
Like mothers and their children everywhere, mama giraffes have a special set of sounds they use just with their offspring. Odds are you won’t hear a peep even if you watch for hours. Although there are territorial fights and disputes, the most common cause for male-to-male fights and confrontations are over dominance in mating issues. According to audio recordings of giraffes taken at three European zoos, the long-legged animals sometimes produce "a low-frequency vocalization with a rich harmonic structure and of varying duration" at night, the researchers wrote in the study. It might not sound like "whispering sweet nothings" to us, but to female giraffes, those coughs are a real turn-on. Interesting hub. I absolutely love seeing them, and I hope to be able to travel to see them in Africa sometime in the future! Mother giraffes give birth standing up, which means a newborn's introduction to the world is a six-foot drop to the ground! Young giraffes make all types of sounds, including grunts, moans, snores, bellows, snorts, coughs, bleats, mews, hissing, whistle-like cries, and flute-like sounds. Elephants use a similar communication system, inaudible to human ears. BMC Res Notes 8, 425 (2015). They're talking... we just can't hear them! Many people wonder why giraffes do not faint when taking a drink since the animal's head is below its heart for an extended period of time. Long term space travel would be very difficult for humans, since our bodies have evolved with gravity. A young giraffe being restrained for a veterinary exam may call out for its mother in distress, making a mooing type of noise. And we're not stretching the truth! It is really fascinating, Glenn! But they are usually up and walking in minutes. The myth that adult giraffes are silent, however, is false. Leah Lefler (author) from Western New York on November 26, 2014: I would love to hear a giraffe, Nathan! Empirical and anecdotal evidence from zookeepers and giraffe managers supports that mature giraffes primarily snort and grunt, but a recent eight-year-long, three-zoo study recorded over 940 hours of a third sound—humming—heard only at night. I find it fascinating that so many animals we once considered "silent" are quite vocal - we just have difficulty hearing their voices! However, young giraffes are a different matter. Giraffes are fairly quiet beings: They do have a larynx (a.k.a. Our sensory limitations would make recognizing (or even detecting) alien life forms difficult. They don’t have to worry about being on the run as the adults have that responsibility. 0 0. Leah Lefler (author) from Western New York on October 13, 2011: Elephants have a similar system of infrasound vocalizations. Great hub. Thanks for sharing... really enjoyed reading this information and the videos were an added plus! They hum . Dogs bark, sheep bleat and mice squeak, but what sounds do giraffes make? So far, this gentle giraffe humming has only been heard and recorded at night. They even make infra-sound whooshes that are hard for people to hear. New research in bioacoustics shows that adult giraffes use infrasound: a sound that is too low for human ears to detect. ... Another popular explanation is that giraffe do actually make sounds, but we can’t hear them. Love is in the air, and so are raucous coughs emanating from a six-foot throat. From bleats, mews, coughs, grunts and snores from senior giraffes, and even hisses from young giraffes, giraffes make a diverse array of noises to communicate. 
The blood vessels are quite necessary in giraffes so they can drink without having too much blood rush to their heads. Did you scroll all this way to get facts about giraffe sheep? I love animals and nature and I have never asked myself this question.I've never heard any one discuss it on any TV programme either. Giraffes are the tallest land animal and can grow up to 18 feet tall. Fighting/Confrontations (sometimes used as danger alarm signal), Used by young giraffes indicating alarm, fear, or wants. Well-written and informative. how high am i. Try blerp on iMessage, iOS, Android, Google Assistant, and Discord. They don't oink, moo or roar. The answer to your question is not so uncommon. It's long been assumed that unlike other animals, giraffes are largely silent beasts. It's all the variety of topics and this was a great one. Bone loss, edema, and other medical issues would become worse over time in space. If you don’t believe the video, check it out the next time you head to the zoo. The most amazing thing is what you explained about the valves in their blood vessels to prevent blood from filling the head when they bend down to drink, considering that long neck. Giraffes only live for 15 to 25 years in the wild. A baby giraffe may "moo," especially if it is in a stressful situation. 1 decade ago. ga anderson (author) from Maryland on March 28, 2012: @Phoenix - Thanks for reading "What Sounds does a Giraffe Make?," and I'm glad it answered your question. I love giraffes, too - "giraffe" was one of my son's first signs (before he could talk)! You can sign in to give your opinion on the answer. They hum, but only during the nighttime, a new study finds. The giraffe's fur has a characteristic scent and may act as a defense mechanism due to anti-parasitic and antibiotic properties. As to what the sound actually sounds like, that depends on the breed of goat. giraffes primarily use infra-sound to communicate. Male giraffes can grow up to 18 feet tall and weigh 3,000 pounds. (Mom carries the baby for 14 to 15 months), Baotic, A., Sicks, F. & Stoeger, A.S. Nocturnal “humming” vocalizations: adding a piece to the puzzle of giraffe vocal communication. Sometimes jokingly referred to as being similar to a husband's snore, this sound was described by a Wired article as being at the low-end level of human hearing at a frequency of about 92Hz. So many of the animals we regard as 'silent' are simply vocalizing outside of our hearing range! The giraffe's neck can be 6 feet long, and its tongue can stretch 20 inches. It's long been assumed that unlike other animals, giraffes are largely silent beasts. Don't Forget to Breathe If you look at a giraffe study many scientists have recorded that a giraffe makes a sound most like a sheep. Adult giraffes squeeze air up their long tracheas, and through their larynx (voice box). If they become alarmed, a simple snort is often used to alert the herd of a possible threat. I have a naturally deep voice, but I couldn't go that low if I tried. Question: Have you ever heard a giraffe's sound before? Anonymous. Leah Lefler (author) from Western New York on February 26, 2011: I do believe there is a children's book with a similar title - I saw it when I was researching this hub. Nice hub...I picked it to read because the title caught my attention...would make a good children's story book. And what do those sounds mean? 2017-09-16T06:37:38Z Comment by Danny L Harle. In other words, giraffes have vocal cords. 
That is how they can get around acacia thorns to eat the tender leaves. The sound is very similar to a young calf calling out to its mother! Elephants are similar and produce complicated vocalizations we cannot detect with human ears. Giraffes are really tall mammals, with 3-meter-long lanky necks. The sound is called a bleat. The sound, if it were audible to human ears, would probably be a whooshing "PSSHHH!" Unfortunately, human ears are simply too insensitive to detect the sounds! By the way, it sounded like a moo or a grunt, not a woosh. Cute. A page from the Indianapolis Zoo confirmed this fact, and stated that giraffes can also make a "low, fluttering sound." Their spots also have far more use than the obvious camouflage they provide and around each spot is a large blood vessel which branches off in a complex system of blood vessels underneath the spot. Each animal's coat has a unique pattern, much like human fingerprints. Always wondered what sound a Giraffe makes; now I know. For the first time in human history, we are finally able to hear an adult giraffe vocalize! Astronauts in the ISS need to do various physical exercise to compensate for the lack of gravity. The giraffe is also related to deer and cattle and was originally called the camelopard by ancient English speakers. Females will call their young by whistling or bellowing. Newborns are approximately six feet tall when born. Researchers (such as Liz von Muggenthaler) are able to record the infrasound and present it in a visual fashion. There are 403 giraffe sheep for sale on Etsy, and they cost $12.64 on average. 0 0. Giraffes give birth standing up, so the baby giraffe enters the world with a drop to the ground. This is why i love HP . is an audio clip, sound button, sound meme, found on blerp! Do giraffes make a noise? Arden Dier . Retrieved from SanDiegoZoo.org, TrakNature.com, 2011. The only thing I can compare it to is a ship creaking, but it's still much deeper than that. The giraffes in this scene are shown "swinging" their heads at each other. E Jay from Colorado Springs, Colorado on February 26, 2011: Giraffe's have always been one of my favorite animals. Thanks for sharing! If you see baby giraffes sleeping, you’ll notice that they always sleep in the position that their adult counterparts choose to avoid. Giraffe fighting sounds are loud snorts and moans, with grunts thrown in, (using a "danger" sound), to intimidate the other male. They use loud bellows when searching for the kid(s) which can be heard as much as a mile away, and whistling or flute-like sounds for other communications, like calling them home. Our understanding of nature is often restricted by our own sensory limitations. If you look at a giraffe study many scientists have recorded that a giraffe makes a sound most like a sheep. Giraffes use loud coughs to court the females they want to mate with. giraffes sound kinda like horses. has been a bit of a puzzle to animal behavior researchers. Others have suggested giraffes use low frequency “infrasonic” sounds – sounds below the level of human perception – much like elephants and other large animals do for long-range communication. Somebody has to know. Along with that research came this sound clip from BMC. They have baige mixed with orange skin and brown patches. EL SONIDO DE LA JIRAFA BREVE AMPLIFICADO - THE SOUND OF SHORT GIRAFFE AMPLIFIED - visita mi web carlosmanfacebookx.blogspot.com Sadly, about 50% of giraffe calves do not survive their first year. 
So well done and its awesome/beautiful/useful/up here. Colonizing low gravity planets would lead to the same issues. As well as their orchestral horns and trumpets, elephants make low-pitched infrasound that are far below the range of human ears. I have never heard one make a sound. How do they communicate? https://doi.org/10.1186/s13104-015-1394-3, San Diego Zoo. | <urn:uuid:3e11a64a-9869-403c-bbd1-b30598e1fb94> | CC-MAIN-2021-21 | http://neda.psdeg-psoe.org/9jv2ji/d1fd03-do-giraffes-sound-like-sheep | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00216.warc.gz | en | 0.957725 | 5,898 | 2.90625 | 3 |
Economic thought may be roughly divided into three phases: premodern (Greco-Roman, Indian, Persian, Islamic, and Imperial Chinese), early modern (mercantilist, physiocrat), and modern (beginning with Adam Smith and classical economics in the late 18th century). These forays into economic thought contribute to the modern understanding, ranging from ancient Greek conceptions of the role of the household and its choices to mercantilism and its emphasis on the hoarding of precious metals. Islamic economic thought, for instance, was founded on the free and unhindered circulation of wealth so that it would reach even the lowest echelons of society. In medieval Europe, economic exchanges were regulated by feudal rights, such as the right to collect a toll or hold a fair, as well as by guild restrictions and religious restrictions on lending. Niccolò Machiavelli, in his book The Prince, was one of the first authors to theorize economic policy in the form of advice: he stated that princes and republics should limit their expenditures and prevent either the wealthy or the populace from despoiling the other. Systematic modern economic thought, however, emerged only in the 17th and 18th centuries, as the western world began its transformation from an agrarian to an industrial society.

While economists do not always fit into particular schools, particularly in modern times, classifying economists into schools of thought is common. One broad taxonomy divides the schools into three classes: schools of political economy (from ancient times to 1871), neoclassical schools (from 1871 to the present), and alternative schools. A more informal summary, posted at Zero Hedge, breaks economic theory down into seven schools of thought: fascism, neoclassical economics, socialism, Keynesianism, monetarism, Austrianism, and supply-side economics. In taxonomies of this kind, some schools regard monetary mechanisms as a key part of the engine determining the level of economic activity, while others adopt essentially non-monetary perspectives.

Economists generally specialize in either macroeconomics, which deals broadly with the economy as a whole, or microeconomics, which deals with specific markets or actors. Mainstream economics encompasses a wide (but not unbounded) range of views and is distinguished from heterodox approaches and schools within economics. Within the macroeconomic mainstream in the United States, distinctions can be made between saltwater economists, generally associated with universities on the coasts, and the more laissez-faire ideas of freshwater economists, who generally hail from the interior of the nation. Disputes within mainstream macroeconomics tend to be characterised by disagreement over the convincingness of individual empirical claims, such as the predictive power of a specific model. In this respect they differ from the more fundamental conflicts over methodology that characterised previous periods, like those between Monetarists and Neo-Keynesians, in which economists of differing schools would disagree on whether a given work was even a legitimate contribution to the field.
The Classical school, regarded as the first school of economic thought, is associated with the 18th-century Scottish economist Adam Smith and those British economists who followed, such as Robert Malthus and David Ricardo. Classical economics has a distinct theory of value, distribution, and growth: it focuses on the tendency of markets to move to equilibrium and on objective theories of value, and it tended to stress the benefits of trade. These classical economists believed that competition was self-regulating, that resources are efficiently distributed by the market's "invisible hand", and that governments should take no part in business, whether through tariffs, taxes, or any other means, unless it was to protect free-market competition.

Neo-classical economics differs from classical economics primarily in being utilitarian in its value theory and in using marginal theory as the basis of its models and equations. It is the dominant form of economics used today and has the largest number of adherents among economists.
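To make "marginal theory as the basis of its models and equations" concrete, consider the standard equimarginal condition from consumer theory (an illustrative sketch added here, not drawn from the original text): a utility-maximizing consumer facing prices p_x and p_y allocates spending so that the marginal utility obtained per unit of currency is equal across goods,

\[
\frac{MU_x}{p_x} = \frac{MU_y}{p_y},
\]

for if the two ratios differed, shifting a unit of spending toward the good with the higher ratio would raise total utility. Value, on this view, is determined at the margin rather than by objective costs of production.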
Karl Marx built his economic analysis upon Ricardo's theories; as a result, Marxian economics, which descends from classical theory, is usually considered part of the Classical School tradition. The Marxian school focuses on the labour theory of value and on what Marx considered to be the exploitation of labour by capital. In Marxian economics, the labour theory of value is thus a method for measuring the exploitation of labour in a capitalist society, rather than simply a theory of price.
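In standard Marxian notation (an illustrative addition, not part of the source text), the value w of a commodity decomposes as

\[
w = c + v + s,
\]

where c is constant capital (materials and depreciation), v is variable capital (the wages paid for labour-power), and s is surplus value. The rate of exploitation is then

\[
e = \frac{s}{v},
\]

so the labour theory of value serves here as a gauge of how much unpaid labour capital appropriates, rather than as a predictor of observed market prices.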
Economists generally specialize into either macroeconomics, broadly on the general scope of the economy as a whole, and microeconomics, on specific markets or actors. The Major Schools of Thought 1. Anarchist economics comprises a set of theories which seek to outline modes of production and exchange not governed by coercive social institutions: Thinkers associated with anarchist economics include: Distributism is an economic philosophy that was originally formulated in the late 19th century and early 20th century by Catholic thinkers to reflect the teachings of Pope Leo XIII's encyclical Rerum Novarum and Pope Pius's XI encyclical Quadragesimo Anno. The first school of thought, structuralism, was advocated by the founder of the first psychology lab, Wilhelm Wundt. Demonological schools of thought would be most likely to explain the cause of a crime as the influence of Satan. In its infancy the application of non-linear dynamics to economic theory, as well as the application of evolutionary psychology explored the processes of valuation and the persistence of non-equilibrium conditions. Some influential approaches of the past, such as the historical school of economics and institutional economics, have become defunct or have declined in influence, and are now considered heterodox approaches. Jairus Banaji (2007), "Islam, the Mediterranean and the rise of capitalism". since, “No Rules Rules: Netflix and the Culture of Reinvention”. The most visible work was in the area of applying fractals to market analysis, particularly arbitrage (see Complexity economics). With scarcity, choosing one alternative implies forgoing another alternative—the opportunity cost. Heterodox approaches often embody criticisms of perceived "mainstream" approaches. Vol.23. The Historical school was involved in the Methodenstreit ("strife over method") with the Austrian School, whose orientation was more theoretical and a prioristic. It is fast, free and secure. (J.W. However, advocates of a more fundamental critique of neoclassical economics formed a school of Post-Keynesian economics. And none of these schools can claim superiority over others and still less monopoly over truth. Post-Keynesian economics is an alternative school—one of the successors to the Keynesian tradition with a focus on macroeconomics. Then, subscribe to the ME-P. Austrian) and defunct (e.g Mercantilism) schools of thought in Economics.I think an ideal answer would include: the defining features of the school, which enable it to differentiate it with the rest.. major authors/writers behind the school.. key papers/books/textbooks presenting/defending the ideas of the school of thought. The subject thus defined involves the study of choice, as affected by incentives and resources. As a result, Marxian economics is usually considered part of the Classical School tradition. The study of risk was influential, in viewing variations in price over time as more important than actual price. This school has seen a revived interest in development and understanding since the later part of the 20th century. Previous question Next question Transcribed Image Text from this Question. 
, Disputes within mainstream macroeconomics tend to be characterised by disagreement over the convincingness of individual empirical claims (such as the predictive power of a specific model) and in this respect differ from the more fundamental conflicts over methodology that characterised previous periods (like those between Monetarists and Neo-Keynesians), in which economists of differing schools would disagree on whether a given work was even a legitimate contribution to the field.. They saw economics as resulting from careful empirical and historical analysis instead of from logic and mathematics. Modern economic thought emerged in the 17th and 18th centuries as the western world began its The Historical school held that history was the key source of knowledge about human actions and economic matters, since economics was culture-specific, and hence not generalizable over space and time. Mainstream economics is distinguished in general economics from heterodox approaches and schools within economics. Goethe, 1817, Principles of Natural Science ) These were advocated by well-defined groups of academics that became widely known: In the late 20th century, areas of study that produced change in economic thinking were: risk-based (rather than price-based models), imperfect economic actors, and treating economics as a biological science (based on evolutionary norms rather than abstract exchange). Classical economics focuses on the tendency of markets to move to equilibrium and on objective theories of value. These forays into economic thought contribute to the modern understanding, ranging from ancient Greek conceptions of the role of the household and its choices to mercantilism and its emphasis on the hoarding of precious metals. I hope you enjoy the new HET website. Account active For instance: Other viewpoints on economic issues from outside mainstream economics include dependency theory and world systems theory in the study of international relations. The latter combines neuroscience, economics, and psychology to study how we make choices. As a result, the Classical school is sometimes also called the "Ricardian" or "British" school. The development of Keynesian economics was a substantial challenge to the dominant neoclassical school of economics. The School voraciously defended free trade and laissez-faire capitalism. It rediscovers aspects of classical political economy. The Physiocrats were 18th century French economists who emphasized the importance of productive work, and particularly agriculture, to an economy's wealth. Systematic economic theory has been developed mainly since the beginning of what is termed the modern era. banking).. By clicking ‘Sign up’, you agree to receive marketing emails from Business Insider Most members of the school were also Kathedersozialisten, i.e. The Historical school largely controlled appointments to Chairs of Economics in German universities, as many of the advisors of Friedrich Althoff, head of the university department in the Prussian Ministry of Education 1882-1907, had studied under members of the School. Yet the Historical school forms the basis—both in theory and in practice—of the social market economy, for many decades the dominant economic paradigm in most countries of continental Europe. This made the French School a forerunner of the modern Austrian School. Marxian economics also descends from classical theory. 
From the man who bought you "the shortest economic textbook in the world"; and "13 things Economists won't tell you", here is Ha-Joon Chang's ultimate pocket guide to the differences (and similarities) between all the economic schools of thought. This school of thought emphasizes the achievements of the so-called “Middle Ages” in Christian economic order, with its system of cooperative oaths, interdependence, and agrarian leisure. These are usually made to be endogenous features of these models, rather than simply assumed as in older style Keynesian ones (see New-Keynesian economics). Since the second half of the twentieth century economic theory, unable to explain economic reality, has been moved increasingly away from … feminist economics criticizes the valuation of labor and argues female labor is systemically undervalued; green economics criticizes instances of externalized and intangible ecosystems and argues for them to be brought into the tangible, post-keynesian economics disagrees with the notion of the long-term neutrality of demand, arguing that there is no natural tendency for a competitive market economy to reach. Anders Chydenius (1729–1803) was the leading classical liberal of Nordic history. Politically, most mainstream economists hold views ranging from laissez-faire to modern liberalism. Modern mainstream economics has foundations in neoclassical economics, which began to develop in the late 19th century. Economists believe that incentives and costs play a pervasive role in shaping decision making. Another infant branch of economics was neuroeconomics. The word "economics" is derived from οικονομίκος (oikonomikos, which means ‗skilled in household management‘). New-Keynesian economics is the other school associated with developments in the Keynesian fashion. Fractional-reserve banking is disallowed as a form of breach of trust. Keynesian views entered the mainstream as a result of the neoclassical synthesis developed by John Hicks. It uses models of economic growth for analyzing long-run variables affecting national income. Like Keynes, they were inspired by the works of Knut Wicksell, a Swedish economist active in the early years of the twentieth century. Ray Spier (2002), "The history of the peer-review process". New institutional economics is a perspective that attempts to extend economics by focusing on the social and legal norms and rules (which are institutions) that underlie economic activity and with analysis beyond earlier institutional economics and neoclassical economics. The American Economic Review. "Institutional Economics: Then and Now,", Learn how and when to remove this template message, Shams al-Mo'ali Abol-hasan Ghaboos ibn Wushmgir, http://www.columbia.edu/~mw2230/Convergence_AEJ.pdf, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2535453, "Retrospectives: What Did the Ancient Greeks Mean by Oikonomia ? The School rejected the universal validity of economic theorems. It became the dominant school of thought in the 19th C., particularly in Britain. The School preferred historical, political, and social studies to self-referential mathematical modelling. The criticisms of Distributism assert that Solidarism is simply a veiled form of Socialism. - Only MONETARY policies should be used to stimulate the economy. Schools of Economic Thought. Sign up for a daily selection of our best stories — based on your reading preferences. concerned with social reform and improved conditions for the common man during a period of heavy industrialization. 
New economic thinking, they reminded me, begins by remembering past economic thinking. The origins can be traced back to the Caliphate, where an early market economy and some of the earliest forms of merchant capitalism took root between the 8th–12th centuries, which some refer to as "Islamic capitalism".. The definition of scarcity is that available resources are insufficient to satisfy all wants and needs; if there is no scarcity and no alternative uses of available resources, then there is no economic problem. Modern macro- and microeconomics are young sciences. It is often referred to by its critics as Orthodox Economics. Monetarism, school of economic thought that maintains that the money supply (the total amount of money in an economy, in the form of coin, currency, and bank deposits) is the chief determinant on the demand side of short-run economic activity. Its original focus lay in Thorstein Veblen's instinct-oriented dichotomy between technology on the one side and the "ceremonial" sphere of society on the other. ... between all the economic schools of thought. Absent scarcity and alternative uses of available resources, there is no economic problem. Economic Schools of Thoughts. That is, economics deals with tradeoffs. It would be great to have a brief but comprehensive list of current (e.g. Some more recent developments in economic thought such as feminist economics and ecological economics adapt and critique mainstream approaches with an emphasis on particular issues rather than developing as independent schools. Classicals: The very first school of Economicn thought in 18th Century under Adam Smith and David Ricardo. STUDY. Notable schools or trends of thought in economics in the 20th century were as follows. These researchers tend to share with other Neoclassical economists the emphasis on models based on micro foundations and optimizing behavior, but focus more narrowly on standard Keynesian themes such as price and wage rigidity. Classical economics, also called classical political economy, was the original form of mainstream economics of the 18th and 19th centuries. use of paper currency also stands out. But many in the past have thought on topics ranging from value to production relations. Their early support of free trade and deregulation influenced Adam Smith and the classical economists. Another distinguishing feature is prohibition of interest in the form of excess charged while trading in money. One of the earliest and perhaps most prominent contributions to the study of macroeconomics was introduced in the 18th century by Adam Smith and subsequently expanded by David Ricardo and Robert Malthus. Institutional economics focuses on understanding the role of the evolutionary process and the role of institutions in shaping economic behaviour. Despite what the experts want you to believe, there is more than one way of ‘doing’ economics. Timur Kuran (2005), "The Absence of the Corporation in Islamic Law: Origins and Persistence". Samir Amin (1978), "The Arab Nation: Some Conclusions and Problems", Walton H. Hamilton (1919). Islamic economics seeks to enforce Islamic regulations not only on personal issues, but to implement broader economic goals and policies of an Islamic society, based on uplifting the deprived masses. | Competition is evil, market is bad. Scarcity means that available resources are insufficient to satisfy all wants and needs. 1. 
Friedrich List, one of the most famous proponents of the economic system, named it the National System, and was the main impetus behind the development of the German Zollverein and the economic policies of Germany under Chancellor Otto Von Bismarck beginning in 1879. Islamic economics is the practice of economics in accordance with Islamic law. Heterodox economics is any economic thought or theory that contrasts with orthodox schools of economic thought, or that may be beyond neoclassical economics. While economists do not always fit into particular schools, particularly in modern times, classifying economists into schools of thought is common. These include institutional, evolutionary, feminist, social, post-Keynesian (not to be confused with New Keynesian), ecological, Georgist, Austrian, Marxian, socialist and anarchist economics, among others. It refers to a loosely organized group of Swedish economists that worked together, in Stockholm, Sweden primarily in the 1930s. This school revered the inductive process and called for the merging of historical fact with those of the present period. Other longstanding heterodox schools of economic thought include Austrian economics and Marxian economics. The rise of Keynesianism, and its incorporation into mainstream economics, reduced the appeal of heterodox schools. In this way a state would be seen as "generous" because it was not a heavy burden on its citizens. Economic theory is really just a set of beliefs concerning individual and group behavior. Keynesian economics was founded by economist John Maynard Keynes. It employs game theory for modeling market or non-market behavior. In Chart Review Form *** *** *** Conclusion Your thoughts and comments on this ME-P are appreciated.
Pine Mountain Settlement School
Series 12: LAND USE
THE ROAD [Laden Trail]
DANCING IN THE CABBAGE PATCH XIV – LADEN TRAIL or THE ROAD
TAGS: Laden Trail or The Road; prospectus; map; economic advantages of the Road; hauling goods over the mountain; cable incline; Harlan County’s financial contribution; high cost of road-building in Appalachia; benefit to School’s endowment and scholarship fund; fundraising for the Road; Little Shepherd Trail; Kentucky Good Roads Association Plan; biodiversity; Ethel de Long Zande; Katherine Pettit; Celia Cathcart; Evelyn K. Wells
LADEN TRAIL, or “THE ROAD,” is a historical record of the campaign for, and the building of, a paved road over Pine Mountain connecting Pine Mountain Settlement School to the Laden railroad station near Putney, some eight miles away on the far side of the mountain. Negotiations for the building of the Road began in 1914, approximately a year after Pine Mountain Settlement School was founded.
The close timing of the Road and the School’s beginnings was not coincidental. As construction of the School progressed, it became obvious that the steep Laden Trail — truly only a trail — over the mountain was inadequate for hauling needed supplies by wagon. By 1920 the founders of the School had a plan, had called in consultants, and had begun a fundraising campaign for a road. This page features a map and their argument for “The Road” in a Prospectus that was written to inspire donations.
LADEN TRAIL or THE ROAD: Gallery
Prospectus: Road on Laden Trail across Pine Mountain. [delong_road_prosp_002.jpg]
Prospectus Map: Road on Laden Trail across Pine Mountain. [delong_road_pro_001.jpg]
LADEN TRAIL or THE ROAD: Transcription of the Prospectus
“Here, then is Appalachia: one of the great landlocked areas of the globe, more English than Britain itself, more American by blood than any other part of America, encompassed by a high-tension civilization, yet less affected by modern ideas, less cognizant of modern progress, than any other part of the English-speaking world.” — Horace Kephart
Will you help us build a road over our mountain, from Appalachia to modern America? The road will bring 5,000 people in touch with the railroad. It will be a wheeled-vehicle outlet for a great section of three mountain counties, which now have the most indirect communication with the world, either by costly, roundabout roads, or by tiny trails that even a sure-footed mule cannot haul a cart over.
It will mean economic improvement for the whole section — a market for apples and other surplus products — therefore, improved living conditions, larger houses, more knowledge of sanitation, a decrease in moonshining.
It will mean that the Pine Mountain Settlement School pays twenty-five cents a hundred pounds to haul goods from the railroad, instead of seventy cents. In 1916 the School paid to bring in from the L. and N. Railroad its groceries, building materials, heating apparatus, etc. $1000 of this, scholarships enough for eight children, could have been spent directly for their education, if there had been a road.
If we do not build the road, we will soon be at even greater expense in bringing in supplies. At present, we haul goods over the mountain on a broken-down cable incline, some eight miles away, built years ago to take poplar logs to the railroad. The cable sometimes breaks, the rails are rotten, and the incline is already dangerous. At the foot of the mountain, the goods must be reloaded onto wagons, and hauled eight miles over a road which is often impassable.
It is better economy for us to stop right now, and work for a road, than to go on spending money wastefully, year after year. It is also a more constructive policy for our neighborhood. Money spent now, in a lump, for the road, means improvement along many lines. Spent in smaller sums, year by year, it is frittered away, and accomplishes no solid good.
Harlan County cannot build this road now, because it is spending all it will have for some years on fifty or sixty miles of road in the heart of the county. Remember that road-building is in its infancy in Appalachia — that the hills and the creeks make a mile of good road a costly thing.
Our six miles of road are the costly link in a network of highways that will in time bind together three county seats, and give free communication to many square miles of mountains. For the passion for road-building has come to Eastern Kentucky. But a mountain a mile high, with seventy-five foot cliffs to blast through is a huge obstacle whose removal will hasten tenfold the opening up of communication, through the expenditure of County funds.
Harlan County has given five thousand dollars for this road,— a princely gift, — and the first large sum ever appropriated for the benefit of outlying districts. The county will also return $25,000 from its share of state road funds in annual installments of $1200 if we turn over to it the funds for road-building this summer.
The game is worth the candle, for the $25,000 will accomplish three purposes:
1. It will build the road immediately.
2. It will save the School money yearly, and thereby add to our scholarship fund.
3. It will become endowment for the School as the annual installments are returned by the County.
Such results are worth a huge effort. A great constructive undertaking brings its own inspiration with it. $3500 has already been given for this project. For this $21,000 still needed we must find:
1. Givers of $500.
2. Givers of $100.
3. Friends who will organize groups to give $100.
4. Promoters, who will talk for the road, suggest possible givers, make appointments for Miss [Ethel] de Long, keep faith alive.
There must, and will be a road across Pine Mountain. How many feet of it will you help us build?
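The prospectus arithmetic above can be checked with a short sketch. This is an editorial illustration, not part of the historical document: the dollar figures are the prospectus’s own, while the implied freight tonnage and the per-scholarship cost are inferences from them. (Note, too, that $25,000 less $3,500 is $21,500; the prospectus evidently rounds to $21,000.)

```python
# Checking the figures quoted in the prospectus above.
# Dollar figures come from the prospectus; the implied tonnage and the
# per-scholarship cost are inferences, not numbers the prospectus states.

OLD_RATE = 0.70  # dollars per hundred pounds, hauled without the road
NEW_RATE = 0.25  # dollars per hundred pounds, with the proposed road
saving_per_cwt = OLD_RATE - NEW_RATE  # $0.45 saved on every hundredweight

avoidable_1916 = 1000.0  # dollars the prospectus says a road would have saved in 1916
implied_cwt = avoidable_1916 / saving_per_cwt
print(f"Implied 1916 freight: {implied_cwt:,.0f} cwt (~{implied_cwt * 100 / 2000:,.0f} tons)")

per_scholarship = avoidable_1916 / 8  # "scholarships enough for eight children"
print(f"Implied cost of one scholarship: ${per_scholarship:,.0f}")

target, in_hand = 25_000, 3_500  # campaign goal and gifts already received
print(f"Still to raise: ${target - in_hand:,}")  # $21,500
```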
LADEN TRAIL or THE ROAD: History
What follows is a historical overview of the building of “The Road,” which was finally completed in 1940. The narrative concludes with a summary of the lessons that were learned and the changes the Road brought to the area and the School.
Laden Trail road, car and forested hillside. c. early 1940s [nace_II_album_086.jpg]
From Katherine Pettit regarding progress on the Laden Trail, c. 1920:
You’d be interested in the preliminary report Mr. Obenchain [State Engineer] has just gotten out. On this side of Pine Mountain, there is a rise of one foot in every 1.34 feet (less than 45 degrees). The distance through the mountain is 1-7/8 miles, but we shall need almost 12 miles of road at $6,000 a mile, with an ascent of five feet in every hundred feet. Some undertaking!
This note from Katherine Pettit to the board in the early 1920s was a preliminary assessment of conditions for a road across Pine Mountain to the School. It was among the first steps in the difficult task of bringing a road from the south side of Pine Mountain to the north side of the mountain or, more specifically, from the railroad station at Laden to the settlement school at Pine Mountain.
Geography is often confounding. In the eastern mountains of Kentucky, this is especially true. From the earliest records of exploration of the “Dark and Bloody Ground” of Kentucky to the present day, mountains have been a barrier, friend, wealth, an obstacle to progress, an insulator of culture, and just plain hard to negotiate. The early accounts of travel in the region speak to the tangle of laurel thickets, the sharp ridges and the undulating crests, the short distance but the long journey. Horses and oxen fared little better than their passengers and their loads teetered on slippery saddles or slippery slopes. On many mountain paths supplies slipped, people tumbled down hillsides, roots caught up the unwary and weather made all mountain travel unpredictable, dangerous and costly.
Katherine Pettit was wary of the rapidly developing industrial age, but she was practical and knew that if the School was to thrive it must have a viable transportation corridor for the exchange of goods, people, mail, and new ideas. Rail had already been laid along the Cumberland River on the opposite side of the mountain to carry off the coal and timber of the land, and this exploitative rush on the Southern Appalachians could not be stemmed.
Pettit and de Long believed that, while there were many reasons to join the industrial age, the process must be a partnership entered into with good skills and good sense, not one of exploitation. The isolation of the deep hollows and the mountain-locked valley would, in their view, eventually leave the region poor, exploited, and unhealthy. Roads, they believed, were part of the necessary infrastructure of the Progressive movement, and they aimed to see to it that the school at Pine Mountain and the people of the long valley on the north side had this vital conduit to progress.
The undulating escarpment of Pine Mountain, with its gentle dip slope and its sharp scarp slope, is beautiful to view, but it yields few locations where roads can pass through natural gaps. The entire length of the Pine Mountain range, running northeast to southwest for roughly 100 miles, is evidence of a thrust fault of major proportions. The settlement school sits at the foot of the steep side, or scarp slope, of the mountain.
LADEN TRAIL or THE ROAD: Biodiversity of Pine Mountain
A hike through the heavily forested area reveals the richness of the flora and fauna of the mountain. The Kentucky State Nature Preserves System has noted that there are more than 250 occurrences of 94 rare species of flora and fauna native to the region. Each year Pine Mountain School leads a walk through this wondrous mountain area that commemorates the early work of Emma Lucy Braun, a leading ecologist who stayed at Pine Mountain in 1916 while she conducted research on the region’s “mixed mesophytic” forest, a term she coined. Her early work found the region to be the source of most of the woody species that appear throughout much of the Southern Appalachians: a mesophytic forest holds some eighty different species, as opposed to three or four in most other common forests. [Library of Congress: “Tending the Commons: Folklife and Landscape in Southern West Virginia: Cove Typography”, web resource]. Each passing year reveals increasing risk to those 250 occurrences of 94 rare species on the mountains surrounding Pine Mountain Settlement School.
‘Rebel’s Rock’ in early Spring, along the Laden Trail road, 2016. [P1120108.jpg]
It is through this wonderland of vegetation and long views that the workers and students walked to get to the School from the rail station at Laden (later Putney). As travelers came down the mountain from Incline (appropriately named) they could look northeast down the long valley and see West Wind, the large white building that sits prominently on the hillside facing out from the campus. Many said that if they could see the building, they knew they were close to the school. But, it was still a long walk.
At $6,000 a mile and twelve long miles, the $72,000 road project was vastly beyond the fiscal grasp of the School, but not beyond a cooperative venture with the county and the state of Kentucky. Pine Mountain became the voice of the cooperative project, and the long battle with bureaucracy and funding that stretched over many years is well documented in the School records and archive. An important player in the construction was the Kentucky Good Roads Association.
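As an editorial aside, the figures in Mr. Obenchain’s preliminary report quoted earlier and the per-mile estimate here check out; the sketch below does the arithmetic. The slope angle is computed rather than stated in the source, which says only “less than 45 degrees.”

```python
import math

# Checking Mr. Obenchain's preliminary figures as quoted in the text.
rise_over_run = 1 / 1.34  # one foot of rise in every 1.34 feet on the scarp face
angle = math.degrees(math.atan(rise_over_run))
print(f"Scarp-face slope: {angle:.1f} degrees")  # ~36.7, indeed "less than 45"

miles, cost_per_mile = 12, 6_000  # "almost 12 miles of road at $6,000 a mile"
print(f"Estimated project cost: ${miles * cost_per_mile:,}")  # the $72,000 in the text

grade = 5 / 100  # "an ascent of five feet in every hundred feet"
print(f"Design grade of the road: {grade:.0%}")
```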
KENTUCKY GOOD ROADS ASSOCIATION PLAN
In 1912 the Kentucky Good Roads Association came into being in order to promote better methods of road construction and maintenance and to improve the laws under which work on the roads was to be executed. Out of these early discussions, and over the course of about ten years, the non-partisan Association advocated for the issuing of a $50,000,000 bond, to be expended over a period of no less than five years, in order to complete the state mandate of 1920 to construct a state-wide system. This system would connect every county seat with hard-surfaced roads. However, the Legislature of 1922 failed to submit the bond referendum to the people in a timely manner, and in 1923 the Good Roads Association decided to push for the submission of the referendum in the 1924 Legislative session.
The Eastern Kentucky branch of the Kentucky Good Roads Association was formed in 1923 and joined with the Central Kentucky Good Roads Association. The two began a campaign for the adoption and passage of the referendum. Their motto, supplied by member Tom Wallace, was “United, we move forward; divided, we stick in the mud.” This was a reference to the taxation for road repair that the people called the “mud tax,” the “mud” being a reminder of the poor condition of many roads in the state.
Various counties appointed district chairmen to represent them. Ominously, Harlan County had no representatives at the time the Kentucky Good Roads Association published its platform, which was to be taken to the State convention of the Association in Lexington on July 19 and 20 of 1924. Wisely, representatives for Harlan County were later chosen for the State convention. Pine Mountain was a voice in moving the Good Roads Association platform forward.
The plan of the 1923 campaign was to pursue three objectives:
- Distribute literature and news matter in order to show the people of Kentucky what it would cost to build and maintain a completed system of hard surface roads.
- Form in every County an active organization to carry out the aims of the Association.
- Through solicitation of memberships, collect funds to defray the expenses of the campaign from every resident, taxpayer, person, firm or corporation having a legitimate interest in the construction of a hard road system in Kentucky.
The common practice of operating in a patchwork manner, in which over 54 different centers of construction tried to coordinate jobs and plans, was not working, and it was clear that the old patchy system needed replacement. Another challenge was the “Sinking Fund,” the provision mandated to pay off the bonds; the same funding was also to be used to maintain the roads after construction. It was a sinking proposition. The ultimate outcome was a proposal that would require some $2,830,000 a year to retire the bonds in the timeframe mandated by the State agreement. With state revenues to offset the pay-back, the total approximate maintenance budget could be kept at $1,100,000, a figure that some felt to be beyond reach.
Broadside for the Pine Mountain Laden Trail Road project. [roads_004.jpg]
Katherine Pettit and others like her believed that many of the deficits claimed by the nay-sayers could be recovered by increased revenues to the counties through improved roads and increases in the motor traffic of the region. The assessment of motor vehicle owners, it was believed, could further offset the principal and interest of the 30-year bond. The thirty-year cost would stand at around $85,729,721, which includes the bond principal of $50,000,000 and an interest rate of 4.5%. A further justification pointed to the vital importance of roads to agriculture in the state and to the increased opportunity for transporting industrial materials.
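The quoted thirty-year total can be roughly reconstructed. The sketch below is an editorial check, not part of the source: it assumes the bonds were retired in equal annual principal installments, a schedule the source does not give, so it lands near, but not exactly on, the quoted $85,729,721.

```python
# Approximate check of the 30-year, 4.5% bond total quoted in the text,
# assuming equal annual retirement of principal (an assumption; the
# actual schedule behind the $85,729,721 figure is not given).

principal, rate, years = 50_000_000, 0.045, 30
annual_retirement = principal / years  # about $1.67M of principal per year

balance, total_interest = principal, 0.0
for _ in range(years):
    total_interest += balance * rate  # coupon on bonds still outstanding
    balance -= annual_retirement

print(f"Total interest: ${total_interest:,.0f}")  # ~$34.9M under this schedule
print(f"Principal plus interest: ${principal + total_interest:,.0f}")
# ~$84.9M, close to the quoted $85,729,721
```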
The Kentucky Good Roads Association plan was a good one, and it had the full endorsement of the Pine Mountain Settlement School administration and staff, particularly Director Ethel de Long and Katherine Pettit. Both saw the state referendum as an opportunity for the School to raise sufficient money to qualify for extension and improvement of the trail into a full and useful road from the Putney rail-head to the School. Though the results of their efforts were not immediately evident, the trail would never have become a road had they not pushed for a corridor to transport goods and services into and out of the Pine Mountain valley. The trajectory of their effort was a long one, as seen in the chronology below.
LADEN TRAIL or THE ROAD: A Chronology of Pine Mountain Settlement School Road History
A one-page document compiling the history of the “Road Over Pine Mountain” was drafted by an anonymous author. It captures the long course of events associated with the creation of Laden Trail Road by the School.
The School asked Harlan County Fiscal Court to appropriate money for a good road over Pine Mountain. Early estimates placed the cost at $10,000; in June the fiscal court appropriated half that sum, and the School started out to raise the other half.
Miss de Long made trips to Harlan and found the cost would be $50,000 instead of $10,000. [The] School was to raise half, and the state would give dollar for dollar. But [the] School also had to raise the second half as a loan to the state, which promised to pay it back in annual installments of $1200.
Miss Celia Cathcart went to work to raise the first half in Ohio, Illinois, Indiana and Kentucky. Miss de Long, from the School, raised the second half, which was to be the loan to the state.
Uncle William Creech went to Louisville to speak for the Road and made many friends for the School.
1918 – JANUARY
Surveying began after preliminary work done in 1915-1916.
1919 – May
Work began, but prices were so high by this time that the Road would cost $100,000. All the $50,000 raised by the School had to be used, and none of it would come back. In 1922 the funds gave out, and the Road had been graded only to the top of Pine Mountain [on the South side]. There was no money for further work on this [North] side.
Mrs. [de Long] Zande went to Frankfort to lobby and succeeded in getting a bill through which made the Road a link in the chain of State primary Highways between county seats, thus ensuring that eventually Kentucky would have to finish the Road. The School could do no more.
The hauling road down this [North] side of the mountain was built by neighbors and county labor about 1924.
In 1929 a sum of $50,000 was appropriated by the Harlan County Fiscal Court for the completion of the Road, and resurfacing of what had already been graded, but this sum was not available in the end.
In 1931-32 the poor grade was improved a bit by the emergency relief workers. Aaron Creech was paid by the School to survey the new grade and was boarded at E.M. Nolen’s house free. Some work was commenced but was given up when the money ran out again.
In the spring of 1934 the CCC [Civilian Conservation Corps] workers began work on the new survey made by Aaron Creech. Right of way was given by all the owners except Otto Nolan, who was paid money by neighbors and the School.
The Road was completed by CCC labor.
By 1937 the road situation in eastern Kentucky had improved and the number of paved roads can be seen in this hand-drawn map prepared by an unknown individual at Pine Mountain Settlement School. As can be seen in the map, the only high school that does not have access by a paved road is Pine Mountain Settlement School. While getting a road across the mountain was successful, access continued to be difficult for students on the north side of Pine Mountain. The “S” in green at the center top of the map is Laden Trail road as it leads into PMSS.
1937 Road Map of Harlan County, hand-drawn by anonymous individual and showing location of high schools in the county and their access by paved road in 1937. [roads_005.jpg]
Work on the Road continued for many years even after its energetic beginning. Evelyn Wells, the Editor[?], writing for the Pine Mountain NOTES in November of 1920, speaks lyrically of the progress on the Road:
The progress of the Pine Mountain Road has been without haste and without rest.
Six years ago we had a Dream of a Road.
Next year we hope to have The Road.
There is about its history a slow rhythm suggestive of classic Roman roads, which should augur well for its quality as a finished road. It started at the railroad, sauntering along so easily that one would never know it was climbing, stopping now and again at a refreshing spring or stream, or just to give one a chance to look at Big Black Mountain’s wonderful mass. It struck a little hill and had to gather up its young strength to eat its way through with a steam shovel that chewed out four hundred cubic yards of rock every day for weeks. Then it swept around the hill joyously and easily until it came to cliffs — a genuine jumping-off place, full of old bears’ dens. Here it halted many weeks while the air drills and steam shovels moved tons of rock to make a huge fill. And now the road continues its climbing unwearied, below the Rebel Rocks, through deep, still thickets of rhododendron, and across pure streams, viewing always the mountain across the valley. We stand at the point where the old trail crosses the road, and wonder if future visitors, coming across all the way on its beautiful, easy grade, will ever believe that once we all, two-footed and four-footed alike, scrambled up the twenty-five percent grade trail!
The other day at dusk, seven men started up the road on their mules, one behind the other, quite as if it were not wide enough for them to go three or four abreast. The Editor called out, “What makes you go up endwise still? Why don’t you ride together until you have to take the trail?” “We got so used to it we couldn’t help it” came the answer, and the Editor read again the poem of Mr. W. A. Bradley on the “Men of Harlan.”
“For, in that far, strange country, where the
men of Harlan dwell,
There are no roads at all, like ours, as we’ve
heard travelers tell.
But only narrow trails that wind along each
Where the silence hangs so heavy you can hear
the leathers squeak,
And there no two can ride abreast, but each
alone must go,
Picking his way as best he may, with careful
steps and slow.”
Frances Lavender Album. I don’t know these girls but the rear horse is Bobby. — This is such a good view of our roads.
It was always a topic of conversation with workers and students as seen in this exchange in the student newsletter THE PINE CONE, October 1937, which captures the ambivalence of older staff to change and progress:
Signs of progress are the highways of travel
Asphalt, cement, sand and gravel;
All play a part in this building plan,
Making easy the tours of man;
Girding the earth like ribbons of gray
Stretch in untold miles the broad highway.
Mistaken the one who the above lines wrote,
The following facts are worthy of note
Pine Mountain, Kentucky, clings to the past
Old customs, old ballads, she holds these fast.
Highways of travel — roads did you say?
“No such animal” comes this way!
Trails, paths, a creek bed for a road
Rough and rocky — a light weight becomes a load;
Mud and slush, mire and hill —
Traveling her gives one a thrill!
No easy sailing over a road like this;
End of journey is peaceful bliss.
Companions of travel along these bogs
Are countless razor backs and other kinds of hogs;
In spite of the primitive way of it all,
There’s something about it seems to call
To the soul of living for a bigger life,
Away from the modern rush and strife.
So here’s to Pine Mountain, her roads and her ways,
May the blessings of peace be hers always;
If progress and growth be her birthright
Grant these come with education’s light;
Roads — highways of hope!
These, too, perhaps in her horoscope.
AND NOW [The Editor (?) writes:]
The above poem, which was written by Miss McDavid, a housemother at Old Log in 1930, brings to the mind of a worker who has been away from Pine Mountain for seven years the great contrast between then and now. Change has taken place and it seems to be simultaneous with the building of the road. No longer is there a blind clinging to the past merely for the sake of tradition. The best of the past has been retained and many new things have been added. Old restrictions are gone. The freedom at Pine Mountain is an amazing thing, but more than that, the new responsibilities on the shoulders of each student are a sign of a large forward step. Roads are a symbol of civilization.
The ever-present question is in what direction will Pine Mountain go from now on? Has the road brought each student a new highway of hope?
LADEN TRAIL or THE ROAD: What About Today?
Today, in 2016, the School is encircled by paved roads and even roads that perhaps should have remained dirt thoroughfares, such as Little Shepherd Trail on the crest of Pine Mountain above the School. That one-lane scenic road was paved, in part, to keep it from washing out and, in part, as a response to the success of the paving of other scenic mountain roads, such as the Blue Ridge Parkway. However, the sections that are now paved will need to compete with foot traffic if a proposed Pine Mountain hiking corridor reaches fruition. Obviously, transportation corridors change and the changes reflect the changing times.
The lessons of "The Road" were many before it found its way across the mountain. The negotiation, cooperation, double-dealing, graft, and community support all brought an education to a new generation entering the industrial age. The Road made the trip to the School easier for the many visitors who made the journey. It enabled the Cooperative Store to function during the Boarding School year, and it kept the growing school supplied through the difficult years of World War II.
To put this simple unpaved road in perspective, the Appalachian Scenic Highway, later the Blue Ridge Parkway, was begun in 1935. The Parkway was a 469.1-mile road that stretched across two states and took 50 years to complete. The final Linn Cove segment of the road was not completed until 1987!
The Road on Laden Trail, six miles of arduous negotiation and labor, was finally completed in 1940. It was only in the late 1970s that the Road became simply a scenic route across the mountain, as the completion of Highway 421 across Pine Mountain at Bledsoe made that route the preferred conduit for goods and people across the steep Pine Mountain ridge.
Today, Laden Trail is not a designated scenic parkway, but it holds a special place in the mind of the community and continues to offer the beauty of the forests of Pine Mountain and the long views of both the Black Mountain range on the south side and the peaceful Pine Mountain valley on the north side. It offers access to the Little Shepherd Trail, a popular narrow and scenic road that intersects Laden Trail near its mid-point. The Little Shepherd Trail, which runs along the crest of Pine Mountain, has become a long classroom for the many environmental programs that Pine Mountain Settlement School offers to the public.
Further, while the main transportation routes in the area are now paved, they unfortunately remain some of the most dangerous roads in Kentucky and have some of the highest maintenance requirements. Worn by many years of logging and coal truck traffic, the narrow roads of eastern Kentucky can be heart-stopping at times, and also confusingly unmarked and intricate.
Roads come with their benefits and their deficits, but it is certain that road construction leading into the Pine Mountain valley, one of the most remote of Harlan County’s areas, would not have happened for many years were it not for a passel of women working hard to pave the way.
LADEN TRAIL or THE ROAD
LETTERS OF CORRESPONDENCE RELATED TO “THE ROAD”
LADEN TRAIL or THE ROAD CORRESPONDENCE Part I
LADEN TRAIL or THE ROAD CORRESPONDENCE Part II
CELIA CATHCART CORRESPONDENCE
LADEN TRAIL PHOTO GALLERY
LITTLE SHEPHERD TRAIL
LADEN TRAIL VIDEO (1980s) – Paul Hayes
Sorsogon (Bikol: Probinsya kan Sorsogon; Waray: Probinsya san Sorsogon; Tagalog: Lalawigan ng Sorsogon) is a province in the Philippines located in the Bicol Region. It is the southernmost province in Luzon and is subdivided into fourteen municipalities (towns) and one city. Its capital is Sorsogon City (formerly the towns of Sorsogon and Bacon), and the province borders Albay to the north.
Sorsogon is at the tip of the Bicol Peninsula and faces the island of Samar to the southeast across the San Bernardino Strait and Ticao Island to the southwest. Sorsogueños is what the people of Sorsogon call themselves.
In 1570 two Augustinian friars, Alonzo Jiménez and Juan Orta, accompanied by a certain captain, Enrique de Guzmán, reached Hibalong, a small fishing village near the mouth of the Ginangra River, where they planted the cross and erected the first chapel in Luzon. It was from this village that Ibalong, referring to the whole region, came to be. Moving inland in a northwesterly direction, they passed through the territory now known as Pilar before reaching Camalig, Albay. The establishment of the Abucay-Catamlangan Mission later was ample proof of this. The early towns established here were: Gibalon in 1570 (now a sitio of Magallanes); Casiguran – 1600; Bulusan – 1631; Pilar – 1635; Donsol – 1668; Bacon – 1764; Gubat, Sorsogon, Juban and Matnog – 1800; Bulan – 1801; Castilla – 1827; Magallanes – 1860; Sorsogon – 1866; and Irosin – 1880. The province was eventually separated from Albay on October 17, 1894 and adopted the name Sorsogon. The town of Sorsogon was also selected as its capital.
At the 1935 Philippine Constitutional Convention, Sorsogon had its own delegates: Adolfo Grafilo, Francisco Arellano, José S. Reyes, and Mario Gaurino.
Sorsogon covers a total area of 2,119.01 square kilometres (818.15 sq mi) occupying the southeastern tip of the Bicol Peninsula in Luzon. The province is bordered on the north by Albay, east by the Philippine Sea, south by the San Bernardino Strait, and west and northwest by the Ticao and Burias Passes. The Sorsogon Bay lies within the central portion of the province.
The province has an irregular topography. Except for landlocked Irosin, all the towns lie along the coast. They are all connected by concrete and asphalt roads. Mountains sprawl over the northeast, southeast and west portions. Mount Bulusan, the tallest peak, rises 1,560 metres (5,120 ft) above sea level.
Except for its overland link with the province of Albay to the north, it is completely surrounded by water. Sorsogon is the gateway of Luzon to the Visayas and Mindanao through its Roll-on/Roll-off ferry terminal facilities located in the municipalities of Matnog, Pilar and Bulan.
The population of Sorsogon in the 2015 census was 792,949 people, with a density of 370 inhabitants per square kilometre or 960 inhabitants per square mile.
The five most populous towns are Sorsogon City (168,110), Bulan (100,076), Pilar (74,564), Gubat (59,534), and Castilla (57,827). The least populated municipality since the 2000 census has been Santa Magdalena.
Of the 704,024 household population in 2007, males accounted for 51.1% while females comprised 48.9%.
The voting-age population of the province was 369,204 in 2007, equivalent to 52.1 percent of the household population.
The Bicolano language predominates in Sorsogon. However, people in the southern part, such as in Gubat, speak a Waray language. English and Filipino are the official languages used in education and various forms of communication. Bicolano as used in this province has some peculiarities: what is known as "Bikol Naga" is used in written communications and is generally understood as a spoken language.
However, there are Bikol languages peculiar to certain specific places. For example, people in Bacon, Prieto Diaz and Magallanes speak the Albay Bikol variant. In Sorsogon City, Casiguran and Juban, Bicolano is slightly different, as some of the terms used are similar to Hiligaynon, which is mainly spoken in Western Visayas and southwestern Masbate.
Barcelona, Gubat, Bulusan, Matnog, Irosin and Santa Magdalena speak a dialect which uses terms and tones similar to the Waray-Waray of Eastern Visayas (especially that of Northern Samar), called the Gubat language. The people of Pilar and Donsol speak a dialect similar but not identical to "Miraya Bicol," the dialect spoken by the nearby towns of Camalig and Daraga in Albay province. The Castilla dialect is the same as that of Daraga.
Sorsogon Ayta Language
In 2010, UNESCO released the third edition of its atlas of the world's endangered languages, which listed three critically endangered languages in the Philippines. One of these is the Southern Ayta (Sorsogon Ayta) language, which had an estimated 150 speakers in the year 2000. The language was classified as Critically Endangered, meaning the youngest speakers are grandparents and older, they speak the language partially and infrequently, and they hardly pass the language to their children and grandchildren anymore. If the remaining 150 people do not pass their native language to the next generation of Sorsogon Ayta people, their indigenous language will be extinct within one to two decades.
The Sorsogon Ayta people live only in the municipality of Prieto Diaz, Sorsogon. They are among the original Negrito settlers of the Philippines. They belong to the Aeta people classification, but have a distinct language and belief systems unique to their own culture and heritage.
Sorsogon is predominantly a Catholic province. Spanish conquistadores gave Sorsogon its first encounter with Christianity. This was in the year 1569, when Fray Alonzo Jimenez, OSA, chaplain of the expedition under Luis Enriquez de Guzman, celebrated the first Mass upon landing on the coast of sitio Gibal-ong (or Gibalon), barangay Siuton, in the town of Magallanes. Christianity, however, was formally established in Sorsogon with the planting of the Cross on the shores of Casiguran town in 1600 by the Franciscan friars. This was a prelude to the erection of the first church building dedicated to the Holy Rosary, still revered at present as the Patroness of Casiguran. From there, the Franciscan missionaries devotedly spread the faith to the other towns of Bacon (1617), Bulusan (1630) and Donsol (1668). The other twelve towns followed suit in the course of time. In the original geographic division, the province of Sorsogon formed part of Albay province; it seceded as a separate province on October 17, 1894. Catholicism is followed by 93% of the population of Sorsogon.
The Diocese of Sorsogon was originally part of the Archdiocese of Nueva Caceres. When it was made a separate diocese on June 29, 1951, it included the territory of Masbate. When the Diocese of Nueva Caceres was elevated into an archdiocese in the same year, Legazpi and Sorsogon were made suffragan dioceses of Nueva Caceres. On March 23, 1968, Masbate was made into a separate diocese. At present the Diocese of Sorsogon covers simply the civil province of Sorsogon and the City of Sorsogon.
Superstitions and local legends and beliefs
Prior to colonization, the region had a complex religious system which involved various deities. These deities include:
- Gugurang, the supreme god, who dwells inside Mount Mayon, where he guards and protects the sacred fire that his brother Asuang was trying to steal. Whenever people disobeyed his orders and wishes or committed numerous sins, he would cause Mount Mayon to burst with lava as a sign of warning for people to mend their crooked ways. Ancient Bikolanos had a rite performed for him called Atang.
- Asuang, the evil god who always tries to steal the sacred fire of Mount Mayon from his brother, Gugurang. Addressed sometimes as Aswang (the local version of witches and monsters), he dwells mainly inside Mount Malinao. As an evil god, he would cause the people to suffer misfortunes and commit sins. He is the enemy of Gugurang and a friend of Bulan, the god of the moon.
- Haliya, the masked goddess of the moonlight, the arch-enemy of Bakunawa, and the protector of Bulan. Her cult was composed primarily of women, and a ritual dance named after her was performed as a counter-measure against Bakunawa.
- Bulan, the god of the pale moon, depicted as a pubescent boy of uncommon comeliness whose presence tamed savage beasts and the vicious mermaids (Magindara). He has deep affection towards Magindang, but plays with him by running away so that Magindang can never catch him, for he is shy toward the man he loves. If Magindang manages to catch Bulan, Haliya always comes to free him from Magindang's grip.
- Magindang, the god of the sea and all its creatures. He has deep affection for the lunar god Bulan and pursues him despite never catching him; the Bicolanos reasoned that this is why the waves rise to reach the moon when seen from the distant horizon. Whenever he does catch up to Bulan, Haliya comes to rescue Bulan and free him immediately.
- Okot, the god of the forest and of hunting.
- Bakunawa, a gigantic sea serpent deity, often considered the cause of eclipses and the devourer of the sun and the moon, and an adversary of Haliya, as Bakunawa's main aim is to swallow Bulan, whom Haliya swore to protect for all of eternity.
The province's economic activity is highly concentrated in its capital city, Sorsogon City, and the towns of Bulan, Irosin, Gubat, Pilar and Matnog as well. Sorsogon Province is classified as 2nd class with an average annual income of ₱339.4M (C.Ys. 2000-2003). This is about ₱11M short for the province to attain 1st class reclassification which requires at least ₱350M average annual income.
The province made a large contribution to the 97-percent growth in investments for the first quarter of 2008 and to the increasing tourism arrivals that buoyed the Bicol Region economy, despite the damage brought about by incessant rains and a rice shortage. This is according to the Quarterly Regional Economic Situationer (QRES) released by the National Economic and Development Authority (NEDA) Regional Office in Bicol (NRO 5).
Among the provinces, Sorsogon posted the highest growth in investments (293%) from the previous year. Next to Sorsogon is Catanduanes, which posted a growth of 280%. Albay contributed 39 percent of the region's investments and posted a growth of 221% from the preceding quarter.
“For the third time, Bicol Region hosted the kick-off of Asia's premier extreme sailing event, the Philippine Hobie Challenge last February 16 at Gubat, Sorsogon. This 260-mile journey from Gubat-Sambuyan-Bacsal-Marambut-Suluan to Siargao enticed both local and foreign water sports enthusiasts. It opened the opportunity for the municipality of Gubat to showcase the town's best,” the QRES stated.
Ranked by main source of income, 40% of families in the province derived their incomes from entrepreneurial activities, 33% from salaries and wages, and 27% from other sources such as rental income, interest, and overseas Filipino remittances.
The Pan-Philippine Highway (N1/AH26) is the backbone highway network; secondary and tertiary roads interconnect most cities and municipalities, passing through Pilar, Castilla, Sorsogon City, Casiguran, Juban and Irosin before ending at the ferry terminal in Matnog.
In order to spur development in the province, the Toll Regulatory Board declared Toll Road 5 the extension of the South Luzon Expressway: a 420-kilometer, four-lane expressway running from the terminal point of the under-construction SLEX Toll Road 4 at Barangay Mayao, Lucena City in Quezon to Matnog, Sorsogon, near the Matnog Ferry Terminal. On August 25, 2020, San Miguel Corporation announced that it would invest in the project, which will reduce travel time from Lucena to Matnog from 9 hours to 5.5 hours.
The Matnog Ferry Terminal provides access to the island of Samar at Allen, Northern Samar.
Sorsogon belongs to the Type 2 climate based on the Climate Map of the Philippines by the Philippine Atmospheric, Geophysical and Astronomical Services Administration (PAGASA). As a Type 2 climate, Sorsogon has no dry season, with pronounced rainfall from November to January.
The province of Sorsogon normally gets 5 to 10 typhoons every year.
The most notable typhoon came in 1987, when Sorsogon was devastated by Super Typhoon Nina, locally named Sisang. It was a major disaster for the Province of Sorsogon: damage to property ran to millions of pesos, and about 200 people were killed. Sisang is said to be the strongest typhoon ever to hit the province, especially its capital, Sorsogon City. According to PAGASA, Typhoon Nina struck with winds of 180 kilometres per hour (50 m/s) and gusts of 200 km/h (56 m/s). Thousands of houses and business establishments were destroyed by the calamity. The typhoon hit Sorsogon soil at around 7:00 pm and lasted until dawn of the next day. It also caused massive storm surges, particularly around the Sorsogon Bay area, which contributed to the many fatalities.
Typhoon Xangsane (Milenyo) also battered the province in September 2006 with torrential rains and strong winds, causing massive flooding and damage to infrastructure and agriculture. Damage to the entire province was initially placed at ₱2.23 billion, of which ₱1.27 billion was accounted for by damaged houses. Agriculture suffered damage worth ₱234.21 million; school facilities, ₱51 million; and infrastructure, ₱208 million.
Most of the inhabitants of the province belong to the ethnolinguistic Bicolano and Bisakol groups. Sorsogueños are religious, being mostly Roman Catholic, and are active in festivities celebrated throughout the year. Each town honors its Patron Saint with a celebration on the saint's Feast Day. In Sorsogon City, the locals celebrate the Fiesta of the Patron Saints Peter and Paul every June 28–29. Another featured attraction during town fiestas is the traveling carnival set up near the town center. In Gubat, the feast of Gubat is celebrated on June 13.
- Kasanggayahan Festival — celebrated throughout the province in the last week of October, commemorates the founding of Sorsogon as a province. Festivities include a series of cultural, historical, religious, agro-industrial and economic activities, showcasing the province's abundant agricultural products, particularly food and decorative items. One of the main activities and the highlight of the festival is the Pantomina sa Tinampo, a cultural-ethnic street dance native to the province. Hundreds of men and women participate, clad in colorful traditional Filipino couture, dancing barefoot as they parade around the city.
- Pili Festival — in Sorsogon City, honors the pili nut and tree, which is indigenous to the province. The festival coincides with the town fiesta of Sorsogon City. Celebrations include street dancing by locals donning pili nut costumes, cooking competitions, fireworks displays, a color run, and even a nut-cracking session along the road by the locals.
- Parau Festival – Pilar, Sorsogon celebrates the Parau Festival every October. The festival coincides with the town fiesta of Pilar. Events include an Inter-High School Sportsfest, a DLC Competition, the Parau Street Dancing Competition, a Color Run, and Palarong Bayan.
- Ginubat Festival – from Gubat, Sorsogon, a festival based on the roots of the town from which its name was derived. It features the following activities: a cultural street parade, an exhibit, a sailboat race, a beauty pageant, the fiesta celebration, and Balik Gubat, which is the highlight of the festival.
Minorities include Muslim immigrants from Mindanao, who engage in street vending and small shop businesses. A mosque is situated in Sitio Bolangan on the outskirts of the city. A small but significant Chinese population owns hardware stores and commodity shops and dwells in the business center. Indian communities are also present and are largely Hindu; they are typically known to engage in money-lending businesses, colloquially called "five-six".
Sorsogon is subdivided into 2 Congressional Districts. The 1st Congressional District comprises the City of Sorsogon and towns of Pilar, Donsol, and Castilla. The Sorsogon Provincial Capitol is located in the City of Sorsogon.
By John D. Gresham
Looking back on the age of fighting sail, a common image is that of battles between huge ships of the line, led by such famous admirals as Nelson and Collingwood. But such massive confrontations were rare, and much of Napoleonic era seapower actually was dull and mundane patrolling, cruising, and blockade duty off of enemy ports. I say “much,” because if there was an exception to the usual boredom of naval life, it was to be found aboard the greyhounds of sail-based naval presence, frigates. If ever there were glamorous ships in the dangerous world of wooden warships, they were the frigate sailing ships of the world’s navies.
Frigates were the ships to be on, if adventure, action, and a sense of glory were your idea of navy life in the age of sail. But not all frigates in the world's navies were so pleasant to serve aboard. Clearly that honor went to those of the Royal Navy, which reached the zenith of its power during the Napoleonic Wars, from 1793 to 1815. Frigates were the true measure of British seapower, holding the line in peace and leading the fleet in war. Aboard the frigates of the Royal Navy were found the finest officers in the service and men who frequently sought duty aboard, not the "press" (forced recruiting) gangs of the era. Frigate captains were the equivalent of modern-day rock stars to the public, respected for their daring and achievements, sought out for their acquired prize wealth and influence. These were the greatest sailors of their time, wielding the most flexible weapons system of the age.
From Fighting Galleons to Frigate Sailing Ships
The sailing vessels that came to be called frigates had their origins in the fighting galleons of the 16th century. By the middle of the 17th century, they had begun to mature, developing the long, narrow lines that would be their trademark. Originally, these were the largest ships (called 4th Rate) not considered fit to stand in a line of battle, but still carrying at least 38 large-bore weapons on their gun decks. While they lacked the firepower, crew, or structure to slug it out in battle with the 1st (90 guns or more with three decks), 2nd (80 to 89 guns with three decks), or 3rd Rates (54 to 79 guns with two decks), they had the rigging (three masts) and speed (over 12 knots in a good wind) to run from such vessels.
On the other hand, 4th Rates could easily maneuver with and outgun the smaller 5th (with 18 to 37 guns, and often oars) and 6th (6 to 17 guns used for courier duty) Rate warships, along with the merchant vessels that were the reason for having a navy in the first place. These early frigates wound up being used for a variety of important tasks, ranging from scouting and reconnaissance ahead of the battle fleet to protecting convoys and commerce raiding on the high seas. Very quickly, navies everywhere began to see the value of frigates, both for their relatively low cost and ease of manning compared with ships of the line. Basically, frigates were fighting ships that could outrun anything that could hurt them and outgun anything that could catch them.
Over the next century, various countries evolved their own interpretations of the basic form of the frigate, each adding unique design ideas and touches. Some of these included the British trend of adding more sails and guns, the Dutch habit of making their vessels light with a shallow draft, and the garish French tradition of festooning their ships with ornate Baroque decorations, along with bow and stern guns. Along the way, the ratings of warships evolved as well, culminating in the formal classification system used by the British Admiralty during the Napoleonic era.
Within that rating system, frigates sat solidly in the middle of the warship spectrum, continuing their tradition of being able to outrun more heavily armed vessels and defeat ships of similar speed and maneuverability. They also had another advantage, one rarely mentioned in the swashbuckling stories of the time: frigates were often kept in service in both peace and war, while the majority of larger ships were normally "laid up" as a reserve fleet, activated only at the outbreak of hostilities. While this was done as a cost-saving measure, it had very important and positive effects on the fighting qualities of the frigates.
Because frigates were normally kept in commission even during peacetime, their captains and crews tended to be among the best a particular navy could offer. These captains were usually the most aggressive in the service, chosen for their ability to think and act in both war and peace.
The Finest Navy in the World (Because it Had to Be)
Although many nations had frigate sailing ships in their fleets, those of the Royal Navy set the standard for excellence during the Napoleonic Wars. In fact, the previous century had seen England become a worldwide superpower, a status enabled by the British fleet. What set its vessels apart was as much necessitated by geographic reality as the quality of their construction and crews. Then as now, Britain was an island nation, dependent upon freedom of the sea lanes to maintain the most basic of sustenance for its people. This fact had come into being during the time of Queen Elizabeth I and Sir Francis Drake, and has been the cornerstone of British military strategy for over 500 years. In short, the Royal Navy was the finest navy in the world because it had to be. Continental powers of the Napoleonic era like France and Spain might desire maritime trade, but for Britain it was literally the breath of life.
For the Royal Navy, deploying the finest sailing frigates of the era began with a solid design, one that was easy to build and economical with regard to critical strategic materials like hardwood and metals. This was essential, because almost everything that went into English warships except the crews and guns had to be imported over the very sea lanes they would eventually protect. All the same, these frigates had to be capable of slugging it out with enemy ships of similar size and firepower and sustaining voyages lasting months without repair or resupply. This meant balancing displacement, armament, crew size, and dozens of other factors into a specification that could be translated into the wooden walls of a warship. The Admiralty Board of the Royal Navy, headed by the First Sea Lord, accomplished that job.
In the 1790s, the job went to Lord Spencer, who, with input from deployed commanders and his fellow board members, laid down the basic parameters for the various rated ships that would be constructed in the coming years. The board fell back on conservative, proven designs, which meant that Royal Navy frigates (5th Rates) would be based upon the 1780-vintage Perseverance class, designed by Sir Edward Hunt. Displacing approximately 870 to 900 tons and rated as "36s" (the number of guns nominally carried), these were by all accounts fine sailing ships, with all the qualities desired by the Admiralty for frigate operations. Starting in 1801, several repeat batches (a total of 15 ships) based upon the Perseverance, known as the Tribune class, were constructed. They were followed by even larger batches of improved frigates, driven by the ever-increasing demands of the threat from the Continental alliance under Napoleon. These were gradually to grow in size, eventually displacing over 1,000 tons and carrying up to 38 guns. Eventually, there were even classes of so-called "super" frigates, rated as 44s, as a direct response to the six powerful "big" frigates built by the United States of America.
“If I Were to Die This Moment, Want of Frigates Would be Found Engraved on my Heart”
The British also made extensive use of frigate sailing ships captured from enemy navies, the French units being especially popular for their fine inshore handling qualities. Nevertheless, frigates never made up more than a quarter of the Royal Navy's strength (412 ships in 1793, over 700 in 1815), causing the incomparable Lord Nelson to write, "If I were to die this moment, want of frigates would be found engraved on my heart." There never were enough of these priceless ships to go around, something that made more than one deployed squadron commander nervous and apprehensive. Given the need to escort convoys, conduct raids along enemy coastlines, attack hostile shipping, or scout for the battle fleet, frigates were always in short supply.
One of the oddities about British frigates of this era is that while they were superb engines of war when manned and afloat, the quality of their construction was uneven. Although an excellent elm keel and oak ribs were part of every wooden Royal Navy warship, the materials and workmanship on the rest of the ship’s structure could be very marginal. Shortages of hardwood and other materials meant that lighter materials like pine were sometimes substituted. Several of the Tribune-class vessels built in Bombay were actually constructed of teakwood, though uneven quality and excessive weight offset the excellent durability of that material. As now, private contractors skimped on workmanship and materials, while Admiralty yards struggled to get the necessary output from civil service workers. Nevertheless, the frigates got built and headed for the fitting-out docks to receive their armament, supplies, and crews before sailing. It was during this “working-up” period that British frigate sailing ships began to develop the fighting qualities that made them so feared by opponents.
Although the workmanship of the construction yards might be questionable at times, the same could not be said of what the Admiralty did to make these ships ready for battle. Generally, the procurement officers of the Royal Navy did a good job of buying the tools and fuel of war, including the miles of rope (called “cordage”), acres of sail, shot, powder, guns, or the tons of hardtack bread (known derisively as “biscuits” by the crews) and salted pork and beef that made life possible afloat. While life onboard was always hard and dangerous, the British stood alone in making sure that certain minimum standards of sustenance and livability were the rights of every “Jack Tar.” Not that these were exactly up to luxury standards. Every crewman was assigned 18 inches of width to sleep in his hammock while off watch, and given a “tot” of grog (a 50/50 mix of rum and water) every day, entitlement from a grateful king and country. Water and food were strictly rationed at sea, because it might be months between visits to port for resupply. Royal Navy captains were held strictly accountable for the health and welfare of crewmen, and the memories of several particularly bad mutinies in the 1700s helped to enforce the policy.
The role of any warship is to put its weapons onto an enemy, and the sailing frigates of the Royal Navy had an impressive array for their displacement. For example, the ubiquitous Euryalus-class 36s (around 950 tons displacement) actually carried a nominal total of 42 weapons into battle. These included 26 18-pounder guns on the upper deck, 12 32-pounder carronades on the quarterdeck, and two 9-pounder cannon and a pair of 32-pounder carronades on the forecastle. Along with this main battery, every frigate had a detachment of Royal Marines, making the ship capable of conducting a variety of missions ashore. The crew could also be armed with a variety of hand weapons (pistols, cutlasses, muskets, etc.) for boarding and other off-ship operations.
A Good Commander Could Make a War Ship Legendary
If there was a real strength to the Royal Navy in general and the frigates in particular, it was the quality of their captains. Although a bad commander could make a warship almost ineffective with cruelty and excessive punishment, a good one could make the same ship legendary. Unlike most navies, Britain built its officer corps from the bottom, starting most as teenaged midshipmen apprenticed to a particular ship’s captain. From the start the midshipmen would undergo a rigorous series of lessons and examinations, leading them up through qualification to the lieutenant rank. Eventually, these officers would be considered for command of a minor 6th Rate vessel such as a sloop or armed merchantman. Only after a decade or more of honorable and effective service would an officer even be considered for frigate or higher command.
There was a strict system of seniority within the Royal Navy, which had a huge influence on promotions and appointments. Nevertheless, this seniority system was offset by the ability of senior admirals and overseas squadron commanders to make appointments for command through patronage or influence. This combination of policies had the positive effect of offsetting each other and helping create the finest pool of commanding officers (known as “Post Captains”) of the period. It was the best and brightest of these that the Royal Navy gave duty as frigate captains.
Most of the British frigate captains were young, often in their late 20s and early 30s, fully a decade younger than their contemporaries on ships of the line. There were good reasons for this beyond the simple energy and stamina of youth. Frigate captains spent most of their time away from the battle fleet or home bases, ranging across the globe to serve the interests and objectives of the Royal Navy. This independence of command was something such officers needed to embrace and celebrate if they were to meet the expectations of their king and nation. They also required aggressiveness and intelligence, with an eye for seeing opportunity and taking calculated risks.
Despite this, there was also a need for British frigate commanders to avoid recklessness and indelicacy. One day might see an English frigate captain conducting himself at a diplomatic reception with some small and obscure monarch, and the next raiding the harbors of a neighboring state. Clearly this meant that such officers required a strong sense of situational awareness and judgment in matters ranging from politics and protocol to tactics and maritime law. It was, to say the least, a unique balance of personality traits that made for a successful frigate captain. Add to this the need for leadership and management skills to man and operate his ship, along with the seamanship to sail and fight the vessel, and you get some idea of the qualities of such men.
Pellew Made a Habit of Daring Rescues
As might be imagined, British frigate skippers were towering figures who rose to the top of the naval service in peace and war. Two of the best-known captains of this era were Admirals Sir Edward Pellew and Sir George Cockburn. Born in 1757 and joining the Royal Navy in 1770, Pellew is best known in America for his depiction in the famous Horatio Hornblower novels of C.S. Forester. In real life, he was even more charismatic and colorful than in fictional accounts of his deeds. Made commander for a successful attack on a French frigate after the death of his captain, Pellew made Post Captain after doing battle with three enemy privateers in 1782. He saw action in both the American Revolutionary War and the Napoleonic Wars, where he eventually succeeded Nelson, Collingwood, and Cotton as Commander-in-Chief of the Mediterranean Fleet from 1811 to 1814. Along the way, he commanded the Nymph (with which he captured the French 40 La Cleopatre in 1793) and Indefatigable from which he headed the inshore frigate squadron off Brest from 1794 to 1797. Along the way he developed a habit of daring rescues, including the entire complement of the wrecked East Indian freighter Dutton. Pellew eventually rose to sit on the Admiralty Board at the end of his career, a wealthy man from his many prizes, victories, awards, and patronage.
Slightly more infamous to Americans was George Cockburn, who led the British squadron that attacked the mid-Atlantic region in 1814. Born in 1772, he joined the Royal Navy and caught the eye of Horatio Nelson, who rewarded him with early promotion and command of the captured French frigate La Minerve. Backed by an able young lieutenant named George Hardy (who would later command the flagship Victory at Trafalgar in 1805), he became Nelson’s most trusted frigate commander. Famous for his scouting missions, some of which he did from inside enemy fleet formations, he carried Nelson and his sightings to Admiral Jervis in time to precipitate the Battle of Cape St. Vincent in 1797. Made an admiral, he eventually became somewhat notorious for commanding the squadron that raided the Chesapeake Bay in 1814, personally led the Royal Marines that burned the White House, and commanded the bombardment of Fort McHenry in Baltimore Harbor. A year later, he was trusted with the delicate task of delivering Napoleon Bonaparte into exile on St. Helena. For his dedicated service, Cockburn was made Admiral of the Fleet and retired in 1846 after serving as First Naval Lord.
With men like this to recruit, train, and lead British frigate crews, it is easy to see how they acquired such a "bogeyman" reputation among their opponents. This elite status, even among their ship-of-the-line peers, gave Royal Navy frigate captains the confidence and esprit to attempt all manner of missions. Almost anything, from amphibious landings and "cutting out" expeditions (capturing enemy vessels in sheltered anchorages) to swashbuckling ship-to-ship actions against superior odds, became not only possible but also expected, by the Admiralty and England.
Impressive as the record of the Royal Navy's frigates had been, the end of the Napoleonic Wars and the coming of the War of 1812 sounded the end of their dominance. The loss of a number of British 36s to American "big" frigate sailing ships like the USS Constitution caused a near panic in the British press and public, prompting the Admiralty to authorize the conversion of three old 3rd Rate 74s into cut-down, two-decked, 58-gun 4th Rates called Rasées. These were followed by a handful of so-called "super" frigates with 40 (the Modified Endymion class) and 50 (the Leander and Newcastle classes) guns, respectively. However, the end of hostilities with France and America saw the end of large-scale sailing warship design and construction, along with the Age of Fighting Sail. Postwar economies after a generation of worldwide war and an odd little steam-powered vessel designed by American inventor Robert Fulton ensured that battles like Trafalgar would never occur again. Steam power, iron structures, and explosive shells would become the technologies that would drive warship design for the remainder of the 19th century and into the modern era.
Ironically, a century later, another warship pioneered by Robert Fulton, the submarine, would recapture the spirit and daring of the Royal Navy’s sailing frigates. Over two world wars and the 50-year Cold War, submarines became the most independent of naval commands, home to young, aggressive, and daring officers and men from around the world who wanted to do more than just be part of a battle line. Today, nations like the United States and Great Britain are designing their post-Cold War nuclear submarines to be much like those Royal Navy frigates of the Napoleonic era. Able to strike at sea and ashore with guided missiles, and special operations forces, these will be the new frigates for the 21st century, keeping faith with the spirit of Pellew, Cockburn, and all their sailors of that bygone era. | <urn:uuid:4cb789c4-a68a-43d1-ac88-ccd9b4db8c0f> | CC-MAIN-2021-21 | https://warfarehistorynetwork.com/2016/08/12/the-frigate-sailing-ship-pride-of-the-british-royal-navy/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991943.36/warc/CC-MAIN-20210513173321-20210513203321-00537.warc.gz | en | 0.980149 | 4,179 | 3.34375 | 3 |
When discussing music, songs can typically be classified into genres according to elements like instrumentation, vocals, or other musical features. However, deciding that a song is a protest song does not go through this same process. Protest music can be folk, rock, rap, or any number of different genres. If that is the case, then how do we actually identify what protest music is?
Our research project focused on identifying distinguishable linguistic features of American protest music and its lyrics. We know that in these songs, all types of genres can work as protest songs. This motivated us to turn towards the lyrics. We wished to answer the following question: can an American protest song be identified by linguistic elements found in the lyrics?
We approached this question by analysing three time periods in America:
- The Great Depression (1929 - 1939)
- The Vietnam Era (1960 - 1975)
- The Contemporary Era (1975 - Present)
We compared protest lyrics to non-protest lyrics from the same time periods. We tagged the lyrics syntactically, labelling noun phrases, verb phrases, and adverbs, and noting when a phrase was negated in certain instances. We looked at the various phrases to see if there were distinct differences in syntax between the protest songs from one time period and the non-protest songs from that same period. We used statistical analysis to determine whether these differences were, in fact, significant. In this way, we hoped to show that although protest music may not be identifiable by a single musical style, it can still be identified by properties found in the lyrics.
To gain further understanding of what each protest song is referring to in our project, please read about the different eras in America below.
At first, many people will try to define protest music in terms of its content: it seems that a lot of American protest songs reference a specific war or current event that upsets the artists at hand. However, using a definition of this sort has two potential errors. First, it can make an argument circular in nature. Saying that a song has to incorporate war or current events does not leave room for protest music to grow or evolve over time; any future songs that might still be protest songs, but do not have the appropriate content under this type of definition, will be improperly disregarded. Second, it can let in too many songs as proper candidates: an artist can write a song out of anger about a situation at hand without necessarily protesting anything.
Our group wanted to turn away from this method of explaining protest music. Therefore, we turned to the syntax of song lyrics, rather than its content. We wanted to see if this could reveal significant differences in how an artist sings when one is protesting versus when one isn't.
To do this, we looked at 10 of the most popular songs from each era that were not considered "protest" songs by outside sources. We also looked at 10 of the most popular songs from each era that have been labeled as protest songs by outside sources. We used XML markup and tagged the songs according to a syntactic hierarchy; these rules closely resembled what one would find in X-bar theory, with a few modifications. To start, we did not see reason to tag everything, such as conjunctions. We also decided not to tag every phrase, such as adjectival and adverbial phrases, but instead just tag the adjectives and the adverbs. Tagging linguistic data is extremely tedious and time-consuming, and because of this, we tagged the parts we wanted to study within the limited amount of time we had, rather than tag everything. If we had more time to continue with this project, we would continue to mark up each and every phrase and observe more and more components of the lyrics.
In summary, we decided to study nouns, noun phrases (while also tagging determiners), verbs, verb phrases, adjectives, adverbs, prepositions, and prepositional phrases.
One issue with this method comes into play when there are noun phrases that consist of just determiners. Whereas some theories make determiners the heads of such phrases, and thus call them determiner phrases, we decided to go against this theory and keep everything as noun phrases, even the phrases that did not have nouns in them. We did this for two reasons. First, we needed equal participation from all team members, and so we needed to keep the tagging manageable for members who did not have a linguistics background; there was a higher chance of error in identifying phrases by their determiners rather than by their nouns, since a non-linguist can identify a noun more reliably. Second, we wanted the project to be comprehensible for users of our site who are also non-linguists. We did not want confusion as to what a determiner phrase is or what it means; it is more intuitive to identify a noun in the lyrics and then see the overarching phrase of which it is a part.
For each line, we tagged a noun phrase and a verb phrase as the overarching child elements. We then tagged the noun phrases to allow nouns, determiners, adjectives, and adverbs. We tagged verb phrases to allow verb phrases, noun phrases, prepositional phrases, and all of their child elements. This is a result of our modification to X-bar theory: instead of labeling overarching verb phrases as V', or V-bar, we tagged them as verb phrases on top of verb phrases. This still preserves the hierarchy of the different verbs in each line without making it incomprehensible for non-linguist users and researchers. The same concept applies to our prepositional phrases; rather than incorporating P', or P-bar, we labeled each as a phrase. Prepositional phrases could have prepositional phrases inside them, along with verb phrases, noun phrases, adverbs, adjectives, and all of their child elements.
Therefore, we also eliminated the use of N', Adj', and Adv' on the same principle. Though we wanted to label the hierarchy of nouns, verbs, and phrases in each line, we wanted to do so in a way that did not make the data overwhelmingly complicated.
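As an illustration, a line such as "The miners want a fair wage" might be marked up as sketched below. The element names here (line, np, vp, det, n, v, adj) are stand-ins for our actual tag set, but the structure follows the rules just described: one overarching noun phrase and verb phrase per line, with the verb phrase nesting its own noun phrase.

<line>
    <np>
        <det>The</det>
        <n>miners</n>
    </np>
    <vp>
        <v>want</v>
        <np>
            <det>a</det>
            <adj>fair</adj>
            <n>wage</n>
        </np>
    </vp>
</line>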
We wanted to mark when negation words, such as the word "not", were found within certain phrases. We gave noun phrases and verb phrases the option to be tagged neg="true", signaling in the data that a word such as "not" was present.
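For example, a line like "We will not go quietly" might be sketched as follows, again with illustrative tag names. It shows both the stacked verb phrases that replace V-bar and the negation attribute; treating the negator "not" itself as an adverb is one reasonable placement under the scheme described above.

<line>
    <np>
        <n>We</n>
    </np>
    <vp neg="true">
        <v>will</v>
        <adv>not</adv>
        <vp>
            <v>go</v>
            <adv>quietly</adv>
        </vp>
    </vp>
</line>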
There are two final notes. The first is that participles were not marked as distinct parts of speech. Instead, the participles were grouped with the verbs with which they were associated and tagged under a single verb element.
The second is that refrain (chorus) sections were tagged only once. This reduces the number of repeating words and phrases in the song; we did not want our results to be skewed by one or two lines that repeated constantly throughout the duration of the song.
The Great Depression is noted as one of the deepest and longest-lasting economic downturns in the history of the world. The Great Depression began in the United States after the stock market crash in October of 1929. In 1933, during the lowest point in the Great Depression, 13 to 15 million Americans were unemployed. The economic plight of the Great Depression lasted until 1939.
African Americans were the hardest hit group of people during the Great Depression; half of all African Americans were unemployed in 1932. In some Northern cities, whites called for blacks to be fired from any jobs as long as there were whites out of work. Violence towards African Americans, such as lynchings, became more common again during the Great Depression. During this period, white supremacist groups such as the Ku Klux Klan were at the height of their power.
During this time, the Great Plains region suffered from a drought from 1934-1937, known today as the Dust Bowl. The extended drought and overuse of the land reduced the top layer of soil to dust. Heavy winds then blew the topsoil away in places like Oklahoma, Kansas, Colorado, New Mexico, and the Texas Panhandle, creating dust clouds called 'black blizzards.' Sixty percent of the people fled the region during this time as a result. The Dust Bowl was the most devastating drought in United States history. Artists, from authors to singers, expressed the need to be in harmony with nature, rather than try to dominate and exploit it.
Finally, the Harlan County War took place from 1931-1932. This was a series of coal mining-related skirmishes, executions, bombings, and strikes that took place in Harlan County, Kentucky, during the 1930s. During The Great Depression, miners were making as little as eighty cents per day. The workers went on strike for better wages and better working conditions.
The Great Deprssion presents a fascinating time in American protest music. In the early stages of the era, the American people were rife with discontent. People believed the American government had truly failed to protect them from the excess of capitalism. Everybody was struggling, but the poor struggled the most. Songs such as Brother Can You Spare a Dime scoffed at the notion of the American Dream, which had long been the beacon of hope for the poor. The era progressed in an interesting fashion as well. In 1932, Franklin Roosevelt was elected to the presidency. He brought the American people a breath of fresh air, as well as the New Deal. The New Deal created an economic and social safety net and created millions of jobs in order to put as many unemployed as possible back to work. A noted instition created by the New Deal was the Works Progress Administration (WPA). Noted names such as Woody Guthrie recieved funding from the WPA to produce music. One could perhaps say that during the Great Depression, the government funded the writing of protest music.
The Vietnam era truly represents a golden age of American protest music. Though it is termed in this project as the Vietnam era, it really consists of two major events. First is of course, the Vietnam War, which lasted for nearly 20 years, from 1955 to 1975. Second is the Civil Rights Movement. The Civil Rights movement has existed in some form throughout much of American history, but for this project, we will focus on the events that transpired in the 1960s. Thus, in terms of the scope of this project, the Vietnam era covers the years from 1960 to 1975. This era witnessed the composition of some of the most popular and famous protest music in history.
The turmoil of the era was an incubator for the outburst of protest music seen in this era, especially when compared with the relatively tranquil 1950s. The dialogue of the protesters in the early years of this era was generally mild. Songs like Bob Dylan's Blownin' in the Wind projected a profound message of pacifism that transcended a mere opposition to the war in Vietnam. As the decade progressed, the protesters became more and more tense. Meaningful change was either not occurring, or not occurring quickly enough. A sense of impatience set in around the country.
This impatience turned to shock and anger as this era drew into its latter years. Popular figures like Martin Luther King Jr. and Robert Kennedy were killed by the bullets of assassins. The Civil Rights movement had largely descended into violence. The Vietnam War had seemingly no end in sight. Protester clashed with police in Chicago outside the Democratic National Convention.
In addition to the thousands of American soldiers killed, the war had destroyed a president. The new president, Richard Nixon, promised to end the war in Vietnam, although his actions, such as the invasion of Cambodia further incited the protesters.
By the end of the decade, country was at the boiling point. Many of the protesters were college students, so many protests took place on college campuses. On May 4th, 1970, one of these protests took place at Kent State University in Ohio. The Ohio State National Guard was summoned onto campus once the protests on campus, and eventually fired on the protesters, killing four students and wounding nine. The Kent State massacre received a plethora of media attention, and photographs of that day were sent to newspapers worldwide. This event inspired Neil Young to write the song Ohio, which uses an angrier rhetoric than the songs that characterized the early era, going as far as accusing Nixon himself.
Protest music characterized the 1960s. Polarizing issues like Civil Rights and the Vietnam War made for popular song topics. The protest culture of the era centered primarily on the youth. No other group of youth in American history has been more politically active than the protesters of the 1960s. While their actions have been largely discredited in the ensuing years, they gifted upon us some of the most influential protest over written.
For this project, we defined the Modern Era as any time after 1975, which was the official end of the Vietnam War.
After the loss in Vietnam, veterans from the war were not treated with respect when they returned home. Unlike the World War II veterans, who were seen as heroes, the Vietnam Veterans were 'baby killers,' 'psychos,' 'drug addicts,' and 'war mongers.' Veterans were extremely mistreated; they were refused service at restaurants, and they were cursed at by anti-war Americans. Protesters would stand at the gates in airports protesting against the war as Vietnam veterans returned home. Bruce Springsteen depicts the Vietnam Vet in 'Born in the USA:' Springsteen explains that, 'He's isolated from the government, isolated from his family, to the point where nothing makes sense.'
On November 6th, 1990, the state of Arizona voted down to create a holiday for Dr. Martin Luther King, Jr. Two years prior, the governor was Evan Mecham, who cancelled Martin Luther King day saying, 'I guess King did a lot for the colored people, but I don't think he deserves a national holiday.' In 1991, Public Enemy produced 'By the Time I get to Arizona' as a response, and the message spread -- in part because it was aired on MTV. By 1993, Arizona lost its chance to host the Super Bowl. Arizona lost $350 million in revenue before reinstating MLK day in 1993.
On February 4th, 1999, Police officers in New York City fired 41 shots at an unarmed, West African immigrant who had no criminal record. The immigrant's name was Amadou Diallo, he was 22 years old, and he worked as a street peddler in the city. Bruce Springsteen wrote his song 'American Skin (41 shots)' about this police shooting.
In 2013, Springsteen dedicated his song to Trayvon Martin. Trayvon Martin was as 17-year old African American who was shot by George Zimmerman in 2013 while he was out running errands at a convenience store. Both the New York City police and George Zimmerman noticed the men in each case late at night and declared that they looked suspicious.
In 2008, musicians were protesting against the war in Iraq, which went from 2003 until 2011. The Iraq war was constantly justified by Washington as 'a preventative military action' against a country that could use 'weapons of mass destruction' (WMDs) against the United States. Prior to the attack, no WMDs were found in Iraq. Many people opposed this strategy. Former president Bill Clinton warned of the consequences of a preventative invasion, as such action may lead to unwelcome consequences in the future. At one point he said, 'I don't care how precise your bombs and your weapons are, when you set them off, innocent people will die.' Many theories went around as to why the United States really wanted to invade Iraq. Nelson Mandela, former president of South Africa, voiced his opinion of president George W. Bush; he believed that, 'All [Mr. Bush] wants is Iraqi oil.' Many believed that the Iraq War was putting Americans through unnecessary traumatic experiences via military involvement - Americans were risking their lives and killing what seemed to be innocent people for perhaps no good reason at all. Many compared parts of the Iraq war to the war in Vietnam, and many people protested against the war as a result. Between January 3rd and April 12th, 2003, 36 million people across the globe took part in almost 3,000 protests against war in Iraq, with demonstrations on 15 February 2003 as the 'largest and most prolific of them.'
As a result, protest music during this time expressed this viewpoint. In 2012, Tim McIlrath, the lead singer of the band Rise Against, sang his song Hero of War outside of the NATO summit in Chicago. His song depicted an Iraq War Veteran remembering his experiences in the military during this time. His song is meant to remind listeners of the perspective of the soldier. The United States does not simply send in weapons to shoot targets; Americans are getting severely injured and killed while injuring and killing other human beings in the name of their country.
The modern era, like every other era, wanted to change the way African Americans were treated. Americans protested against multiple issues they had with the United States government. One thing to note in this era more so than the others is that protesters highlighted not just one event in history, but many - pointing out the flaws in our actions across time. This typically happens when discussing methods that the United States used in order to dominate in wartime.
After completing the markup of the songs, we began exploring ways of analyzing the marked up text. Our objective was of course to determine whether or not there was a general linguistic difference between the group of protest songs, and the control group of non-protest songs. The question for us became, how does one quantify linguistic differences?
A principle that was immediately clear was that all froms of subjectivity had to be limited as much as possible. This ruled out all forms of "sentiment" analysis. We simply could not afford to do something like marking up ideas within the texts, because they could be open to interpretation. One person's interpretation of a song could be very different from anothers, which would discredit any data that we collected. Instead, we looked for something more concrete. Something that could be easily intrepreted in black or white, rather than shades of grey.
In our original markup of the texts, we marked up both verb phrases and noun phrases. A noun phrase can consist of many words but only truly requires a single noun or a single determiner, similar to a verb phrase, which only requires a single verb. However, more words in these phrase can add description. The use of desciption in these phrases colors the meaning of the phrases, invoking emotions in the audience that would not be possible without the descptive words. The primary desciptive word in a noun phrase is an adjective, while in turn, the primary descriptive word in a verb phrase is an adverb. We decided to simply test for the presence of these primary descriptive words. A further test we invoked was for the presence of negation in both verb phrases and noun phrases. Negation and primary desciptive words were coded as different variables. For example, if a noun phrase did feature negation but did not have an adjective, it was coded as a "1" in negation and a "0" for adjective. Similarly, if a verb phrase featured both negation and an adverb, it as coded as a "1" for both.
After compiling the data from the songs, we performed a probit analysis on the collected data in order to check to see if there was a statistical difference between the protest song data, and the nonprotest song data. We additionally made graphs for the data from each song, which helped to paint individual picture for each song.
This research project was completed by David Galloway, Leonidas Pashos, and Katie Uihlein. This project was performed at the University of Pittsburgh. This project was for a class called Computational Methods in the Humanities, taught by Dr. David Birnbaum. | <urn:uuid:b74db136-e162-4f03-8c40-707ecbac3456> | CC-MAIN-2021-21 | http://protest.obdurodon.org/about.php | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00576.warc.gz | en | 0.97406 | 4,109 | 2.90625 | 3 |
Chapter 20: Ecosystems and the Biosphere
By the end of this section, you will be able to:
- Describe the effects of abiotic factors on the composition of plant and animal communities in aquatic biomes
- Compare the characteristics of the ocean zones
- Summarize the characteristics of standing water and flowing water in freshwater biomes
Like terrestrial biomes, aquatic biomes are influenced by abiotic factors. In the case of aquatic biomes the abiotic factors include light, temperature, flow regime, and dissolved solids. The aquatic medium—water— has different physical and chemical properties than air. Even if the water in a pond or other body of water is perfectly clear (there are no suspended particles), water, on its own, absorbs light. As one descends deep enough into a body of water, eventually there will be a depth at which the sunlight cannot reach. While there are some abiotic and biotic factors in a terrestrial ecosystem that shade light (like fog, dust, or insect swarms), these are not usually permanent features of the environment. The importance of light in aquatic biomes is central to the communities of organisms found in both freshwater and marine ecosystems because it controls productivity through photosynthesis.
In addition to light, solar radiation warms bodies of water and many exhibit distinct layers of water at differing temperatures. The water temperature affects the organisms’ rates of growth and the amount of dissolved oxygen available for respiration.
The movement of water is also important in many aquatic biomes. In rivers, the organisms must obviously be adapted to the constant movement of the water around them, but even in larger bodies of water such as the oceans, regular currents and tides impact availability of nutrients, food resources, and the presence of the water itself.
Finally, all natural water contains dissolved solids, or salts. Fresh water contains low levels of such dissolved substances because the water is rapidly recycled through evaporation and precipitation. The oceans have a relatively constant high salt content. Aquatic habitats at the interface of marine and freshwater ecosystems have complex and variable salt environments that range between freshwater and marine levels. These are known as brackish water environments. Lakes located in closed drainage basins concentrate salt in their waters and can have extremely high salt content that only a few and highly specialized species are able to inhabit.
The ocean is a continuous body of salt water that is relatively uniform in chemical composition. It is a weak solution of mineral salts and decayed biological matter. Within the ocean, coral reefs are a second type of marine biome. Estuaries, coastal areas where salt water and fresh water mix, form a third unique marine biome.
The ocean is categorized by several zones ([Figure 2]). All of the ocean’s open water is referred to as the pelagic realm (or zone). The benthic realm (or zone) extends along the ocean bottom from the shoreline to the deepest parts of the ocean floor. From the surface to the bottom or the limit to which photosynthesis occurs is the photic zone (approximately 200 m or 650 ft). At depths greater than 200 m, light cannot penetrate; thus, this is referred to as the aphotic zone. The majority of the ocean is aphotic and lacks sufficient light for photosynthesis. The deepest part of the ocean, the Challenger Deep (in the Mariana Trench, located in the western Pacific Ocean), is about 11,000 m (about 6.8 mi) deep. To give some perspective on the depth of this trench, the ocean is, on average, 4267 m or 14,000 ft deep.
The physical diversity of the ocean has a significant influence on the diversity of organisms that live within it. The ocean is categorized into different zones based on how far light reaches into the water. Each zone has a distinct group of species adapted to the biotic and abiotic conditions particular to that zone.
The intertidal zone ([Figure 2]) is the oceanic region that is closest to land. With each tidal cycle, the intertidal zone alternates between being inundated with water and left high and dry. Generally, most people think of this portion of the ocean as a sandy beach. In some cases, the intertidal zone is indeed a sandy beach, but it can also be rocky, muddy, or dense with tangled roots in mangrove forests. The intertidal zone is an extremely variable environment because of tides. Organisms may be exposed to air at low tide and are underwater during high tide. Therefore, living things that thrive in the intertidal zone are often adapted to being dry for long periods of time. The shore of the intertidal zone is also repeatedly struck by waves and the organisms found there are adapted to withstand damage from the pounding action of the waves ([Figure 1]). The exoskeletons of shoreline crustaceans (such as the shore crab, Carcinus maenas) are tough and protect them from desiccation (drying out) and wave damage. Another consequence of the pounding waves is that few algae and plants establish themselves in constantly moving sand or mud.
The neritic zone ([Figure 2]) extends from the margin of the intertidal zone to depths of about 200 m (or 650 ft) at the edge of the continental shelf. When the water is relatively clear, photosynthesis can occur in the neritic zone. The water contains silt and is well-oxygenated, low in pressure, and stable in temperature. These factors all contribute to the neritic zone having the highest productivity and biodiversity of the ocean. Phytoplankton, including photosynthetic bacteria and larger species of algae, are responsible for the bulk of this primary productivity. Zooplankton, protists, small fishes, and shrimp feed on the producers and are the primary food source for most of the world’s fisheries. The majority of these fisheries exist within the neritic zone.
Beyond the neritic zone is the open ocean area known as the oceanic zone ([Figure 2]). Within the oceanic zone there is thermal stratification. Abundant phytoplankton and zooplankton support populations of fish and whales. Nutrients are scarce and this is a relatively less productive part of the marine biome. When photosynthetic organisms and the organisms that feed on them die, their bodies fall to the bottom of the ocean where they remain; the open ocean lacks a process for bringing the organic nutrients back up to the surface.
Beneath the pelagic zone is the benthic realm, the deepwater region beyond the continental shelf ([Figure 2]). The bottom of the benthic realm is comprised of sand, silt, and dead organisms. Temperature decreases as water depth increases. This is a nutrient-rich portion of the ocean because of the dead organisms that fall from the upper layers of the ocean. Because of this high level of nutrients, a diversity of fungi, sponges, sea anemones, marine worms, sea stars, fishes, and bacteria exists.
The deepest part of the ocean is the abyssal zone, which is at depths of 4000 m or greater. The abyssal zone ([Figure 2]) is very cold and has very high pressure, high oxygen content, and low nutrient content. There are a variety of invertebrates and fishes found in this zone, but the abyssal zone does not have photosynthetic organisms. Chemosynthetic bacteria use the hydrogen sulfide and other minerals emitted from deep hydrothermal vents. These chemosynthetic bacteria use the hydrogen sulfide as an energy source and serve as the base of the food chain found around the vents.
In which of the following regions would you expect to find photosynthetic organisms?
- The aphotic zone, the neritic zone, the oceanic zone, and the benthic realm.
- The photic zone, the intertidal zone, the neritic zone, and the oceanic zone.
- The photic zone, the abyssal zone, the neritic zone, and the oceanic zone.
- The pelagic realm, the aphotic zone, the neritic zone, and the oceanic zone.
[reveal-answer q=”471811″]Show Answer[/reveal-answer]
[hidden-answer a=”471811″]2. The photic zone, the intertidal zone, the neritic zone, and the oceanic zone.[/hidden-answer]
Coral reefs are ocean ridges formed by marine invertebrates living in warm shallow waters within the photic zone of the ocean. They are found within 30˚ north and south of the equator. The Great Barrier Reef is a well-known reef system located several miles off the northeastern coast of Australia. Other coral reefs are fringing islands, which are directly adjacent to land, or atolls, which are circular reefs surrounding a former island that is now underwater. The coral-forming colonies of organisms (members of phylum Cnidaria) secrete a calcium carbonate skeleton. These calcium-rich skeletons slowly accumulate, thus forming the underwater reef ([Figure 3]). Corals found in shallower waters (at a depth of approximately 60 m or about 200 ft) have a mutualistic relationship with photosynthetic unicellular protists. The relationship provides corals with the majority of the nutrition and the energy they require. The waters in which these corals live are nutritionally poor and, without this mutualism, it would not be possible for large corals to grow because there are few planktonic organisms for them to feed on. Some corals living in deeper and colder water do not have a mutualistic relationship with protists; these corals must obtain their energy exclusively by feeding on plankton using stinging cells on their tentacles.
Evolution in Action
Global Decline of Coral ReefsIt takes a long time to build a coral reef. The animals that create coral reefs do so over thousands of years, continuing to slowly deposit the calcium carbonate that forms their characteristic ocean homes. Bathed in warm tropical waters, the coral animals and their symbiotic protist partners evolved to survive at the upper limit of ocean water temperature.
Together, climate change and human activity pose dual threats to the long-term survival of the world’s coral reefs. The main cause of killing of coral reefs is warmer-than-usual surface water. As global warming raises ocean temperatures, coral reefs are suffering. The excessive warmth causes the coral organisms to expel their endosymbiotic, food-producing protists, resulting in a phenomenon known as bleaching. The colors of corals are a result of the particular protist endosymbiont, and when the protists leave, the corals lose their color and turn white, hence the term “bleaching.”
Rising levels of atmospheric carbon dioxide further threaten the corals in other ways; as carbon dioxide dissolves in ocean waters, it lowers pH, thus increasing ocean acidity. As acidity increases, it interferes with the calcification that normally occurs as coral animals build their calcium carbonate homes.
When a coral reef begins to die, species diversity plummets as animals lose food and shelter. Coral reefs are also economically important tourist destinations, so the decline of coral reefs poses a serious threat to coastal economies.
Human population growth has damaged corals in other ways, too. As human coastal populations increase, the runoff of sediment and agricultural chemicals has increased, causing some of the once-clear tropical waters to become cloudy. At the same time, overfishing of popular fish species has allowed the predator species that eat corals to go unchecked.
Although a rise in global temperatures of 1°C–2°C (a conservative scientific projection) in the coming decades may not seem large, it is very significant to this biome. When change occurs rapidly, species can become extinct before evolution leads to newly adapted species. Many scientists believe that global warming, with its rapid (in terms of evolutionary time) and inexorable increases in temperature, is tipping the balance beyond the point at which many of the world’s coral reefs can recover.
Estuaries: Where the Ocean Meets Fresh Water
Estuaries are biomes that occur where a river, a source of fresh water, meets the ocean. Therefore, both fresh water and salt water are found in the same vicinity; mixing results in a diluted (brackish) salt water. Estuaries form protected areas where many of the offspring of crustaceans, mollusks, and fish begin their lives. Salinity is an important factor that influences the organisms and the adaptations of the organisms found in estuaries. The salinity of estuaries varies and is based on the rate of flow of its freshwater sources. Once or twice a day, high tides bring salt water into the estuary. Low tides occurring at the same frequency reverse the current of salt water ([Figure 4]).
The daily mixing of fresh water and salt water is a physiological challenge for the plants and animals that inhabit estuaries. Many estuarine plant species are halophytes, plants that can tolerate salty conditions. Halophytic plants are adapted to deal with salt water spray and salt water on their roots. In some halophytes, filters in the roots remove the salt from the water that the plant absorbs. Animals, such as mussels and clams (phylum Mollusca), have developed behavioral adaptations that expend a lot of energy to function in this rapidly changing environment. When these animals are exposed to low salinity, they stop feeding, close their shells, and switch from aerobic respiration (in which they use gills) to anaerobic respiration (a process that does not require oxygen). When high tide returns to the estuary, the salinity and oxygen content of the water increases, and these animals open their shells, begin feeding, and return to aerobic respiration.
Freshwater biomes include lakes, ponds, and wetlands (standing water) as well as rivers and streams (flowing water). Humans rely on freshwater biomes to provide aquatic resources for drinking water, crop irrigation, sanitation, recreation, and industry. These various roles and human benefits are referred to as ecosystem services. Lakes and ponds are found in terrestrial landscapes and are therefore connected with abiotic and biotic factors influencing these terrestrial biomes.
Lakes and Ponds
Lakes and ponds can range in area from a few square meters to thousands of square kilometers. Temperature is an important abiotic factor affecting living things found in lakes and ponds. During the summer in temperate regions, thermal stratification of deep lakes occurs when the upper layer of water is warmed by the Sun and does not mix with deeper, cooler water. The process produces a sharp transition between the warm water above and cold water beneath. The two layers do not mix until cooling temperatures and winds break down the stratification and the water in the lake mixes from top to bottom. During the period of stratification, most of the productivity occurs in the warm, well-illuminated, upper layer, while dead organisms slowly rain down into the cold, dark layer below where decomposing bacteria and cold-adapted species such as lake trout exist. Like the ocean, lakes and ponds have a photic layer in which photosynthesis can occur. Phytoplankton (algae and cyanobacteria) are found here and provide the base of the food web of lakes and ponds. Zooplankton, such as rotifers and small crustaceans, consume these phytoplankton. At the bottom of lakes and ponds, bacteria in the aphotic zone break down dead organisms that sink to the bottom.
Nitrogen and particularly phosphorus are important limiting nutrients in lakes and ponds. Therefore, they are determining factors in the amount of phytoplankton growth in lakes and ponds. When there is a large input of nitrogen and phosphorus (e.g., from sewage and runoff from fertilized lawns and farms), the growth of algae skyrockets, resulting in a large accumulation of algae called an algal bloom. Algal blooms ([Figure 5]) can become so extensive that they reduce light penetration in water. As a result, the lake or pond becomes aphotic and photosynthetic plants cannot survive. When the algae die and decompose, severe oxygen depletion of the water occurs. Fishes and other organisms that require oxygen are then more likely to die.
Rivers and Streams
Rivers and the narrower streams that feed into the rivers are continuously moving bodies of water that carry water from the source or headwater to the mouth at a lake or ocean. The largest rivers include the Nile River in Africa, the Amazon River in South America, and the Mississippi River in North America ([Figure 6]).
Abiotic features of rivers and streams vary along the length of the river or stream. Streams begin at a point of origin referred to as source water. The source water is usually cold, low in nutrients, and clear. The channel (the width of the river or stream) is narrower here than at any other place along the length of the river or stream. Headwater streams are of necessity at a higher elevation than the mouth of the river and often originate in regions with steep grades leading to higher flow rates than lower elevation stretches of the river.
Faster-moving water and the short distance from its origin results in minimal silt levels in headwater streams; therefore, the water is clear. Photosynthesis here is mostly attributed to algae that are growing on rocks; the swift current inhibits the growth of phytoplankton. Photosynthesis may be further reduced by tree cover reaching over the narrow stream. This shading also keeps temperatures lower. An additional input of energy can come from leaves or other organic material that falls into a river or stream from the trees and other plants that border the water. When the leaves decompose, the organic material and nutrients in the leaves are returned to the water. The leaves also support a food chain of invertebrates that eat them and are in turn eaten by predatory invertebrates and fish. Plants and animals have adapted to this fast-moving water. For instance, leeches (phylum Annelida) have elongated bodies and suckers on both ends. These suckers attach to the substrate, keeping the leech anchored in place. In temperate regions, freshwater trout species (phylum Chordata) may be an important predator in these fast-moving and colder river and streams.
As the river or stream flows away from the source, the width of the channel gradually widens, the current slows, and the temperature characteristically increases. The increasing width results from the increased volume of water from more and more tributaries. Gradients are typically lower farther along the river, which accounts for the slowing flow. With increasing volume can come increased silt, and as the flow rate slows, the silt may settle, thus increasing the deposition of sediment. Phytoplankton can also be suspended in slow-moving water. Therefore, the water will not be as clear as it is near the source. The water is also warmer as a result of longer exposure to sunlight and the absence of tree cover over wider expanses between banks. Worms (phylum Annelida) and insects (phylum Arthropoda) can be found burrowing into the mud. Predatory vertebrates (phylum Chordata) include waterfowl, frogs, and fishes. In heavily silt-laden rivers, these predators must find food in the murky waters, and, unlike the trout in the clear waters at the source, these vertebrates cannot use vision as their primary sense to find food. Instead, they are more likely to use taste or chemical cues to find prey.
When a river reaches the ocean or a large lake, the water typically slows dramatically and any silt in the river water will settle. Rivers with high silt content discharging into oceans with minimal currents and wave action will build deltas, low-elevation areas of sand and mud, as the silt settles onto the ocean bottom. Rivers with low silt content or in areas where ocean currents or wave action are high create estuarine areas where the fresh water and salt water mix.
Wetlands are environments in which the soil is either permanently or periodically saturated with water. Wetlands are different from lakes and ponds because wetlands exhibit a near continuous cover of emergent vegetation. Emergent vegetation consists of wetland plants that are rooted in the soil but have portions of leaves, stems, and flowers extending above the water’s surface. There are several types of wetlands including marshes, swamps, bogs, mudflats, and salt marshes ([Figure 7]).
Freshwater marshes and swamps are characterized by slow and steady water flow. Bogs develop in depressions where water flow is low or nonexistent. Bogs usually occur in areas where there is a clay bottom with poor percolation. Percolation is the movement of water through the pores in the soil or rocks. The water found in a bog is stagnant and oxygen depleted because the oxygen that is used during the decomposition of organic matter is not replaced. As the oxygen in the water is depleted, decomposition slows. This leads to organic acids and other acids building up and lowering the pH of the water. At a lower pH, nitrogen becomes unavailable to plants. This creates a challenge for plants because nitrogen is an important limiting resource. Some types of bog plants (such as sundews, pitcher plants, and Venus flytraps) capture insects and extract the nitrogen from their bodies. Bogs have low net primary productivity because the water found in bogs has low levels of nitrogen and oxygen.
Aquatic biomes include both saltwater and freshwater biomes. The abiotic factors important for the structuring of aquatic biomes can be different than those seen in terrestrial biomes. Sunlight is an important factor in bodies of water, especially those that are very deep, because of the role of photosynthesis in sustaining certain organisms. Other important factors include temperature, water movement, and salt content. Oceans may be thought of as consisting of different zones based on water depth, distance from the shoreline, and light penetrance. Different kinds of organisms are adapted to the conditions found in each zone. Coral reefs are unique marine ecosystems that are home to a wide variety of species. Estuaries are found where rivers meet the ocean; their shallow waters provide nourishment and shelter for young crustaceans, mollusks, fishes, and many other species. Freshwater biomes include lakes, ponds, rivers, streams, and wetlands. Bogs are an interesting type of wetland characterized by standing water, a lower pH, and a lack of nitrogen.
Where would you expect to find the most photosynthesis in an ocean biome?
- aphotic zone
- abyssal zone
- benthic realm
- intertidal zone
[reveal-answer q=”235606″]Show Answer[/reveal-answer]
A key feature of estuaries is
- low light conditions and high productivity
- salt water and fresh water
- frequent algal blooms
- little or no vegetation
[reveal-answer q=”771588″]Show Answer[/reveal-answer]
Describe the conditions and challenges facing organisms living in the intertidal zone.
Organisms living in the intertidal zone must tolerate periodic exposure to air and sunlight and must be able to be periodically dry. They also must be able to endure the pounding waves; for this reason, some shoreline organisms have hard exoskeletons that provide protection while also reducing the likelihood of drying out.
- abyssal zone
- the deepest part of the ocean at depths of 4000 m or greater
- algal bloom
- a rapid increase of algae in an aquatic system
- aphotic zone
- the part of the ocean where photosynthesis cannot occur
- benthic realm
- (also, benthic zone) the part of the ocean that extends along the ocean bottom from the shoreline to the deepest parts of the ocean floor
- the bed and banks of a river or stream
- coral reef
- an ocean ridge formed by marine invertebrates living in warm shallow waters within the photic zone
- the invertebrates found within the calcium carbonate substrate of coral reefs
- ecosystem services
- the human benefits provided by natural ecosystems
- emergent vegetation
- the plants living in bodies of water that are rooted in the soil but have portions of leaves, stems, and flowers extending above the water’s surface
- a region where fresh water and salt water mix where a river discharges into an ocean or sea
- intertidal zone
- the part of the ocean that is closest to land; parts extend above the water at low tide
- neritic zone
- the part of the ocean that extends from low tide to the edge of the continental shelf
- oceanic zone
- the part of the ocean that begins offshore where the water measures 200 m deep or deeper
- pelagic realm
- (also, pelagic zone) the open ocean waters that are not close to the bottom or near the shore
- photic zone
- the upper layer of ocean water in which photosynthesis is able to take place
- an animal that eats plankton
- source water
- the point of origin of a river or stream
- environment in which the soil is either permanently or periodically saturated with water | <urn:uuid:3b6756af-6374-4e9a-b9df-a0c381db79bb> | CC-MAIN-2021-21 | https://opentextbc.ca/conceptsofbiologyopenstax/chapter/aquatic-and-marine-biomes/ | s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989766.27/warc/CC-MAIN-20210512162538-20210512192538-00257.warc.gz | en | 0.932172 | 5,287 | 3.921875 | 4 |
Computer graphics, animations and interaction with digital equipment are now self-evident. You just pick up your smartphone, tablet, desktop computer or any other device and you intuitively know when to swipe, click, drag or pinch-zoom. You also expect nothing less than attractive interfaces with smooth animations.
In this blog series, of which this is the first part of six, I'd like to take you on a journey through time, focusing on the developments before and during the creation of computers, digital graphics, animations, graphical interfaces, graphics software, interactivity, 3D, a pinch of the first games, the creation of the internet and a touch of virtual reality. Mentioning even the majority of influential events would be a world achievement in itself, so that is simply impossible. Instead, I'd like to highlight a number of events that I think made an important contribution to getting where we are now, sometimes with a small side road to a development that contributed indirectly or that reflects the spirit of the times and the relations between events. Although I personally find audio and music very important and interesting, and have always been involved in producing music, I have decided to omit audio developments in this series to keep it somewhat concise and focused.
I have made more than 110 illustrations for this series and also provide each part with at least one interactive to bring the events to life as well as possible.
We easily forget that the current digital and graphical possibilities have not existed for very long. Times without computers and smartphones are difficult to imagine; times without animation are even harder. I would therefore like to start this history of interactive computer graphics well before the computer age, before even animation techniques existed.
So lights off... and Magic Lantern on!
Although there were already many other interesting developments before 1654, such as the Pythagorean theorem (some 500 years before Christ!!) and the invention of an important perspective drawing technique we still use today (around 1415), I start this blog series in the year 1654.
This is the year in which Christiaan Huygens invented the Magic Lantern, although it is not certain whether he was the first with this invention, because Leonardo da Vinci had already experimented with the same technique a century and a half earlier. The magic lantern is somewhat similar to a slide projector: it allowed an image to be projected onto a wall. Photography had not been invented yet, nor even the light bulb. The lantern, also called Laterna Magica, therefore still worked with hand-painted images and a candle or oil lamp as a light source. Later, in the 18th century, the magic lantern would also be used by magicians for disappearing tricks.
But despite these 'Magic Lantern Shows', which you can best think of as a slide show in a cinema, animations and moving images were still nowhere in sight.
The very first form of animation came only around 1824. The English physicist John Ayrton Paris then co-invented (together with W. Phillips) a method to prove that a visual image always 'remains on our retina' for a little while. He wanted to prove this by having two images alternate at high speed, so that the illusion arose that they overlapped each other as if they were a single image.
In 1825, this was commercially registered under the name 'Thaumatrope' and marketed as a disc, attached between two strings, with an image on each side. If you 'wound up' the disc by twisting it until the strings intertwined and then suddenly pulled the strings so that the disc started spinning quickly, the two images alternated rapidly and seemed to overlap as if they were one. If one image is a cage and the other a bird, the bird seemed to be in the cage. With the winding technique you could even create a simple animation, for example a rider being thrown off a horse.
This quickly became a popular toy that was frequently copied by others and brought to market more cheaply. There were also people who claimed to have come up with the idea earlier.
The French photographer and artist Antoine Claudet had the idea, years later in 1867, of creating a 3D illusion with the Thaumatrope by slightly shifting the image on one side of the disc relative to the image on the other side, making one image work for the left eye and the other for the right eye to create depth. This is also the principle behind the 3D glasses with red and blue lenses that we would only see many years later.
Now that the technique of alternating two frames was known, the next step was to develop devices that could alternate multiple frames and thus form an animation.
The Belgian physicist Joseph Plateau had been fascinated as a student by the fact that two revolving wheels can appear to stand still relative to each other or even to move in opposite directions. He went deeper into this theory and also corresponded with the British scientist Michael Faraday (yes, the one of the Faraday cage). In late 1832 he came up with his invention, originally intended as an optical illusion: the Phénakistiscope. This is a spinning disc with images drawn on it that alternate as it rotates, creating a smooth animation. The Austrian professor Simon von Stampfer came up with a similar idea around the same time, but seems to have been inspired by Plateau's papers.
Later, other variants of this principle arose, such as a device in which the images on a revolving cylinder were reflected via mirrors in the center, presenting the user with an animation.
But the Phénakistiscope is thus the first device that actually displayed an animation the way we still do today: with changing frames. It soon became a popular toy in Europe and was also released under other names, such as the Stroboscope and the Fantascope.
In 1847, the British mathematician George Boole introduced an algebraic system of logic for the first time in his book 'The Mathematical Analysis of Logic': in other words, calculating with true (one) and false (zero). In 1854 he followed this up with a publication describing what we now know as Boolean algebra.
Boolean algebra would become an important basis for the development of digital electronics and computers years later, but that was far from anyone's mind at the time.
The basic operations of Boolean algebra are AND, OR and NOT. With these simple operations you can calculate the outcome of a circuit of parallel and serial switches: for example, whether a light turns on when you flip certain switches on or off. We still work with this algebra today, in computer hardware and in programming languages.
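To make this concrete, here is a minimal sketch in Python of a hypothetical switch circuit (the circuit itself is made up for illustration): two switches in series behave as an AND, and that pair sits in parallel, an OR, with a normally-closed switch, which behaves as a NOT.

```python
def light_on(s1, s2, s3):
    # Series connection = AND, parallel connection = OR,
    # a normally-closed switch = NOT.
    return (s1 and s2) or (not s3)

# Evaluate the circuit for every possible switch combination (a truth table).
for s1 in (False, True):
    for s2 in (False, True):
        for s3 in (False, True):
            state = "on" if light_on(s1, s2, s3) else "off"
            print(s1, s2, s3, "-> light", state)
```

This is exactly the kind of calculation Boole's algebra makes possible on paper, long before anyone built electrical circuits to match.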
Meanwhile, considerable progress was being made in the field of animation. In September 1868 the British printer John Barnes Linnett patented his invention, the Kineograph, under the title 'Improvements in the means of producing optical illusions'. The idea was to quickly flip through sheets of paper on which frames were drawn. Nowadays we would call this a flip book.
The Kineograph, which means 'moving image', is the oldest known form of the flip book as we know it, and it was the first device to use a linear (instead of circular) sequence of images to form an animation. The principle that we still use today for producing moving images for film was thus born.
More devices contributed to where things stood in 1877, and this would become a very long blog if I wrote them all down, but this one is certainly worth mentioning. It is an important connection point in history, because for the first time the principle of projecting light (the Magic Lantern) was combined with techniques for creating smooth frame-based animations (the Kineograph). The very first projector that projected moving images was born: the Praxinoscope.
And with wonderful names like these for these inventions, how can you not love this history!
The Praxinoscope was invented in 1877 by the French science teacher Charles-Émile Reynaud. It consisted of a revolving cylinder with the animation frames on the inside which, illuminated by a lamp, were projected onto a wall as moving images via mirrors and a lens.
You almost start to wonder why it took so long to combine the invention of animation with the principles of the projector, which had been invented 223 years earlier. Well, there was no internet at the time to quickly exchange new ideas and become smarter together. But now that the Praxinoscope was finally a fact, it was time to develop it further... into the very first film! I will skip the origin of film itself in this blog and go straight to the very first film in which a drawn animation was shown.
And that was 'The Enchanted Drawing', made between September and early November 1900 by James Stuart Blackton in his own Vitagraph Studios. So we literally stepped into a new century with the first drawn animation on film. Even though it was still a silent film with a short duration, this was an important development for animation.
Blackton made the film with the then relatively new stop-motion technique, which had been used for the first time in 1897, three years earlier. In The Enchanted Drawing, a man draws a character on an easel and gives him a bottle of wine, after which the drawing appears to get drunk. This short film is now seen as the father of all animated films. It set the tone for more drawn animations, and the step to Disney is now only a small one.
In 1923 the animator Walt Disney made the short film 'Alice's Wonderland', starring a young actress who interacts with animated drawn characters. Walt Disney had another company at the time, but when it went bankrupt he moved to Hollywood.
He and his brother Roy were approached by film distributor Winkler Productions to make the series 'Alice Comedies', and on October 16, 1923 they founded the 'Disney Brothers Cartoon Studio'. We can safely say it became a huge success, and Disney has pushed animation forward enormously.
In 1933 the Walt Disney Studios came up with a new technique to bring depth to illustrations: the multiplane camera. Drawings were made on transparent sheets and stacked in layers under the camera. Because of this layered structure, each layer could be moved independently of the others, resulting for the first time in the parallax scrolling effect that would later be copied into the computer in many games, such as Super Mario Bros.
Since 2011, the parallax scrolling effect has also been used extensively on websites, by letting background photos move more slowly than the 'foreground' of the website, creating depth while scrolling the page.
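The idea behind both the multiplane camera and parallax scrolling can be captured in one line of math: the farther away a layer is, the less it shifts when the camera moves. A minimal sketch in Python (the layer names and depth factors are made up for illustration):

```python
def layer_offset(camera_x, depth):
    # depth near 0 = far away (barely moves),
    # depth 1.0 = foreground (moves fully with the camera).
    return camera_x * depth

camera_x = 120  # how far the camera/viewer has moved, in pixels
for name, depth in [("sky", 0.1), ("hills", 0.4), ("trees", 0.7), ("foreground", 1.0)]:
    print(f"{name:10} shifts {layer_offset(camera_x, depth):5.1f} px")
```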
Disney wasn't the first to use the multiplane technique; it had already been used seven years earlier, in 1926, by Lotte Reiniger for her film 'The Adventures of Prince Achmed'. But Disney improved the technique and showed it to the world.
To combine drawn animations with camera footage, Disney was already using drawings on white backgrounds that were projected over film images, and other producers were using a kind of keying, as Frank Williams did in 'The Invisible Man'. But the use of blue and green screens to replace part of the image with another image was first done in 1933 by visual effects pioneer Linwood G. Dunn for the film 'Flying Down to Rio', at the time with an optical printer.
Since then the technology has evolved enormously; it is now a standard technique in every film and television studio and can be found in every professional video editing and effects package.
In 1937 the American mathematician, electrical engineer and cryptographer Claude Shannon, later called the 'Father of Information Theory', wrote his MIT master's thesis. In it he used Boolean algebra to calculate the outcomes of circuits built from relays and switches. It was the first time Boolean algebra was used to calculate logical electrical systems, and this formed a very important basis for the digital systems that would be developed many years later, such as computers. Digital devices still work with these techniques today.
Later, Claude Shannon stated that the bit is the smallest part, the 'building block', and therefore the unit of information. Information cannot get any smaller than 'true' and 'false'.
In 1926, on the west coast of the United States, a company started producing postcards and souvenir albums. This company, Sawyer's, was led by Ed Mayer, with Harold Graves in charge of marketing. Soon adding photographic postcards to its assortment, the company became the largest producer of picturesque postcards in America at the time, selling to large department stores.
In 1938 the German-born William Gruber devised a new technique built around the stereo photo, which gives a photo depth. Sawyer's took this technique into production at a factory in Portland, resulting in a viewer and discs with stereo photos to view in it: the View-Master.
In 1940 the American army saw great advantages in using these View-Masters for training American soldiers during the Second World War. Some 100,000 viewers were ordered by the army, and between 1942 and the end of the war in 1945 nearly 6 million discs were ordered to go with them. Later, the View-Master was mainly sold as a children's toy.
During the Second World War the Germans communicated with one another via an encryption device called the Enigma machine. To prevent the Enigma's key from being cracked and the secret messages from being deciphered, the Germans changed the device's many settings every day, so the key needed to decipher the codes kept changing, making cracking the key, and thus the communication, virtually impossible. Even if the key became known, it could only be used for a maximum of 24 hours; after that, the Enigma settings, and thus the encryption key, would be changed again.
In Great Britain, among other places, people were busy trying to crack the codes every day. But that did not go well and seemed impossible to do in time, until the day a mathematician reported to the army and said he had an idea to crack the Enigma codes. His name was Alan Turing, and with a lot of calculation and a little trial and error he developed a machine, an electromechanical computer, that had to crack the daily-changing code to decipher the German messages: the Bombe.
As was later estimated, the Bombe considerably shortened the war, because the 'secret' communication of the Germans was constantly deciphered and listened to, revealing the precise plans of the Germans. The film 'The Imitation Game', in which the fascinating story of Alan Turing is well portrayed, is highly recommended. Today the Bombe is seen as an important factor in the development of the computer.
Information technology was also getting off to a good start, and in 1947 the American statistician John Tukey came up with the term 'bit'.
Data had been encoded with bits since 1732, in punch cards where each punch position on the card may or may not be punched, but there was no name for this yet. Claude Shannon, whom we met earlier, used the term 'bit' in his information theory paper and gave it a strong definition, but the origin came from John Tukey, who had written about 'bits' a year earlier in a memo at Bell Labs.
Bit stands for binary digit: binary, because a bit has two possible values, true (1) or false (0). At the same time, the term 'bit' is a play on 'a little bit' and 'bits of information'.
Although the name bit had been defined, Boolean algebra existed and big steps had been taken in the development of computers, these computers were anything but powerful and they were not yet programmable.
Yet in 1950 Alan Turing, together with David Champernowne, wrote a computer algorithm to play chess while there was no computer that could run it. He wanted to show what a computer could do and what it would one day be capable of. This was done under the name Turochamp.
Because there were no computers that could run his program, Alan Turing pretended to be the computer himself and did everything on paper. He spent more than half an hour calculating each chess move! Theoretically, you could say this was the very first computer game in the world.
In April of that same year, a programmable computer was shown for the first time. This SEAC (Standards Eastern Automatic Computer) was built by the U.S. National Bureau of Standards (NBS) (now called the National Institute of Standards and Technology, NIST), led by Samuel N. Alexander.
It wasn't only the first computer that could be programmed and therefore, unlike its predecessors, had no fixed task. It was also the first computer to use solid-state logic (the SEAC combined vacuum tubes with solid-state diodes). The instruction set consisted of 11 to 16 instructions and programming was done linearly. The device weighed about 1360 kg and was very large, but for that time it was a huge step forward, and it could even be operated remotely.
Cinema had also undergone some development, and filmmaker Morton Heilig was working on a paper called 'The Cinema of the Future', in which he wrote about an 'Experience Theater' that would offer the user a full experience going beyond just watching a movie.
Seven years later, in 1962, his paper resulted in a prototype: the Sensorama. This is one of the first virtual-reality-like experiences in history, and the device was far ahead of its time. It showed stereoscopic 3D images, used a tilting seat, had stereo sound and even had separate channels to spread different scents and wind during a film. Unfortunately, further development had to be stopped due to financial problems. Morton is now seen as the 'Father of Virtual Reality'.
The American computer pioneer Russell A. Kirsch was working with a team on the first digital photo scanner in the world, which he presented in 1957. The scanner was connected to the SEAC computer and was capable of making photo scans of 176 by 176 pixels. With that, not only was the first scanner born, but also the very first images built from pixels: raster graphics. The famous first scan he made was of a photo of his son.
This scanner and the use of pixels laid the foundation for many important computer graphics techniques that we use frequently today, such as digital photos, editing programs like Photoshop, digital film and textures.
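To give an idea of what raster graphics boil down to: an image is nothing more than a grid of pixel values. A minimal sketch in Python (the synthetic gradient 'image' and the threshold are made up, but the one-bit-per-pixel idea matches how those earliest scans reportedly worked):

```python
WIDTH = HEIGHT = 176  # the resolution of Kirsch's 1957 scanner

# A raster image is simply a grid of pixel values (here grayscale 0-255);
# this synthetic gradient stands in for a scanned photo.
grayscale = [[(x + y) * 255 // (WIDTH + HEIGHT - 2) for x in range(WIDTH)]
             for y in range(HEIGHT)]

# The earliest scans stored only 1 bit per pixel: threshold to black/white.
THRESHOLD = 128
bitmap = [[1 if value >= THRESHOLD else 0 for value in row] for row in grayscale]

print(f"{WIDTH * HEIGHT} pixels, of which {sum(map(sum, bitmap))} are white")
```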
You can see that mathematics, electrical engineering, graphics, film and animation techniques are starting to come together more and more in new technological developments.
In the field of mathematics, an important development for computer graphics was the algorithm that Paul de Casteljau devised in 1959: the De Casteljau algorithm. We now know this algorithm better as the Bézier curve, because the Frenchman Pierre Étienne Bézier, an engineer at Renault, later developed the algorithm further and provided the 'control handles' that we still use today in programs such as Adobe Illustrator or Affinity Designer to make curved shapes.
Now it's time for an interactive to see how the De Casteljau/Bézier algorithm calculates the points on the curve so it can be displayed on a monitor. The variant shown below has a single control point (C). There are also variants with two or even more control points, although more than two control points are rarely used and are not needed for graphics work.
First position the points S (start), E (end) and C (control) to shape the curve. To see how the points that lie on the curve are calculated, move one of the handles marked T. These represent the time, from 0% to 100%, at which the curve is traversed from start point to end point. By taking the same percentage along the green cut line, you arrive at the location of the point on the curve for that time. If you move a handle T from start to finish, you can see the curve being drawn from start to finish.
Click & drag handles S (start), C (control) and E (end) to shape the curve. Drag handle T (time) to see how the points that make the curve get calculated from 0% to 100%.
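In code, the same construction takes only a few lines. Here is a minimal sketch in Python of the single-control-point (quadratic) case from the interactive; the point names S, C, E and the time t correspond to the handles above:

```python
def lerp(p, q, t):
    # Linear interpolation: the point that lies t (0..1) of the way from p to q.
    return (p[0] + (q[0] - p[0]) * t, p[1] + (q[1] - p[1]) * t)

def bezier_point(s, c, e, t):
    a = lerp(s, c, t)      # point at t% along the leg S -> C
    b = lerp(c, e, t)      # point at t% along the leg C -> E
    return lerp(a, b, t)   # point at t% along the 'green cut line' a -> b

# Sample the curve from 0% to 100% to get points you can draw on a monitor.
s, c, e = (0.0, 0.0), (50.0, 100.0), (100.0, 0.0)
curve = [bezier_point(s, c, e, step / 20) for step in range(21)]
print(curve[0], curve[10], curve[20])  # start, midpoint, end
```

Repeating the same interpolation trick one level deeper gives the cubic (two-control-point) variant that vector drawing tools use.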
That was it for part one, which was mainly about the origin of the first animations, computers and even the first steps towards virtual reality. But this was just the beginning. If you find it as interesting as I do to see how different movements, such as mathematics, electrical engineering, science, logic, creativity, animation and illustration, come together to produce major developments, I'd love to see you again in the next part of this series.
In part two we will continue where we left off, starting immediately with the development of computers and graphics, which really took off at that point. It is a very interesting period in which many important and groundbreaking developments took place, and you will probably even be surprised by a number of things in this history.
In the coming months I will continue writing new parts in this series, up to six volumes in total. Did you find this interesting, or do you want to ask something or comment? Let me hear from you below and share the blog on social media; it motivates me to keep writing quality blogs like this one. After clicking like or dislike you also have the option to add a comment (optional). Thanks and 'till next time!