Description
This module covers the Critical Infrastructure Security and Resilience foundational courses and certifications from the Federal Emergency Management Agency (FEMA). It is based on a three-part assignment that uses the online FEMA Emergency Management Institute courses and exam certifications that cover the following three topics:
• IS-860.C: The National Infrastructure Protection Plan, An Introduction
• IS-913.A: Critical Infrastructure Security and Resilience: Achieving Results through Partnership and Collaboration
• IS-921.A: Implementing Critical Infrastructure Security and Resilience
The focus is on five key subject sectors that the National Infrastructure Protection Plan identifies as “Lifeline” sectors: Energy, Water and Wastewater Systems, Communications, Transportation Systems, and Emergency Services. This module gives students a better understanding of what those assets are, what components are considered “critical,” and how to identify them for entry into the IP Gateway that serves as the single interface through which Department of Homeland Security (DHS) partners can access the department’s integrated infrastructure protection tools and information.
Objectives
• Define critical infrastructure, protection, and resilience in the context of the National Infrastructure Protection Plan (NIPP).
• Describe critical infrastructure in communities and the impact Lifeline sector assets have on a community’s resiliency.
• Describe the processes that support critical infrastructure security and resilience.
• Identify strategies and methods for achieving results through critical infrastructure partnerships.
• Describe the roles and responsibilities of entities such as the DHS, sector-specific agencies, and state, local, tribal, and territorial governments.
• Discuss common standards bodies, such as the North American Electric Reliability Corporation (NERC) and the National Institute of Standards and Technology (NIST).
• Understand which certifications are required to protect critical infrastructure.
1.02: Presentation and Required Reading
Presentation
[embeddoc url="http://pb.libretexts.org/cybersecuri...on-Final3.pptx" download="all" viewer="microsoft"][1]
Required Reading
Miller, Stephen, and Clark, Richard H. Framework for SCADA Cybersecurity. Smashwords Edition, eBook ISBN 978-1310-30996-0. Chapter 2 “Cybersecurity Framework Introduction,” Section 1 “Framework Introduction,” pages 43-45.
1.03: Hands-on Activity
Overview
This three-part assignment uses the online FEMA Emergency Management Institute courses and exam certifications that cover the following three topics:
• IS-860.C: The National Infrastructure Protection Plan, An Introduction
• IS-913.A: Critical Infrastructure Security and Resilience: Achieving Results through Partnership and Collaboration
• IS-921.A: Implementing Critical Infrastructure Security and Resilience
The focus is on five key subject sectors that the National Infrastructure Protection Plan identifies as “Lifeline” sectors: Energy, Water and Wastewater Systems, Communications, Transportation Systems, and Emergency Services. This module gives students a better understanding of what those assets are, what components are considered “critical,” and how to identify them for entry into the IP Gateway that serves as the single interface through which Department of Homeland Security (DHS) partners can access its integrated infrastructure protection tools and information.
Hands-on Activity Objectives
• Understand the roles and responsibilities of entities such as the DHS, sector-specific agencies, and state, local, tribal, and territorial governments.
• Describe the processes that support critical infrastructure security and resilience.
• Define critical infrastructure, protection, and resilience in the context of the National Infrastructure Protection Plan (NIPP).
• Identify strategies and methods for achieving results through critical infrastructure partnerships.
• Identify various methods for assessing and validating information.
• Describe critical infrastructure in communities and the impact Lifeline sector assets have on a community’s resiliency.
• Discuss common standards bodies, such as the North American Electric Reliability Corporation (NERC) and the National Institute of Standards and Technology (NIST).
Independent Study Exams require a FEMA Student Identification (SID) Number. Students who do not have a SID can register for one at https://cdp.dhs.gov/femasid.
Questions regarding the FEMA Independent Study Program or other Emergency Management Institute (EMI) related requests, such as requests for certificates, transcripts, or online test scores/results, should be referred to the FEMA Independent Study program office at 301-447-1200 or emailed to Independent.Study@fema.dhs.gov. Please do not contact the FEMA SID Help Desk, as they are unable to provide assistance with such requests.
IS-860.C: The National Infrastructure Protection Plan, An Introduction
Course Overview
Ensuring the security and resilience of the critical infrastructure of the United States is essential to the nation’s security, public health and safety, economic vitality, and way of life.
The purpose of this course is to present an overview of the National Infrastructure Protection Plan (NIPP). The NIPP provides the unifying structure for the integration of existing and future critical infrastructure security and resilience efforts into a single national program.
Learning Objectives
• Explain the importance of ensuring the security and resilience of critical infrastructure of the United States.
• Describe how the NIPP provides the unifying structure for the integration of critical infrastructure protection efforts into a single national program.
• Define critical infrastructure, protection, and resilience in the context of the NIPP.
Primary Audience
The course is intended for DHS and other federal staff responsible for implementing the NIPP, and tribal, state, local, and private sector emergency management professionals. The course is also designed to teach potential security partners about the benefits of participating in the NIPP.
Prerequisites
None
Course Length
2 hours
IS-913.A: Critical Infrastructure Security and Resilience: Achieving Results through Partnership and Collaboration
Course Overview
The purpose of this course is to introduce the skills and tools to effectively achieve results for critical infrastructure security and resilience through partnership and collaboration.
The course provides an overview of the elements of and processes to develop and sustain successful critical infrastructure partnerships.
Learning Objectives
• Explain the value of partnerships to infrastructure security and resilience.
• Identify strategies to build successful critical infrastructure partnerships.
• Describe methods to work effectively in a critical infrastructure partnership.
• Identify processes and techniques used to sustain critical infrastructure partnerships.
• Identify strategies and methods for achieving results through critical infrastructure partnerships.
Primary Audience
The course is designed for critical infrastructure owners and operators from both the government and private sector and those with critical infrastructure duties and responsibilities at the state, local, tribal, and territorial levels.
Prerequisites
None. The following is recommended prior to starting the course:
• IS-921.A, Implementing Critical Infrastructure Security and Resilience
Course Length
2 hours
IS-921.A: Implementing Critical Infrastructure Security and Resilience
Course Overview
This course introduces those with critical infrastructure duties and responsibilities at the state, local, tribal, and territorial levels to the information they need and the resources available to them in the execution of the mission to secure and improve resilience in the nation’s critical infrastructure.
Learning Objectives
• Summarize critical infrastructure responsibilities.
• Identify the range of critical infrastructure government and private-sector partners at the state, local, tribal, territorial, regional, and federal levels.
• Describe processes for effectively sharing information with critical infrastructure partners.
• Identify various methods for assessing and validating information.
Primary Audience
This course is designed for all individuals with critical infrastructure protection responsibilities.
Prerequisites
None. The following are recommended prior to starting the course:
• Review of the National Infrastructure Protection Plan (NIPP) and Critical Infrastructure Support Annex to the National Response Framework (NRF) documents.
OR
• Completion of the following Independent Study courses:
• IS-860.B, National Infrastructure Protection Plan (NIPP); and
• IS-821.A, Critical Infrastructure Support Annex.
Course Length
3 hours
Assignment Deliverables
1. Completion of all three FEMA Emergency Management Institute courses and exam certifications.
Grading Criteria Rubric
• Students should submit copies of all three exam completion certificates.
Grade points: 300
Overview
Students pair into teams, which identify one of the 16 critical infrastructure sectors to focus on throughout the course. Each week’s module will be examined through the lens of the chosen sector. Student teams are expected to investigate their chosen sector and create a fictitious organization that will be used as a case study in future assignments.
Team Activity Objectives
• Define critical infrastructure, protection, and resilience in the context of the NIPP.
• Identify strategies and methods for achieving results through critical infrastructure partnerships.
• Identify various methods for assessing and validating information.
• Describe critical infrastructure in communities and the impact Lifeline sector assets have on a community’s resiliency.
• Discuss common standards bodies, such as the North American Electric Reliability Corporation (NERC) and the National Institute of Standards and Technology (NIST).
Please select a critical infrastructure sector for your team:
• Chemical Sector
• Commercial Facilities Sector
• Communications Sector
• Critical Manufacturing Sector
• Dams Sector
• Defense Industrial Base Sector
• Emergency Services Sector
• Energy Sector
• Financial Services Sector
• Food and Agriculture Sector
• Government Facilities Sector
• Healthcare and Public Health Sector
• Information Technology Sector
• Nuclear Reactors, Materials, and Waste Sector
• Transportation Systems Sector
• Water and Wastewater Systems Sector
Now create a fictitious organization that would work in that sector. Determine the organization’s name, number of employees, and the type of work it does. For example, to investigate the Nuclear Reactors, Materials, and Waste sector, you could describe a nuclear plant with 350 employees, where uranium is refined for use in nuclear weapons.
Research the following about organizations like the one you have described:
• Is this sector a “Lifeline” sector?
• What standards does your organization fall under?
• What role would your sector-specific agency play in your organization?
• Identify at least three potential cybersecurity risks to your organization.
Assignment Options
Option 1: Write a two-page abstract on your sector and your fictitious organization, answering the four questions above.
Option 2: Prepare 2–3 presentation slides about your sector and your fictitious organization, answering the four questions above.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
1.05: Assessment
True/False
Indicate whether the statement is true or false.
____ 1. Nuclear power plants that generate electricity fall under the Energy Sector.
____ 2. “Lifeline” critical infrastructure sectors are those sectors that are essential for the operation of most other critical infrastructure.
____ 3. A coordinating sector-specific agency for the Food and Agriculture Sector is the Department of Health and Human Services.
____ 4. The organization that defines the standards for reliable bulk power systems is NIST.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 5. Which of the following is not a Lifeline sector?
a. Energy
b. Telecommunications
c. Healthcare and Public Health
d. Transportation Systems
____ 6. Which of the following is not a segment in the Energy sector?
a. Gas
b. Electricity
c. Oil
d. Solar
____ 7. Which of the following designates DHS as the responsible agency to provide strategic guidance to the critical sectors?
a. NIST
b. NERC
c. PPD-21
d. SLTT
____ 8. Which of the following is a role of SLTTGCC?
a. Coordinate with DHS
b. Serve as a federal interface
c. Provide organizational structure, coordinating across jurisdictions
d. Carry out incident management responsibilities
Completion
Complete the sentence.
9. Infrastructure resilience is ___________________________________________________________.
Short Answer
10. Name the five Lifeline sectors and explain why these sectors are essential to the nation’s economy and well-being.
11. Identify two other sectors on which the Food and Agriculture Sector depends and explain the relationship.
12. Explain how attacks on the Water and Wastewater Systems sector can negatively impact health and human safety.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
This module introduces Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), and Process Control Systems (PCS), with overviews of what they are and how they are used.
Objectives
• Describe the components and applications of industrial control systems.
• Describe the purpose and use of SCADA, DCS, and PCS systems.
• Describe the configuration and use of field devices used to measure critical infrastructure processes, such as flow rate, pressure, temperature, level, density, etc.
• Describe the use and application of Programmable Logic Controllers (PLCs) in automation.
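The field-device objective above can be made concrete with a small example. Analog field instruments commonly report process values (flow rate, pressure, temperature, level) over a 4–20 mA current loop. The sketch below is a Python illustration, not part of the module materials: it scales such a signal to engineering units and flags out-of-range readings the way a PLC analog input channel might.

```python
# Illustrative sketch: scale a 4-20 mA field-device signal to engineering units.
def scale_4_20ma(current_ma, lo_eng, hi_eng):
    """Linearly map 4 mA -> lo_eng and 20 mA -> hi_eng."""
    if not 4.0 <= current_ma <= 20.0:
        # Below 4 mA usually means a broken wire or failed transmitter.
        raise ValueError("signal out of 4-20 mA range (wiring fault?)")
    return lo_eng + (current_ma - 4.0) / 16.0 * (hi_eng - lo_eng)

# A pressure transmitter ranged 0-100 psi reading 12 mA is at mid-scale:
print(scale_4_20ma(12.0, 0.0, 100.0))  # 50.0
```

The "live zero" at 4 mA is what lets a control system distinguish a genuine zero reading from a dead sensor or cut wire.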
2.02: Presentation and Required Reading
Presentation
[embeddoc url="http://textbooks.whatcom.edu/phil101...on-final5.pptx" download="all" viewer="microsoft"]
None
2.03: Hands-on Activity
Overview
Students download a 15-day free trial of the LogixPro PLC ladder-logic simulator (located at PLCtrainer.net / The Learning Pit). They install the software and explore its options to understand how a PLC works, how PLC protocol traffic transits the network (via packet capture), and how to program the PLC.
Hands-on Activity Objectives
• Describe the purpose and use of SCADA, DCS, and PCS systems.
• Describe the configuration and use of field devices used to measure critical infrastructure processes, such as flow rate, pressure, temperature, level, density, etc.
• Provide examples of HMI screens and displays used within SCADA systems.
• Describe the use and application of PLCs in automation.
Lab Assignment
Use PLC Simulator to explore relay logic
1. Install the LogixPro 500 PLC simulator on a Windows VM. It can be used with a 15-day free trial (available from The Learning Pit).
2. Launch the LogixPro simulator.
3. Click on the “Help” drop-down menu and select “Student Exercises.” The following web page will open.
4. Under the “Student RSLogix Programming Exercises” section, select the “Relay Logic …. Introductory Exercise” option. The following web page will open.
5. Complete the “LogixPro Relay Logic Introductory Lab” exercise following these instructions. You can use the printed handout instead if preferred.
6. Under the “Student RSLogix Programming Exercises” section, select the “Door Simulation …. Applying Relay Logic” option. The following web page will open.
7. Complete the “LogixPro Door Simulation Lab” exercise following these instructions. You can use the printed handout instead if preferred.
8. Under the “Student RSLogix Programming Exercises” section, select the “Silo Simulator …. Applying Relay Logic to a Process” option. The following web page will open.
9. Complete the “LogixPro Silo Lab” exercise following these instructions. You can use the printed handout instead if preferred.
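The relay-logic exercises above all build on the classic start/stop seal-in rung. As an illustrative Python sketch (not part of the LogixPro labs themselves), the same rung can be evaluated once per scan, just as a PLC scans ladder logic:

```python
# Ladder rung:  --[Start]--+--[/Stop]--(Motor)--
#               --[Motor]--+
# i.e., (Start OR Motor) AND NOT Stop energizes the Motor coil.
def scan(start_pressed, stop_pressed, motor_on):
    """One PLC scan: evaluate the rung and return the new Motor coil state."""
    return (start_pressed or motor_on) and not stop_pressed

motor = False
motor = scan(True, False, motor)   # operator presses Start -> motor latches on
motor = scan(False, False, motor)  # Start released -> seal-in branch holds it on
motor = scan(False, True, motor)   # Stop pressed -> rung breaks, motor drops out
print(motor)  # False
```

The seal-in branch (the Motor contact in parallel with Start) is what keeps the output energized after the momentary Start button is released.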
Grade Points: 100
This lab was developed by CSEC, the Cyber Security Education Consortium, an Advanced Technological Education (ATE) program funded by the National Science Foundation.
2.04: Team Activity
Overview
Student teams continue to build a description of the operating environment for their sector-based organizations. What systems will be used within the organization?
Team Activity Objectives
• Describe the purpose and use of SCADA, DCS, and PCS systems.
• Describe the configuration and use of field devices used to measure critical infrastructure processes, such as flow rate, pressure, temperature, level, density, etc.
• Provide examples of HMI screens and displays used within SCADA systems.
• Describe the use and application of PLCs in automation.
• Describe the components and applications of industrial control systems.
Write a description of the operating environment for your sector-based organization. Determine what industrial control/SCADA and business IT systems will be used within the organization.
Assignment
Write a 2-page abstract summarizing your findings on your sector and the industrial control/SCADA and business IT systems that will be used within your fictitious organization.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade points: 100
2.05: Assessment
True/False
Indicate whether the statement is true or false.
____ 1. Radio telemetry is not used much to communicate field data as it is expensive and limited in range to only several hundred feet from the device.
____ 2. Remote Telemetry Units (RTUs) and Programmable Logic Controllers (PLCs) serve roughly the same purpose, monitoring process feedback and sending the data to a centralized computer.
____ 3. A logic programming language that is similar to an AC wiring diagram is called a Function Block Diagram.
____ 4. SCADA systems can be networked in either a LAN or a WAN.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 5. Which of the following is not a part of industrial control systems?
a. Supervisory Control and Data Acquisition
b. Distributed Control Systems
c. Production Control Systems
d. Programmable Logic Controllers
____ 6. Which of the following is not one of the main methods by which measurement data is communicated to a system?
a. Computer protocols, such as serial communications
b. Analog devices
c. Binary alarms
d. Digital devices
____ 7. Which of the following is not a function of SCADA data?
a. Providing information from which reports and trending data can be generated
b. Monitoring and annunciating alarm conditions
c. Providing meaningful displays for operators
d. All of the above
Completion
Complete each sentence.
8. A system that provides for remote monitoring and control of industrial devices and equipment (bringing plant/process data into a computer system) is known as ________________________.
9. ______________________ can be either digital or analog devices that provide an audible warning of a condition.
Short Answer
10. Discuss some of the useful information that SCADA reports can provide.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
A number of different networking and SCADA protocols, hardware components, and security devices are available to protect a network and the devices on it. This module addresses the various mechanisms for employing hardware, protocols, and technologies to build basic protections into infrastructure and network design. It also identifies methods for enhancing the security of an enterprise network through the positioning of certain pieces of hardware, protocols, and network equipment.
Objectives
• List several types of networking hardware and explain the purpose of each.
• List and describe the functions of common communications protocols and network standards used within CI.
• Identify new types of network applications and how they can be secured.
• Identify and understand the differences between IPv4 and IPv6.
• Discuss the unique challenges/characteristics of devices associated with industrial control systems.
• Explain how existing network administration principles can be applied to secure CIKR.
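To make the protocol objectives above concrete, consider Modbus, one of the open protocols widely used within CI. The sketch below is an illustration, not a production client: it builds the raw bytes of a Modbus/TCP “Read Holding Registers” request. Note that nothing in the frame authenticates the sender, which is why these protocols depend on network-level protections.

```python
import struct

def modbus_read_holding_registers(unit_id, start_addr, count, tx_id=1):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    MBAP header: transaction id (2B), protocol id (2B, always 0),
    length of remaining bytes (2B), unit id (1B); then the PDU.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function + address + quantity
    mbap = struct.pack(">HHHB", tx_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(unit_id=1, start_addr=0, count=2)
print(frame.hex())  # 000100000006010300000002
```

Any host that can reach a PLC's TCP port 502 can issue such requests, so segmentation and access control carry most of the security burden.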
3.02: Presentation and Required Reading
Presentation
[embeddoc url="http://textbooks.whatcom.edu/phil101...on-final3.pptx" download="all" viewer="microsoft"]
Required Reading
Industrial Control Systems Cyber Emergency Response Team (ICS-CERT), U.S. Department of Homeland Security. Recommended Practice: Improving Industrial Control Systems Cybersecurity with Defense-in-Depth Strategies. September 2016. Available online at https://ics-cert.us-cert.gov/Abstract-Defense-Depth-RP.
3.03: Hands-on Activity
Overview
Explore the interactive graphic Secure Architecture Design. This secure design is the result of an evolutionary process of technology advancement and increasing cyber vulnerability presented in the Recommended Practice document Improving Industrial Control Systems Cybersecurity with Defense-in-Depth Strategies.
Hands-on Activity Objectives
• List several types of networking hardware and explain the purpose of each.
• List and describe the functions of common communications protocols and network standards used within CI.
• Explain how existing network administration principles can be applied to secure CIKR.
• Identify new types of network applications and how they can be secured.
Assignment
Use the ICS-CERT learning portal to examine an enterprise diagram for an overview of a network. If you are not registered yet, please register.
Hover over the various areas of the Secure Architecture Design graphic, located at https://ics-cert.us-cert.gov/Secure-Architecture-Design. Click inside the box for additional information associated with the system elements.
After downloading and reading Recommended Practice: Improving Industrial Control Systems Cybersecurity with Defense-In-Depth Strategies (see Required Reading), navigate through the embedded description in the Secure Architectural Design diagram.
Write a short paper describing the following recommended practices for improving industrial control systems cybersecurity with Defense-In-Depth Strategies for your team’s fictitious sector-based company:
• Security Challenges within Industrial Control Systems
• Isolating and Protecting Assets: Defense-in-Depth Strategies
• Recommendations and Countermeasures
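A central idea in the defense-in-depth reading is zone segmentation: enterprise systems should never reach the control network directly, only through a DMZ. The toy model below is a simplified three-zone sketch, assumed for illustration and far coarser than the architecture in the Recommended Practice document:

```python
# Simplified defense-in-depth zoning: traffic may only cross between the
# same or adjacent zones, so enterprise hosts can never reach the control
# network directly -- services must be relayed through the DMZ.
ZONE_ORDER = ["enterprise", "dmz", "control"]

def path_allowed(src_zone, dst_zone):
    """Permit traffic only within a zone or between adjacent zones."""
    return abs(ZONE_ORDER.index(src_zone) - ZONE_ORDER.index(dst_zone)) <= 1

print(path_allowed("enterprise", "dmz"))      # True
print(path_allowed("enterprise", "control"))  # False -> must relay via the DMZ
```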
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade points: 100
3.04: Team Activity
Overview
Student teams continue to build a description of the operating environment for their sector-based organizations. They identify the networking protocols and technologies that will be used within the organization.
Team Activity Objectives
• List several types of networking hardware and explain the purpose of each.
• List and describe the functions of common communications protocols and network standards used within CI.
• Explain how existing network administration principles can be applied to secure CIKR.
• Identify new types of network applications and how they can be secured.
• Discuss the unique challenges/characteristics of devices associated with industrial control systems.
Assignment
Using Visio or another diagramming application, develop and draw a network diagram of your enterprise system. See the example below.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Diagram
Grade Points: 100
3.05: Assessment
True/False
Indicate whether the statement is true or false.
____ 1. Unlike IT systems, ICS places more importance on availability than on confidentiality.
____ 2. Stateless firewalls examine each packet and make a determination about whether or not the packet is allowed based on context.
____ 3. IPv6 is an improvement over IPv4 because of its ability to support encryption, authentication, and longer address space.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 4. Which of the following is not an element of Operational Technology?
a. Event-driven architecture
b. Processes transactions and provides information
c. Consists of electromechanical, sensors, actuators, coded displays, handheld devices
d. Controls machines rather than providing support to people
____ 5. Which of the following is not a major component of an ICS network?
a. Fieldbus Network
b. Remote Access Points
c. Communications routers
d. File server
____ 6. Which of the following is not an open communication protocol?
a. Modbus
b. Fieldbus
c. DNP3
d. HART
Completion
Complete the sentence.
7. A ____________ network is an industrial network system connecting instruments, sensors, and other devices to a PLC or controller.
8. ____________ was created in 1979 as a communications protocol for use with PLCs and is now a de facto standard.
Matching
Match the major component of an ICS to its function.
A. Control Server
B. SCADA Server or Master Terminal Unit (MTU)
C. Remote Terminal Unit (RTU)
D. Programmable Logic Controller (PLC)
E. Intelligent Electronic Devices (Sensors/Actuators)
F. Human-Machine Interface (HMI)
G. Data Historian
H. Input/Output (IO) Server
____ 9. Controllers used at the field level
____ 10. Hosts DCS or PLC software
____ 11. Software and hardware used by a person to monitor the state of the process and manage the settings
____ 12. Devices that convert physical properties to an electronic signal and then perform a physical action
____ 13. Device that collects, buffers, and provides access to information on subcomponents
____ 14. Master in a SCADA system
____ 15. Centralized database that logs information received from ICS devices
____ 16. Special purpose data acquisition and control unit device
Short Answer
17. Address some of the potential challenges with ICS devices.
18. Identify some “best practices” in securing critical infrastructure and key resources (CIKR).
19. Discuss some “best practices” in ICS firewall design.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
This module covers cybersecurity critical infrastructure and risk management. It introduces the NIST Cybersecurity Framework, the structure of the framework, and how it is used. It also describes the processes of risk management in the framework—framework basics, structure, and a business process management approach to implementing and applying the framework.
Objectives
• Describe basic security service principles (confidentiality, integrity, availability, and authentication) and their relative importance to CI systems.
• Explain basic risk management principles.
• Identify various risk management frameworks and standards, such as the NIST Cybersecurity Framework and the North American Electric Reliability Corporation (NERC).
• Describe how to use the framework core process.
• Describe how to use the Framework Implementation Tiers to identify cybersecurity risk and the processes necessary to effectively manage that risk.
• Describe the Cybersecurity Framework Assessment Process Model.
• Demonstrate an understanding of how the framework process holistically manages risk.
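The risk management principles above can be illustrated with a minimal qualitative risk register, where each risk is scored as likelihood × impact (each rated 1–5) against an organizational tolerance threshold. The entries, ratings, and threshold below are invented for illustration; they are not drawn from the NIST Cybersecurity Framework or NERC standards.

```python
RISK_TOLERANCE = 9  # scores above this exceed the organization's risk tolerance

risks = [
    {"name": "unpatched HMI workstation", "likelihood": 4, "impact": 4},
    {"name": "phishing against operators", "likelihood": 3, "impact": 3},
    {"name": "flood damage to substation", "likelihood": 1, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]   # qualitative risk score
    r["mitigate"] = r["score"] > RISK_TOLERANCE  # flags risks needing treatment

print([r["name"] for r in risks if r["mitigate"]])  # ['unpatched HMI workstation']
```

Risks under the threshold may still be handled by transference, avoidance, or acceptance rather than mitigation.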
4.02: Presentation and Required Reading
Presentation
[embeddoc url="http://textbooks.whatcom.edu/phil101...on-final3.pptx" download="all" viewer="microsoft"]
None
4.03: Hands-on Activity
There is no hands-on activity for this module.
4.04: Team Activity
Overview
Student teams continue to build a description of the operating environment for their sector-based organization. They select an appropriate risk management framework for the sector-based organization. In the absence of one required by the industry, teams should begin to apply the NIST Cybersecurity Framework to the selected organization. Each team’s work should be reviewed by the instructor.
Team Activity Objectives
• Identify various risk management frameworks and standards, such as the NIST Framework for Improving Critical Infrastructure Cybersecurity (“NIST Cybersecurity Framework”) and the North American Electric Reliability Corporation (NERC).
• Describe how to use the framework core process.
Assignment Options
Option 1: Write a 2-page abstract summarizing why your team chose your selected risk management framework for your sector-based organization.
Option 2: Prepare 2–3 presentation slides on your justification for selecting this risk management framework.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Total Points: 100
4.05: Assessment
True/False
Indicate whether the statement is true or false.
____ 1. NIST developed the Cybersecurity Framework as a mandatory set of standards to manage risks to critical infrastructure.
____ 2. Risk tolerance is the acceptable level of risk a company is willing to take.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 3. Which of the following is not considered a basic security service?
a. Confidentiality
b. Authentication
c. Integrity
d. Network Security
____ 4. All of the following are standards defined in the NERC CIP standards, except:
a. Personnel and Training
b. Sabotage Reporting
c. Authentication and Access Controls
d. Recovery Plans for Critical Cyber Assets
____ 5. Continuous Monitoring activities occur under which Framework Core activity?
a. Identify
b. Detect
c. Respond
d. Protect
____ 6. An impact analysis is a part of which step in the risk management process?
a. Risk control
b. Risk assessment
c. Risk identification
d. Risk mitigation
____ 7. Which risk handling method reduces the likelihood of the risk occurring to as low as zero?
a. Mitigation
b. Avoidance
c. Transference
d. Acceptance
Multiple Response
Select all the choices that apply.
____ 8. Which of the following are a part of the Framework Processes?
a. Framework Profile
b. Framework Drivers
c. Framework Implementation Tiers
d. Framework Core Functions
Completion
Complete each sentence.
9. The Framework ________________ provide background on how an organization views cybersecurity risk and the processes that are in place to manage that risk.
10. ____________________ is defined as the process of identifying vulnerabilities and taking carefully reasoned steps to ensure the confidentiality, integrity, and availability of the information system.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
In cybersecurity, a threat is the potential for a negative security event to occur. This module examines common attacks against critical infrastructure including hijacking, denial-of-service attacks, malicious software, SMTP spam engines, Man-in-the-Middle (MITM) attacks, and social engineering. It explores how attacks are being conducted through users, and the different kinds of attacks that target server-side and client-side applications. The module also explores some of the common attacks that are launched against networks, CI and SCADA/Control Systems, and other CI devices today. There is a discussion of how malware is designed and configured, how it works, and the current and future impact of malware on SCADA systems. An overview of how malware like Stuxnet impacts SCADA systems serves as an example.
Objectives
• Define threats and threat agents, and explain how risk assessment relates to understanding threats.
• Identify how different threats—including hijacking, denial-of-service attacks, malicious software, SMTP spam engines, Man-in-the-Middle (MITM) attacks, and social engineering—would apply to critical infrastructure.
• Identify different types of malware and their intended payloads.
• Describe social engineering psychological attacks.
• List and explain the different types of server-side web application and client-side attacks relevant to critical infrastructure.
• Describe overflow attacks and provide examples of the impact on CI systems.
• Provide examples of malware attacks, such as Flame, Stuxnet, BlackEnergy, Havex, and Duqu, and discuss their functionality and impact on critical infrastructure systems.
5.02: Presentation and Required Reading
Presentation
[embeddoc url=”http://textbooks.whatcom.edu/phil101...on-final3.pptx” download=”all” viewer=”microsoft”]
Required Reading
U.S. Government Accountability Office (GAO). Critical Infrastructure Protection: Cybersecurity Guidance Is Available But More Can Be Done to Promote Its Use. GAO-12-92. Published: December 9, 2011. Publicly released: January 9, 2012.
5.03: Hands-on Activity
There is no hands-on activity for this module.
5.04: Team Activity
Overview
Student teams continue to build descriptions of the operating environment for their sector-based organizations. They review the different threat possibilities using the Government Accountability Office (GAO) table, “Sources of Emerging Cybersecurity Threats.” Teams identify the different threats that would be likely to impact their sector-based organizations, providing a rationale for their selections.
Team Activity Objectives
• Define threats and threat agents, and explain how risk assessment relates to understanding threats.
• Identify how different threats—including hijacking, denial-of-service attacks, malicious software, SMTP spam engines, Man-in-the-Middle (MITM) attacks, and social engineering—would apply to critical infrastructure.
• Identify different types of malware and their intended payloads.
• Describe overflow attacks and provide examples of the impact on CI systems.
• Provide examples of malware attacks, such as Flame, Stuxnet, BlackEnergy, Havex, and Duqu, and discuss their functionality and impact on critical infrastructure systems.
Assignment
Review the Required Reading text, GAO-12-92, Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use.
Also read the table below, which is a reproduction of Table 1 from the U.S. Government Accountability Office (GAO) report Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities, May 2005.
Table 1, Sources of Emerging Cybersecurity Threats. U.S. Government Accountability Office (GAO) report Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities, May 2005. Available for download from http://www.gao.gov/products/GAO-05-434.
Look at other resources, like the page “Cyber Threat Source Descriptions” on the ICS-CERT website (https://ics-cert.us-cert.gov/content/cyber-threat-source-descriptions). Research the operation of at least one of the following malware attacks: Flame, Stuxnet, BlackEnergy, Havex, and Duqu.
How does your review affect the confidentiality, integrity, and availability scores? In addition, are there any organizational concerns that might stem from security incidents that go beyond the impact analysis?
Based on your team’s investigation of your chosen sector and created fictitious organization, select standards from the CSET list “Risk Assessment Standards” (available for download or online viewing below).
[embeddoc url=”http://textbooks.whatcom.edu/phil101...management.pdf” download=”all”]
Assignment Options
Option 1: Submit a detailed written explanation of how you selected appropriate risk assessment standards for your fictitious organization.
Option 2: Prepare 2–3 presentation slides explaining your justification for selecting those particular risk assessment standards.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
5.05: Assessment
True/False
Indicate whether the statement is true or false.
____ 1. An attacker has successfully committed a denial-of-service attack against a website, bringing it down for three hours until network engineers could resolve the problem. This is classified as a threat.
____ 2. Vulnerabilities are weaknesses that allow a threat to occur.
____ 3. Attacks require malicious intent, so they are always caused by people who intend to violate security.
____ 4. Lightning is an example of a threat agent.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 5. Which of the following is not an example of a threat category?
a. Attacks
b. Buggy software
c. Natural event
d. Human error
____ 6. Which of the following is not a threat to critical infrastructure?
a. Availability of very sophisticated tools that don’t require much skill to use
b. The high-profile nature of critical infrastructure systems
c. The rapid development of technology
d. The interconnected nature of industrial control systems
____ 7. An attacker that breaks into computers for profit or bragging rights is a/an . . .
a. Cracker
b. Insider
c. Terrorist
d. Hostile country
Completion
Complete the sentence.
8. The types of attacks and attackers specific to a company are known as the threat ___________.
9. A social engineering attack in which victims are tricked into clicking an emailed link that infects their system with malware or sends their user IDs and passwords to the attacker is known as ____________.
10. A security control that creates a list of authorized applications, preventing unauthorized applications from downloading and installing, is called a/an ___________.
Matching
Match each threat to its definition.
A. Denial-of-service (DoS) attack
B. Hijacking
C. Ransomware
D. Distributed denial-of-service (DDoS) attack
E. Buffer overflow
F. SQL injection
G. Trojan horse
H. Virus
I. SMTP spam engine
J. Worm
____ 11. An attack in which multiple attackers attempt to flood a device
____ 12. Malware that replicates autonomously
____ 13. A web application attack against a connected database
____ 14. Malicious code attached to a file that, when executed, delivers its payload
____ 15. Malware that encrypts the victim’s files on their computer until money is sent to the attacker
____ 16. An attack that leverages email protocols to send out messages from the infected device
____ 17. An attack that seizes control of communications, sending the communications to the attacker’s system
____ 18. An attack in which a single attacker overwhelms a system with a flood of traffic in order to make it unavailable
____ 19. An attack that writes data to unexpected areas of memory, causing the device to crash
____ 20. Malware embedded in what appears to be a useful file
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
Vulnerabilities are weaknesses that enable threats to be actualized. This module discusses cybersecurity vulnerabilities in general and those that are of a higher concern for critical infrastructure systems. It also identifies processes and tools for discovering vulnerabilities.
Objectives
• Identify the common vulnerabilities associated with Control Systems (CS).
• Identify SCADA cyber vulnerabilities.
• Describe how an attacker may gain control of the SCADA system.
• Define vulnerability assessment and explain why it is important.
• Identify vulnerability assessment techniques and tools, such as CSET, Nessus, and other assessment tools.
• Explain the differences between vulnerability scanning and penetration testing.
6.02: Presentation and Required Reading
Presentation
[embeddoc url=”http://textbooks.whatcom.edu/phil101...on-Final3.pptx” download=”all” viewer=”microsoft”]
Required Reading
Parfomak, Paul W. Vulnerability of Concentrated Critical Infrastructure: Background and Policy Options. CRS Report for Congress, RL33206. Updated September 12, 2008. Available from the Homeland Security Digital Library.
6.03: Hands-on Activity
There is no hands-on activity for this module.
6.04: Team Activity
Overview
Student teams continue to build a description of the operating environment for their sector-based organization, describing how they would use vulnerability scanning and/or penetration testing to evaluate threat potentials.
Team Activity Objectives
• Identify vulnerability assessment techniques and tools, such as CSET, Nessus, and other assessment tools.
• Explain the differences between vulnerability scanning and penetration testing.
Having identified threats that would be likely to impact your sector-based organization in the Module 5 Team Activity, consider how you would use vulnerability scanning and/or penetration testing to evaluate additional threat potentials. What tools would you use, and how could they impact the availability of a real-time control and/or SCADA system?
Look for passive penetration tools and tests that will not take the control and/or SCADA systems down.
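For contrast, the simplest active technique — a TCP connect probe, the building block of tools like Nmap — can be sketched in Python. Everything here (host and ports) is illustrative, and even a probe this small should never be pointed at a production control or SCADA system without authorization, since active scanning can disrupt fragile devices:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a full TCP handshake; True if the port accepts connections.

    This is an *active* probe -- the target sees the connection attempt.
    Passive techniques (e.g., sniffing mirrored switch traffic) never touch
    the target and are safer for fragile control-system networks.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative check against the local machine only.
for port in (22, 80, 502):  # 502 is the standard Modbus/TCP port
    state = "open" if tcp_port_open("127.0.0.1", port) else "closed/filtered"
    print(f"port {port}: {state}")
```

Because every probed port produces a real connection attempt on the target, teams weighing availability risks on real-time systems often restrict such checks to offline test replicas of their control network.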
Assignment Options
Option 1: Write a 2-page abstract summarizing your team’s rationale for using vulnerability scanning and/or penetration testing. What tools would your team use, and how could these decisions impact the availability of a real-time control and/or SCADA system?
Option 2: Prepare 2–3 presentation slides summarizing your team’s rationale for using vulnerability scanning and/or penetration testing. What tools would your team use, and how could these decisions impact the availability of a real-time control and/or SCADA system?
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
6.05: Assessment
True/False
Indicate whether the statement is true or false.
____ 1. Security testing on SCADA systems, if not performed correctly, can disrupt operations.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 2. Which of the following is not a main category of SCADA systems?
a. Legacy/Proprietary
b. Modern/Common
c. Legacy/Common
d. Modern/Proprietary
____ 3. Which of the following tests attempts to actually exploit weaknesses in the system?
a. Vulnerability assessment
b. Penetration test
c. Risk assessment
d. Regression testing
____ 4. Which of the following is not a vulnerability associated with a control system?
a. Discovery of unique numbers (point reference numbers) in use
b. Wireless access points that do not provide authentication to the network
c. Legacy systems that have not been updated
d. All are vulnerabilities
Matching
Match the following assessment tools with their descriptions.
A. CSET
B. Nessus
C. Packet sniffer
D. Wireshark
E. Snort
F. Nmap/netstat
____ 5. Popular vulnerability scanner
____ 6. An intrusion detection system
____ 7. Used to identify open TCP/UDP ports
____ 8. DHS tool used to assess an ICS’s security posture
____ 9. Packet sniffing tool
____ 10. Generic term for a tool used to examine network communications
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
This module introduces risk assessment processes and the types of assessments available. Students download the Department of Homeland Security (DHS) CSET tool that was introduced in Module 6. They install it and use it to perform a Cybersecurity Framework Critical Infrastructure Risk Assessment.
Objectives
• Identify the different risk assessment frameworks.
• Discuss Supply Chain Risk Management (SCRM) principles.
• Explain how regulatory requirements are used in determining additional items to review in a risk assessment.
• Demonstrate an understanding of the CSET tool risk assessment functions.
• Apply the CSET tool to an IT general risk assessment.
• Develop a report using CSET.
• Apply the standard available in the CSET tool to an IT general risk assessment.
7.02: Presentation and Required Reading
Presentation
[embeddoc url=”http://textbooks.whatcom.edu/phil101...on-Final3.pptx” download=”all” viewer=”microsoft”]
Required Reading
None
7.03: Hands-on Activity
Overview
Students download the Department of Homeland Security (DHS) CSET tool, install it, and use it to perform a Cybersecurity Framework Critical Infrastructure Risk Assessment.
Hands-on Activity Objectives
• Download, install, and run the CSET tool.
• Demonstrate an understanding of the CSET tool risk assessment functions.
• Apply the CSET tool to an IT general risk assessment.
• Develop a report using CSET.
• Apply the standard available in the CSET tool to an IT general risk assessment.
Preparation
Watch some of the video tutorials available to help you better understand how to use the CSET tool. The videos are designed to play within YouTube, so you must have an active Internet connection to view them. You can access these videos by navigating to the CSET YouTube channel, https://www.youtube.com/c/CSETCyberSecurityEvaluationTool. To use closed captioning in YouTube, click on the “cc” icon on the video window.
Downloading CSET onto a PC
System Requirements
In order to execute CSET, the following minimum system hardware and software is required:
• Pentium dual core 2.2 GHz processor (Intel x86 compatible)
• CD-ROM drive if creating a physical CD
• 5 GB free disk space
• 3 GB of RAM
• Microsoft Windows 7* or higher
• A Microsoft Office compatible (.docx) document reader is required to view reports in .docx format
• A Portable Document Format (PDF) reader such as Adobe Reader is required to view supporting documentation. The latest free version of Adobe Reader may be downloaded from http://get.adobe.com/reader/.
• Microsoft .NET Framework 4.6 Runtime (included in CSET installation)
• SQL Server 2012 Express LocalDB (included in CSET installation)
NOTE: For all platforms, we recommend that you upgrade to the latest Windows Service Pack and install critical updates available from the Windows Update website to ensure the best compatibility and security.
Downloading CSET
Download CSET using the following link: http://ics-cert.us-cert.gov/Downloading-and-Installing-CSET.
After clicking the link, you will be asked to identify yourself and will then be given the opportunity to download the file CSET_x.x.iso (where x.x represents the download version).
The CSET download is in a file format known as “ISO.” This file is an “image” of the equivalent installation files included on the CSET CD. Because of this format, it is necessary to process the download using one of the following methods:
1. Decompressing the File — Open the file using any one of the newer compression utility software programs.
2. Mounting the File — This method loads the ISO file using utility software to make the file appear like a virtual drive with the original CD loaded.
3. Burning the file to CD — This method uses CD-burn software and the ISO file to burn the files onto your own CD to create a physical disk identical to the CSET original.
These methods require separate software utilities. A variety of both free and purchased utility programs available through the Internet will work with the ISO file format. As DHS does not recommend any specific application or vendor, it will be necessary for you to find a product that provides the necessary functionality. Step-by-step instructions for each method are provided below.
Decompressing the File
1. Click the “Download CSET” link above and complete the requested information to download the ISO file.
2. Save the file to your hard drive of choice (i.e., your computer hard drive or USB drive), maintaining the file name and extension (.iso).
3. Open the ISO file with a compression utility program and save the files to your hard drive of choice, maintaining the original names and file extensions.
4. Complete the “Installing the CSET Program” instructions below.
Mounting the File
1. Click the “Download CSET” link above and complete the requested information to download the ISO file.
2. Save the file to your hard drive of choice (i.e., your computer hard drive or USB drive), maintaining the file name and extension (.iso).
3. Run your ISO-specific utility program that is capable of mounting the file. Complete the instructions within the utility software to create a virtual drive using the ISO file. If you do not have an ISO utility application, you will need to find and install one before continuing with these instructions.
4. Complete the “Installing the CSET Program” instructions below.
Burning the file to CD
1. Click the “Download CSET” link above and complete the requested information to download the ISO file.
2. Save the file to the hard drive on your computer, maintaining the filename and extension (.iso).
3. Insert a blank, writable CD into the computer’s CD drive.
4. Run your CD-burn utility program. Complete the instructions on your utility program to burn the ISO image to your CD. (If you do not have an application that can do this, you will need to find and install one before continuing with these instructions.)
5. Complete the “Installing CSET Program” instructions below.
Installing the CSET Program
1. Find the CSET_Setup.exe file in the folder, virtual drive, or CD containing the CSET files.
2. Double-click the CSET_Setup.exe file to execute. This will initiate the installer program.
3. Complete the instructions in the installation wizard to install the CSET program.
4. Read the material within the ReadMe document for a summary explanation of how to use the tool. Help is also available through the User Guide, screen guidance text, and video tutorials.
Using CSET on a Mac
If you are using a Mac, you will need to download Oracle’s VM VirtualBox and set up a virtual PC. Then you can download and install CSET on the virtual PC per the above instructions. Here is the download link for VM VirtualBox: http://www.oracle.com/technetwork/server-storage/virtualbox/downloads/index.html.
About Oracle VM VirtualBox
VirtualBox is a powerful Cross-platform Virtualization Software package for x86-based systems. “Cross-platform” means that it installs on Windows, Linux, Mac OS X, and Solaris x86 computers. “Virtualization Software” means that you can create and run multiple virtual machines, running different operating systems, on the same computer at the same time. For example, you can run Windows and Linux on your Mac, run Linux and Solaris on your Windows PC, or run Windows on your Linux systems.
Oracle VM VirtualBox is available as Open Source or pre-built Binaries for Windows, Linux, Mac OS X, and Solaris.
Requesting a copy of CSET
If you are unable to download or install CSET from the link, you may request that a copy be shipped to you. To request a copy, please send an email to cset@hq.dhs.gov. Please insert “CSET” in the subject line and include the following in your email request:
• Your name
• Organization name
• Complete street address (no P.O. boxes)
• Telephone number
• The error or installation issue you encountered when attempting the download
Assignment
Once you have installed CSET, perform a “Screen Print” of your desktop to show that the icon for CSET has been installed. Open a Microsoft Word document and paste the screen print into the document. Save the document and submit it to the instructor.
Grading Criteria Rubric
1. Proof that the CSET Tool has been downloaded and installed.
Grade points: 100
7.04: Team Activity
Overview
Student teams use the CSET tool to produce a risk assessment report for their sector-based organization.
Team Activity Objectives
• Identify the different risk assessment frameworks.
• Demonstrate an understanding of the CSET tool risk assessment functions.
• Apply the CSET tool to an IT general risk assessment.
• Develop a report using CSET.
• Apply the standard available in the CSET tool to an IT general risk assessment.
Assignment
Run a CSET Risk Assessment on your team’s fictitious organization. Use the standard(s) that apply to your team’s sector-based organization, based on your work in the Module 5 Team Activity.
Use the vulnerability assessment plans you developed in the Module 6 Team Activity to help in your assessment. Import the network diagram your team developed for the Module 3 Team Activity.
Run the CSET tool and follow the steps to perform a risk assessment on your organization’s infrastructure. Save the Executive Summary of your assessment as proof that you completed this Team Activity.
Student teams submit their CSET Executive Summary PDF Report file to their instructor.
Grading Criteria Rubric
• Submission of CSET Executive Summary PDF Report file
Grade Points: 100
7.05: Assessment
True/False
Indicate whether the statement is true or false.
____ 1. A risk assessment that uses descriptive terminology, such as “high,” “medium,” and “low,” is called a quantitative risk assessment.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 2. In which phase of the Critical Infrastructure Risk Management Framework is the goal to identify, detect, disrupt, and prepare for hazards and threats; reduce vulnerabilities; and mitigate consequences?
a. Assess and analyze risk
b. Establish program goals
c. Implement risk management activities
d. Identify assets
____ 3. _________________ is a computerized, open-source risk assessment tool that consists of UML-based packages.
a. OCTAVE
b. CORAS
c. CSET
d. SNORT
____ 4. _________________ was developed by Carnegie Mellon as a suite of tools, techniques, and methods for risk-based information security assessment and planning; it utilizes event/fault trees.
a. OCTAVE
b. CORAS
c. CSET
d. SNORT
Completion
Complete the sentence.
5. ___________________________________________________________ refers to the logistics associated with obtaining needed components.
Short Answer
6. Discuss the impact that an industry’s regulatory environment might have on risk assessment. Provide an example of a regulation in a sector that would have to be security tested.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
This module covers how to control risk to the network through appropriate remediation techniques. It introduces the concept of the Security Design Life Cycle (SDLC) and the importance of building security in at initiation, rather than “bolting” it on afterwards. In ICS and other SCADA systems, this may not be possible. Foundation guidelines and policies for controlling risk and personnel behavior will be addressed. An enumeration of network protection systems will be provided, including firewalls, intrusion detection systems (IDS), and intrusion prevention systems (IPS).
The module discusses the importance of digital signatures to providing device authentication, and how vulnerabilities specific to ICS systems relate to remediation techniques. Additionally, it covers common vulnerabilities found in ICS systems and techniques to identify vulnerabilities, as well as remediation techniques.
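To make the signed-update idea concrete, here is a toy Python sketch of how a device might verify an update image before applying it. It uses an HMAC with a shared secret purely to keep the example short — real code signing uses asymmetric signatures (e.g., RSA or ECDSA) so field devices hold only a public key — and every name in it is illustrative:

```python
import hashlib
import hmac

# Illustrative only: real devices verify against a vendor public key,
# not a shared secret baked into the firmware.
SIGNING_KEY = b"demo-shared-secret"

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: produce a signature over the update image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

def verify_and_apply(image: bytes, signature: bytes) -> bool:
    """Device side: apply the update only if the signature checks out."""
    expected = hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # reject tampered or unsigned firmware
    # ... flash the image here ...
    return True

image = b"\x7fFIRMWARE v2.1"
sig = sign_firmware(image)
print(verify_and_apply(image, sig))            # True: genuine update
print(verify_and_apply(image + b"\x00", sig))  # False: tampered image
```

The point of the check is that a single flipped byte anywhere in the image invalidates the signature, so a device that enforces it will refuse a firmware push that was altered in transit or forged by an attacker.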
Objectives
• Describe how risk management techniques control risk.
• Explain the concept of the Security Design Life Cycle (SDLC).
• List the types of security policies and how these relate to remediation.
• Describe how awareness and training can provide increased security.
• Identify remediation techniques in an ICS network, including routers, firewall technology, and tools for configuring firewalls and routers.
• Describe intrusion detection and prevention systems and web-filtering technologies.
• Explain the importance of digitally signed code for pushes of firmware and other updates to automated devices.
• Demonstrate the ability to evaluate and assess vulnerabilities in ICS networks.
• Explain and make recommendations for remediation strategies in an ICS network.
• Describe the hazards (dos and don’ts) of the corporate network process vs. ICS network process.
8.02: Presentation and Required Reading
Presentation
[embeddoc url=”http://textbooks.whatcom.edu/phil101...on-Final2.pptx” download=”all” viewer=”microsoft”]
Required Reading
None
8.03: Hands-on Activity
Overview
Students download and install a digital certificate.
Hands-on Activity Objectives
• Demonstrate the ability to research, locate and install a digital certificate.
• Explain the importance of digitally signed code for pushes of firmware and other updates to automated devices.
Assignment
Research what digital certificates are available for your PC operating system.
Follow procedures for downloading and installing a selected digital certificate. Take screenshots of the steps you follow.
Write a short paper describing your research findings on how to download and install a digital certificate. As attachments to your paper, provide screenshots of the steps you followed to install the digital certificate.
Grading Criteria Rubric
• Content
• Evidence of download and installation via screenshots
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
8.04: Team Activity
Overview
Based on the risks that teams identified for their sector-based organization’s infrastructure in Module 7, student teams identify appropriate security controls to mitigate these risks.
Team Activity Objectives
• Describe how risk management techniques control risk.
• List the types of security policies and how these relate to remediation.
• Describe how awareness and training can provide increased security.
• Identify remediation techniques in an ICS network including routers, firewall technology, and tools for configuring firewalls and routers.
• Describe intrusion detection and prevention systems and web-filtering technologies.
• Demonstrate the ability to evaluate and assess vulnerabilities in ICS networks.
• Explain and make recommendations for remediation strategies in an ICS network.
• Describe the hazards (dos and don’ts) of the corporate network process vs. ICS network process.
Using the CSET tool reports and identification of gaps in security from Module 7, develop a list of controls to be implemented to close the gaps and mitigate these risks.
Assignment Options
Option 1: Write a 2-page abstract summarizing the security controls your team would use to mitigate specific risks, based on the CSET gaps report.
Option 2: Prepare 2–3 presentation slides describing the security controls your team would use to mitigate specific risks, based on the CSET gaps report.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
True/False
Indicate whether the statement is true or false.
____ 1. A device that looks for unusual behavior, such as odd protocols arriving at a server, is known as a signature-based IDS/IPS.
____ 2. Web-filtering based on creating a list of unauthorized sites that may not be accessed is called whitelisting.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 3. Purchasing cybersecurity insurance to cover losses in the event of a security breach is an example of risk _____________.
a. Avoidance
b. Mitigation
c. Transference
d. Acceptance
____ 4. Deciding to delay the implementation of a new system until all security vulnerabilities can be resolved is an example of risk _____________.
a. Avoidance
b. Mitigation
c. Transference
d. Acceptance
____ 5. Devices such as Intrusion Detection Systems (IDSs) are considered risk ___________ strategies as they reduce the impact of the event through early detection.
a. Avoidance
b. Mitigation
c. Transference
d. Acceptance
____ 6. George has determined that the impact to the business from an internal server hard disk crash would be $2,000, including three hours of time to rebuild the data from backups. Historically, server drives fail about once every three years. As an option, he could cluster the server (install a second server to act in tandem with the first server) at a cost of $5,000 for hardware and installation. Assume he has a three-year equipment life cycle so he would have to replace this equipment in three years. Which of the following makes the most sense as a risk strategy?
a. Install the second server, as any downtime is bad.
b. Accept the risk, as it is less expensive than the proposed control.
c. Avoid using the server until hard drives become more reliable.
d. Find a new job. He wasn’t hired to be an accountant.
____ 7. In the ___________ phase of the SDLC, the system is performing work, with occasional updates to hardware and software.
a. Initiation
b. Development/acquisition
c. Operations/maintenance
d. Implementation/assessment
____ 8. Wiping hard drives and destroying software used with a system occurs at which stage of the SDLC?
a. Initiation
b. Disposal
c. Operations/maintenance
d. Implementation/assessment
____ 9. Establishing guidelines for including security into contracting language occurs at which stage of the SDLC?
a. Initiation
b. Development/acquisition
c. Operations/maintenance
d. Implementation/assessment
____ 10. The Gramm-Leach-Bliley Act (GLBA) that established security and privacy safeguards on depositor accounts at financial institutions is an example of what type of security policy?
a. Regulatory
b. Advisory
c. Informative
d. Issue-specific
____ 11. A device that receives packets that need to be sent out to other networks is known as a/an ___________.
a. Firewall
b. IDS/IPS
c. Router
d. Switch
Completion
Complete each sentence.
12. ________________________ risk is the amount of risk that remains after security controls have been applied.
Matching
Match the remediation technique/control to an appropriate category.
A. Incident Response
B. Personnel Security
C. Physical and Environment Security
D. System and Communication Protection
E. Media Protection
F. System and Information Integrity
G. Audit and Accountability
H. Monitoring and Reviewing Control System Security Policy
I. Access Control
J. Organizational Security
____ 13. Developing a policy for removing access when an employee is terminated
____ 14. Encrypting all sensitive data in transit
____ 15. Implementing an IDS/IPS
____ 16. Installing an uninterruptible power supply (UPS)
____ 17. Enabling logging of all after-hours access
____ 18. Issuing smart cards to users to enable multi-factor authentication
____ 19. Developing a disaster recovery plan (DRP)
____ 20. Establishing a security officer who has oversight of the system
____ 21. Encrypting all backup data
____ 22. Compliance audit
Short Answer
23. Discuss the difference between role-based security training and security awareness training. What recommendations would you make for how frequently these should occur?
24. You’ve been asked to implement a firewall. Discuss best practices for configuring a firewall.
25. Discuss the difference between a business network and an ICS network.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
Students learn about Incident Response (IR) strategies, including prevention and containment. They also learn how to create an Incident Response Plan.
Objectives
• List some common types of incidents that may occur in SCADA/ICS systems.
• Identify the phases of an Incident Response (IR), as described in the NIST SP 800-61.
• Define incident containment and describe how it is applied to an incident.
• Discuss the IR reaction strategies unique to each category of incident.
• Explain the components of an Incident Response Plan.
• Identify the 14 response core capabilities covered in the National Response Framework.
9.02: Presentation and Required Reading
Presentation
[embeddoc url=”http://textbooks.whatcom.edu/phil101...on-Final2.pptx” download=”all” viewer=”microsoft”]
Required Reading
Department of Homeland Security (DHS). Presidential Policy Directive 8: National Preparedness (PPD-8). March 30, 2011. Download from https://www.dhs.gov/presidential-policy-directive-8-national-preparedness.
Federal Emergency Management Agency (FEMA), Department of Homeland Security (DHS). National Response Framework. Third Edition. June 2016. Download from https://www.fema.gov/media-library/assets/documents/117791.
Federal Emergency Management Agency (FEMA), Department of Homeland Security (DHS). National Incident Management System. Download from https://www.fema.gov/media-library-data/1467113975990-09cb03e2669b06b91a9a25cc5f97bc46/NE_DRAFT_NIMS_20160407.pdf. A copy of the document is also provided below.
[embeddoc url=”http://textbooks.whatcom.edu/phil101...S_20160407.pdf” download=”all”]
9.03: Hands-on Activity
Overview
Students review one NIST case study, either the Olympic Pipeline Explosion or the Maroochy Water Services Incident. They indicate the response steps and describe what went wrong.
Hands-on Activity Objectives
• Identify the 14 response core capabilities covered in the National Response Framework.
• List some common types of incidents that may occur in SCADA/ICS systems.
• Identify the phases of an Incident Response, as described in NIST SP 800-61.
• Explain the components of an Incident Response Plan.
Assignment
Download one of the two NIST case studies below.
[embeddoc url=”http://textbooks.whatcom.edu/phil101...-Explosion.pdf” download=”all” text=”Download Olympic Pipeline Explosion”]
“Pipeline Rupture and Subsequent Fire in Bellingham, Washington, June 10, 1999.” NTSB/PAR-02/02. PB2002-916502. National Transportation Safety Board.
[embeddoc url=”http://textbooks.whatcom.edu/phil101...udy_report.pdf” download=”all” text=”Download Maroochy Water Services”]
This document can also be downloaded from the Internet: https://www.mitre.org/publications/technical-papers/malicious-control-system-cyber-security-attack-case-study-maroochy-water-services-australia.
Review and assess the case you selected.
Write a short paper describing the response steps and what went wrong in the case study you read.
Grading Criteria Rubric
• Content
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
9.04: Team Activity
Overview
Teams select one of the risks from their risk assessment and create an Incident Response Plan for their sector-based organization.
Team Activity Objectives
• Identify the phases of an Incident Response (IR), as described in NIST SP 800-61.
• Define incident containment and describe how it is applied to an incident.
• Discuss the IR reaction strategies unique to each category of incident.
Based on your team’s investigation of your chosen sector and fictitious organization, determine which stakeholders to include. Develop an Incident Response Plan document that discusses the steps taken for one of the risks that was identified by your team’s CSET Risk Assessment in Module 7.
Assignment Options
Option 1: Write a 2-page abstract summarizing the Incident Response Plan your team has developed.
Option 2: Prepare 2–3 presentation slides about your Incident Response Plan.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
9.05: Assessment
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 1. Which of the following is not a common type of incident in a SCADA/ICS?
a. Unauthorized access to system controls
b. A worm infects a network at a nuclear power plant
c. Vendor goes out of business and can no longer supply critical components
d. Vendor improperly performs a security assessment, resulting in loss of system availability
____ 2. In which phase of NIST’s SP 800-61 would organizations prioritize response to multiple threat actions?
a. Preparation
b. Detection and Analysis
c. Containment, Eradication, and Recovery
d. Post-Incident Activity
Matching
Match each core capability of the National Response Framework with its objective.
A. Planning
B. Public Information and Warning
C. Operational Coordination
D. Critical Transportation
E. Environmental Response/Health and Safety
F. Fatality Management Services
G. Infrastructure Systems
H. Mass Care Services
I. Mass Search and Rescue Operations
J. On-Scene Security and Protection
K. Operational Communications
L. Public and Private Services and Resources
M. Public Health and Medical Services
N. Situational Assessment
____ 3. Ensure the availability of guidance and resources
____ 4. Relay information on threats and hazards
____ 5. Provide life-sustaining services, including food and shelter
____ 6. Provide communications
____ 7. Establish and maintain an operational structure and process
____ 8. Provide decision-makers with information
____ 9. Deliver search and rescue operations
____ 10. Provide transportation for response
____ 11. Provide essential services
____ 12. Engage the community to develop response approaches
____ 13. Provide lifesaving medical treatment
____ 14. Stabilize infrastructure
____ 15. Provide law enforcement and security
____ 16. Body recovery and victim identification services
Match the following sections of the ICS Cyber Incident Response Plan with their contents.
A. Overview, Goals, and Objectives
B. Incident Description
C. Incident Detection
D. Incident Notification
E. Incident Analysis
F. Response Actions
G. Communications
H. Forensics
I. Additional Sections
____ 17. Includes media contacts
____ 18. Incident type classification
____ 19. Addresses how an incident is prioritized and escalated
____ 20. Addresses how to evaluate and analyze an incident
____ 21. Other stuff
____ 22. Discusses business objectives
____ 23. The process for collecting, examining, and analyzing incident data, with an eye to legal action
____ 24. Defines the procedures used for each type of incident
____ 25. Describes how an incident is identified and reported
Short Answer
26. Define incident containment and provide an example of how it would be applied to an incident.
27. Discuss how the response strategy for an incident that was sourced from within the organization would differ from one sourced from outside of the organization.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
This module covers policies and governance issues. Topics covered include federal Critical Infrastructure policies and legislation, information sharing of threats among agencies, public/private partnerships, and standards and regulations, as well as compliance. Issues relevant to specific sectors are discussed, such as intellectual property, and the roles of HIPAA, Sarbanes-Oxley, Gramm-Leach-Bliley, and PCI DSS are reviewed.
Objectives
• Identify information-sharing strategies and initiatives as established by the Department of Homeland Security (DHS).
• Describe threat intelligence information sharing among public and private partners, including Information Sharing and Analysis Centers (ISACs).
• Explain the roles that DHS’s National Cybersecurity and Communications Integration Center (NCCIC) and National Infrastructure Coordinating Center (NICC) play in infrastructure protection.
• Describe issues relevant to specific critical infrastructure sectors, such as HIPAA and other regulations and laws.
10.02: Presentation and Required Reading
Presentation
[embeddoc url=”http://textbooks.whatcom.edu/phil101...on-Final2.pptx” download=”all” viewer=”microsoft”]
Required Reading
[embeddoc url=”http://textbooks.whatcom.edu/phil101.../NIPP-2013.pdf” download=”all” text=”Download NIPP 2013 (PDF)”]
10.03: Hands-on Activity
There is no hands-on activity for this module.
10.04: Team Activity
Overview
Student teams identify the policy and governance issues for their selected sectors.
Team Activity Objectives
• Identify information-sharing strategies and initiatives, as established by the Department of Homeland Security (DHS).
• Describe threat intelligence information sharing among public/private partners, including Information Sharing and Analysis Centers (ISACs).
• Explain the roles that DHS’s National Cybersecurity and Communications Integration Center (NCCIC) and National Infrastructure Coordinating Center (NICC) play in infrastructure protection.
Based on your team’s previous investigations of your chosen sector and fictitious organization, identify the policy and governance issues for your selected sector. Determine what Critical Infrastructure policies and legislation, information sharing of threats among agencies, public/private partnerships, standards and regulations, and compliance requirements would apply to your organization.
Assignment Options
Option 1: Write a 2-page abstract summarizing the governance policies, legislation, partnerships, standards, industry regulations, and compliance requirements that would apply to your sector-based organization.
Option 2: Prepare 2–3 presentation slides that share your conclusions concerning governance policies, legislation, partnerships, standards, industry regulations, and compliance requirements.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
10.05: Assessment
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 1. ___________________ consists of owners and operators and their representatives, collaborating between government and private sector owners of critical infrastructure.
a. Critical Infrastructure Cross-Sector Council
b. Government Coordinating Councils
c. Regional Consortium Coordinating Council
d. Sector Coordinating Councils
____ 2. ___________________ is composed of senior officials from federal agencies who facilitate communication and coordination on critical infrastructure security and resilience across the federal government.
a. Critical Infrastructure Cross-Sector Council
b. Government Coordinating Councils
c. Federal Senior Leadership Council
d. Sector Coordinating Councils
____ 3. ___________________ are organizations, including ISACs, that focus on information dissemination and collaboration on a cross-sector basis through a national council.
a. Federal Senior Leadership Council
b. Government Coordinating Councils
c. Information Sharing Organizations
d. Sector Coordinating Councils
____ 4. Which of the following is not one of the NIPP’s seven core tenets?
a. Identifying and managing risk
b. Promoting the public dissemination of an organization’s vulnerabilities
c. Adopting a partnership approach to security and resilience
d. Promoting security and resilience during design stages of systems and networks
____ 5. ___________________ is a dedicated 24/7 coordination and information-sharing operations center that maintains situational awareness of the nation’s CI, serving as a hub between the government and the private sector when an incident is detected.
a. National Infrastructure Coordinating Center (NICC)
b. Information Sharing and Analysis Centers (ISACs)
c. Information Sharing Organizations
d. National Cybersecurity and Communications Integration Center (NCCIC)
Short Answer
6. Define the role of an ISAC in critical infrastructure protection.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
Description
This module discusses the future of cybersecurity: the Internet of Things (IoT) and how it creates an entirely new set of risks, and emerging technologies like drones, robots, and “wearables.” Increasingly, companies and organizations are exploring a more “active defense” approach to cybersecurity. Traditional incident response—the rapid deployment of a team to remediate breaches to a network, identify additional threats, and restore functionality—is still necessary but is no longer sufficient. The module gives an overview of how the connectedness of our cyber networks demands intelligence-driven tools and processes that equip leaders with an anticipatory edge.
Objectives
• Identify emerging trends and demonstrate an understanding of emerging technologies.
• Understand the Internet of Things (IoT) and how it expands the cyber “attack surface.”
• Be able to make educated predictions of what the future might look like for the cybersecurity critical infrastructure framework.
• Discuss ethical issues that can arise in relation to new technology and new defense strategies.
11.02: Presentation and Required Reading
Presentation
[embeddoc url=”http://textbooks.whatcom.edu/phil101...on-Final2.pptx” download=”all” viewer=”microsoft”]
Required Reading
The President’s National Security Telecommunications Advisory Committee (NSTAC). NSTAC Report to the President on the Internet of Things. Nov. 18, 2014. PDF file available for download at https://www.dhs.gov/sites/default/files/publications/IoT%20Final%20Draft%20Report%2011-2014.pdf.
11.03: Hands-on Activity
Overview
Individual students write concise reports on a recent trend in the sector they have been studying.
Learning Objectives
• Identify emerging trends and demonstrate an understanding of emerging technologies.
• Understand the Internet of Things (IoT) and how it expands the cyber “attack surface.”
• Be able to make educated predictions of what the future might look like for the cybersecurity critical infrastructure framework.
• Discuss ethical issues that can arise in relation to new technology and new defense strategies.
Based on your team’s investigation of your chosen sector and fictitious organization, research recent trends in cybersecurity relevant to your team’s selected sector. Identify at least 5 references on relevant trends.
Assignment Options
• Write a short paper describing your findings on how these trends will impact your sector.
• Prepare 2–3 presentation slides on your findings on how these trends will impact your sector.
Grading Criteria Rubric
• Content
• Evidence of teamwork
• References
• Use of American Psychological Association (APA) style in writing the assignment
Grade Points: 100
11.04: Team Activity
Overview
Student teams organize the materials on their sector and their fictitious organization into a final presentation to be shared with the class.
Team Activity Objectives
• Select appropriate vulnerability assessment frameworks and tools as part of a risk assessment of a CI system.
• Identify and describe cybersecurity threats, risks, vulnerabilities, and attacks as they apply to CI systems.
• Identify an appropriate risk management strategy for CISR.
Assignment
Draw on the past work your team has done on your fictitious organization and its sector:
Prepare a summary of your team’s case study project for the class. Be sure that your team’s presentation addresses the following:
• What you discovered about cybersecurity vulnerabilities as they relate to your particular sector
• What mitigation techniques can be used to alleviate these issues
• Suggestions you have for further strengthening your network’s security
• The role of government regulation in the functioning of your organization
11.05: Assessment
True/False
Indicate whether the statement is true or false.
____ 1. Passive defense takes into consideration threat intelligence information that can covertly respond to threat information.
____ 2. Privacy-by-design provides standards for securely collecting and maintaining privacy information, beginning at the point of project initiation.
Multiple Choice
Identify the choice that best completes the statement or answers the question.
____ 3. Attacks continue to evolve. Which of the following is not one that was discussed in the presentation?
a. APTs
b. Increased attack surfaces associated with the Internet of Things
c. Increased social engineering attacks
d. All are evolving threats
____ 4. Which of the following is not a problem associated with the Internet of Things?
a. Sensors might be placed in public locations where they are prone to tampering.
b. Small nature of the sensors makes them difficult to update, or patch, when a problem is found.
c. Protocols have been used for decades and so tend to be unreliable.
d. Security is not usually built into the devices, as they are considered disposable.
Completion
Complete the sentence.
5. An attack in which the attacker has gained access and maintains access for long periods of time before detection is called a/an ________________________.
Short Answer
6. The lecture discussed data integrity attacks on power grid or water systems. Identify other critical services that may be vulnerable to a data integrity attack and discuss, generically, how the attack might occur.
7. Discuss at least one of the ethical or privacy issues associated with critical infrastructure protection.
For the answers to these questions, email your name, the name of your college or other institution, and your position there to info@cyberwatchwest.org. CyberWatch West will email you a copy of the answer key.
12.01: Sector Reports Out
Description
Each student team presents a summary of its case study project for the class. Team presentations should offer insights into what the students have learned from this course. Depending on the number of teams in the class, it may take more than one class period for all projects to be presented.
Objectives
• Demonstrate the ability to communicate technical and business information in a presentation format.
• Demonstrate the ability to interact with peers and others.
• Demonstrate the professionalism and soft skills that employers look for in employees.
Team presentations should address the following:
• What the team discovered about cybersecurity vulnerabilities relevant to their particular sector
• What mitigation techniques can be used to alleviate these issues
• Suggestions the team has for further strengthening their network’s security
• The role of government regulation in the functioning of their organization
Grading Criteria Rubric
• Content
• Evidence of teamwork
• Professionalism
• Use of American Psychological Association (APA) style
Grade Points: 100
The transition from the Standard Model of education to the alternatives that are emerging to replace it is an incomplete and unpredictable activity. IT managers (along with the teachers they support) who have a framework for understanding the role of technology in the many activities that comprise teaching and learning will design more effective systems than those who do not. In this chapter, technology-rich teaching and learning is deconstructed so it can be understood by IT managers.
Because new information technologies (including hardware, software, and information sources, along with new uses of each) emerge very quickly compared to the periodicity of schools (new technologies appear several times during a typical school year), teachers must adopt and adapt to them constantly. When deciding which technologies to use, teachers are more likely to use technologies that:
• Are easier to use than existing technologies;
• Are more effective than existing technologies;
• Complement existing technologies.
While it may appear easy to select technology that meets these characteristics, those decisions are complicated by the diversity of the devices that emerge as well as the effects the technologies have on students and teachers and culture. In this chapter, a framework with which educators can understand the role of new technologies in their work is described. In addition, strategies for supporting educators’ understanding of technologies in their classrooms are described.
202: Section 2-
A comprehensive IT management plan will articulate a logistic goal related to supporting teachers as they become competent users of IT and teachers with IT. In addition to capturing a central role for information technology in the curriculum, this logistic goal will include all students and will include diverse technology experiences. For example, “Every student will gain experience using technology to access, consume, and create information and to interact with others in all classes.”
Context for the Logistic Goal
The need to articulate a logistic goal supporting technology-rich teaching and learning for all students, and in all areas, arises from the non-neutrality of IT. The information technology common in a society determines what it means to be “literate” in the society, so all teachers have the responsibility to expose students to technology-rich information and interaction in their field. Teachers who ignore IT today are no different from teachers who ignored text in previous generations. We know from the arguments in Chapter 1 that information technology affects individual humans, the organizations humans create, and the culture in which humans live. These effects extend into classrooms as well. To accomplish the logistic goal of using technology in classrooms, school IT managers support a) on-going training to use IT, b) learning about emerging information technologies, and c) design opportunities to ensure IT-rich teaching and learning is embedded in all curriculum areas.
Soon after desktop computers arrived in schools, the Apple Classrooms of Tomorrow project studied the interactions of students and teachers in classrooms. One of the findings from that work, and a finding that has been demonstrated ever since, is that putting new information technology in classrooms does not mean it will be used for effective teaching and learning (Sandholtz, Ringstaff, & Dwyer, 1997; Schofield, 1995). In the decades since computers arrived, there has been on-going study of the factors that influence teachers’ use of technology. It is clear from this research that teachers’ beliefs about technology and teaching, the nature of the technology and its support, others’ use of technology, and the availability of curriculum and materials that make use of technology are all important factors affecting the decision to use computers in a classroom (for example, Buabeng-Andoh, 2012; Kim & Reeves, 2007; Mumtaz, 2000; Somekh, 2008; Zhao & Frank, 2003). Even as researchers understood the factors associated with technology use by teachers and the affordances of IT associated with alternative methods, the Standard Model of instruction dominated and technology continued to be a marginal part of students’ experience.
For the most recent generation of teachers, the difficulties of finding a role for technology in the classroom and then fully implementing it have been complicated by three factors. First, the rate at which computers and information technology change has been rapid and accelerating. New technologies emerge and gain widespread acceptance in very short time spans compared to technologies throughout the 20th century. For educators, whose technology-rich teaching tends to be cyclic with a one-year period, the obsolescence of technologies that happens on a time scale of months can be disconcerting and disruptive.
Second, the current generation of educators are working at a time when cognitive and learning sciences are challenging much of what they experienced as “good” education when they were students or what they were taught in their teacher education programs. We are understanding the complexities of human brains and the important role that emotions and social interaction play in human learning, so educators can no longer simply be dispensers of information. Creating effective learning environments is more complicated than it was previously regardless of the role of technology.
Third, education has become politicized at a scale that it was not in earlier generations. In the United States, education law and policy is created at all levels of government, and these laws can sometimes be contrary to other laws and they often are contrary to what cognitive and learning science tells us is natural for humans. In educational technology, the United States government has written technology plans in which educational and political leaders articulated new and more sophisticated expectations for teachers and school IT managers:
• Getting America’s Students Ready for the 21st Century: Meeting the Technology Challenge (1996)—The first technology plan focused on ensuring teachers had computers and software and were trained in how to use them; this plan largely addressed the need to obtain computing devices and ensure teachers could operate them.
• e-Learning: Putting a World-Class Education at the Fingertips of All Children. The National Educational Technology Plan. (2000)—This plan continued the focus on hardware, software, and also extended infrastructure to include networks and extended the focus teachers’ learning to the transformation of instructional activities to make use of technology.
• Toward A New Golden Age in American Education: How the Internet, the Law and Today’s Students Are Revolutionizing Expectations (2004)—This plan changed the focus from technology planning to different types of technology-rich learning, namely online learning.
• Transforming American Education: Learning Powered by Technology. (2010)—This technology plan again refocused technology planning on assessment and measuring student outcomes.
• Future Ready Learning: Reimagining the Role of Technology in Education. National Education Technology Plan (2016)—This plan is comprehensive and includes goals related to infrastructure, teaching and learning, professional development, innovation and assessment.
• Reimagining the Role of Technology in Education (2017)—With this plan, the Department of Education of the United States intends to begin more frequent and less comprehensive updates.
In the decades-long history of advice for IT managers from the national education leaders, we can see changes in what they were expected to do locally. While it is reasonable for all organizational leaders, especially leaders of public institutions, to adjust their goals and their planning efforts to reflect new knowledge and developing practice, the changes in direction coming from external and politically powerful influences can produce unintended consequences for local communities. What was “best practice” while one plan was in place is abandoned when a new plan is released. Planners are rarely able to follow through with steps to address one set of goals before the next necessitates they turn their attention to other goals. The result is that educational technology planners have rarely been able to complete their plans and fully understand the implications of their work before the focus changed. Educators are also a non-neutral part of schools; their beliefs, values, and experiences all affect the actions they take. For those who have become deeply engaged with a set of practices and who have invested much cognitive effort in understanding the rationale behind those practices, the decision to abandon them can be distressing. This problem is exacerbated when the decision-makers show little empathy for the knowledge of the teachers and the affective connection they have for their work.
To accommodate these many changing factors influencing IT managers and the environments for which they design systems and to introduce some consistency into the planning for technology-rich teaching and learning, IT managers can use theory to organize their efforts. When work is organized by sound theory, changes in the focus of technology planning can appear less drastic to members of the organization than when new goals cause new priorities. This is particularly effective when they seek to define improvement in ways that can be affected by known factors and that can be observed with known methods.
Teacher education has traditionally been informed by a framework comprising the content dimension (what is to be taught or the curriculum) and the pedagogy dimension (how it is taught or instruction). Shulman (1987) suggested teachers’ content knowledge and pedagogical knowledge cannot be developed in isolation, so he proposed “pedagogical content knowledge” (PCK) to describe the capacity of a teacher to organize, explain, and communicate ideas so that students understand the content. The adage commonly applied to education, “you never really understand it until you teach it,” captures the interconnected nature of content and pedagogy; educators better understand content through teaching it and they better understand pedagogy by applying it to teaching problems in their classrooms.
In extending Shulman’s concept of PCK, Mishra and Koehler (2006) observed technology had emerged as a distinct type of knowledge. In adding technological knowledge (TK) to Shulman’s model, Mishra and Koehler recognized computer technology is qualitatively different from pencils and paper and the other long-established print technologies, so it enters the model as a separate type of knowledge. It is reasoned that as digital information technology becomes more familiar, its existence as a separate type of knowledge will decrease. Technological pedagogical content knowledge (TPCK) (see Figure 3.3.1) has become a very useful framework for understanding teaching and learning in the technology-rich school. While TPCK does comprise distinct and isolatable types of knowledge, it is presented as a model that “emphasizes the connections, interactions, affordances, and constraints between and among content, pedagogy, and technology,” and “emphasizes the interplay of these three bodies of knowledge” (Mishra & Koehler, 2006, p. 125).
Figure \(1\): TPCK model (adapted from Mishra & Koehler, 2006)
As a framework to inform IT management decisions in schools, TPCK identifies seven types of knowledge that can be improved with educators’ increased awareness of new technologies and with their increased knowledge of teaching methods that make use of technology. The state of TPCK within a school community can be evaluated from an individual’s perspective and also from the perspective of the entire faculty. Social influences are known to be an important determinant in technology acceptance (Venkatesh et al., 2003), so each individual’s TPCK is affected by the group’s TPCK, and the TPCK of influential individuals is particularly important in affecting the group’s TPCK.
TPCK is proposed as a dynamic framework, and Mishra and Koehler (2006) anticipated it would change over time. Shulman (1987) did not differentiate books, pencils, paper, and other information technologies into a separate type of knowledge when PCK was first elucidated; he reasoned those were transparent technologies that had been a stable part of teaching and learning for generations, thus no specific knowledge was necessary to use them. Given the continued rapid development and diffusion of information and computer technology hardware, software, and network platforms, technological knowledge is anticipated to be an important part of TPCK into the foreseeable future. Further, the nature of the classroom determines how TPCK is defined and instantiated. Mishra and Koehler (2006) observed, “there is no single technological solution that applies to every teacher, every course, or every view of teaching” (p. 1029).
Using TPCK, IT managers can identify and support all aspects of technology in teaching and learning. The model also allows IT managers to identify and clarify the connections between the various types of knowledge. In addition, TPCK facilitates understanding of who must be involved with decisions and who must lead and participate in training, curriculum development, and other professional development activities.
Technological Knowledge
When desktop computers first arrived in schools, leaders found it necessary to provide training and support in the basic operation of the devices. At that time, teachers were unlikely to have access to computers at home, and it was unlikely they had been exposed to them during their professional preparation. (In the mid-1980s, I was in the minority of my peers in the teacher education program at our state university who enrolled in the optional “Computers in the Classroom” course offered to undergraduate students; most of my colleagues earned their teaching credentials without any formal experiences with computers.) Simply turning computers on and loading software was the focus of the first computer training for teachers. Software tools such as word processors and spreadsheets were also new, so training sessions introduced educators to the steps of creating, editing, and managing files as well. This reality is also reflected in the goal articulated in the first National Education Technology Plan for the United States, which was written in 1996. At the time, educational policy makers sought to address the Technology Literacy Challenge, which President Clinton had defined as connecting every classroom to the Internet and ensuring teachers could use it.
In the decades since computers arrived, they have become common household tools, and their use is deeply embedded in the higher education courses needed to qualify for almost any position in education. In those same decades, very complex software and network services have been adopted by schools to manage information and provide interaction for educational and business purposes. The result is that educators’ technological knowledge includes skills they must develop and maintain on their own and skills that must be developed with the support of IT managers. The screening process for licensed educators and the unlicensed assistants who work directly with students must ensure that anyone who fills one of those positions is capable of operating a computer and common software for professional purposes. Educational professionals arrive at their positions with these skills and maintain them with minimal training throughout their careers. Powering a computer on, logging on to networks, and creating and managing files using locally installed software and cloud-based productivity suites are all tasks educators must be able to accomplish with efficiency, confidence, and independence. In addition, they should be capable of searching for and finding credible information on the Internet; this includes multimedia information as well as electronic versions of printed materials. Further, educators should model responsible and ethical use of technology systems and digital media. Finally, educators should be able to adapt to new versions of software and similar upgrades quickly and with little direct instruction.
There are some tools educators should not be expected to use without direct instruction, and IT managers must plan for these needs both when newly hired educators are “on-boarded” and when educators face major transitions. The IT systems that require direct instruction include:
• Procedures and credentials for logging on to all systems that are needed by the professional, including local area network, email, and all web services used to manage employment, data, and instruction;
• Instructions for managing rosters and grades through the student information system; these systems are notorious for being “not user friendly,” which can be attributed to the differences between the vocabulary and structures used by designers and programmers and the language and methods used by educators;
• Instructions for posting to the educators’ page(s) on the school web site, the learning management system, social media sites, and other systems they are expected to use for sharing information with both internal and external audiences.
Implicit in this as well is the expectation that educators will be introduced to local policies and procedures relative to acceptable use, procedures to report malfunctioning IT systems, scheduling shared resources, accessing printers, and similar details related to individuals’ use of the specific IT systems installed in the school. On-boarding procedures for new staff must address these aspects of using IT, and changes in how these systems are configured necessitate training for all faculty and staff to ensure efficient and effective use of the new tools.
Content Knowledge
Content knowledge may appear to be the most clearly understood and defined type of knowledge. We all expect, for example, chemistry teachers to understand the concepts, ideas, and procedures of chemistry; this content is found in chemistry textbooks. By successfully completing advanced undergraduate courses in a content area, teacher candidates demonstrate sufficient content knowledge to understand what they are supposed to teach, including relevant details such as how to recognize when chemistry is being done in an unsafe manner.
The content that future teachers study in their undergraduate courses is developed by those with advanced degrees in the field. Their expertise is assured by the universities granting their degrees, their research, participation in professional organizations, and service to the universities where they are employed. The reality of content knowledge for many educators is becoming more complicated in the digital world, however. Two factors appear to be exerting particularly strong effects on content knowledge as it is experienced in schools.
First, digital technology makes sophisticated information far more accessible than it was in the print-dominated world. For many generations, access to information written by and for professional chemists (for example) depended on access to a research library where copies of the journals were stored and where the professionals who taught at that university could help individuals access and understand that information. Since computer networks have become widely available, access to professional literature (which is now digital) has expanded to every location with an Internet connection and a subscription to a database of periodicals. Second, digital information tools are used by individuals, including those with dubious credibility, to distribute information widely. Further, information has become politicized to a greater degree than it was for previous generations, and marginalized and fringe ideas and interpretations of evidence are widely reported and defended.
Together, these factors both afford new opportunities for students and teachers and cause difficulties for those people. Both the affordances and the difficulties have implications for efficacious IT managers. These are also the foundation for the pillars of digital learning (Davidson & Goldberg, 2009) (see Table 1.5.1).
In 1644, John Milton composed a pamphlet in which he argued for freedom of expression; areopagitica has since been adopted as a term to describe the capacity of individuals to compose and distribute any ideas they see fit. Digital tools, especially those called Web 2.0 tools, have been interpreted as the realization of areopagitica, and students can use these tools to extend and expand the audience for their works. They no longer create solely for their teachers; they can create for global audiences. This changes the nature of writing and creating for students.
Areopagitica has been adopted by other creators as well, so the vast content available to educators and students includes accurate information from credible sources, fiction packaged as fact, as well as myths, misinterpretations, and sarcasm presented as fact. These many variations fill the space between accurate and credible information and pure falsehood. This disparate information led Mark Deuze (2006), a scholar of media and journalism, to conclude the digital media landscape is filled with content creators who “juxtapose, challenge, or even subvert the mainstream” (p. 68) for a variety of reasons.
In a 2016 report on science communication, the National Academies noted a study in which 40% of Americans reported they get science news from Facebook. This led the Committee on the Science of Science Communication (2016) to observe: there are more actors in the media landscape who may, either intentionally or unintentionally, provide inaccurate science information. While today’s science media landscape is likely larger than the declining mass media/newspaper-delivery system of the past, it does not offer clear mechanisms for filtering out false, sensational, and misleading information. More than ever before, citizens are left to their own devices as they struggle to determine whom to trust and what to believe about science-related controversies (p. 4-2).
In the months after the 2016 elections in the United States, the term “fake news” gained popularity as a description of the phenomenon of unverified information in the media. For K-12 educators, navigating this emerging information landscape, and helping students navigate it, defines the reality of content knowledge. An increasing number of organizations are influencing the contents of recommended curriculum and resources, and this further complicates content knowledge (CK) for educators. In recent decades, educators’ professional organizations have begun publishing curriculum standards (for example, National Council of Teachers of English & International Reading Association, 1996; National Council of Teachers of Mathematics, 2000; NGSS Lead States, 2013). In 2010, the National Governors Association initiated the Common Core State Standards, an effort to create a national curriculum in the United States. Ostensibly these organizations seek to improve education, but the political nature of governorships makes this a dubious claim; further, educators’ professional organizations may be motivated to maintain and expand membership rather than to affect education.
Textbook publishers also exert strong influences on what is taught. In jurisdictions where a small number of textbooks are adopted for use by large numbers of students, publishers approach these areas as mass markets and adopt the strategy of providing the least objectionable content (Johnson, 2006), which allows them to sell to the widest audiences with the least potential for offending or alienating large subpopulations who might then avoid their media.
The open educational resource (OER) movement is another factor affecting content knowledge in the 21st century school. OERs are alternatives to textbooks that are published under copyright licenses that allow others to copy, edit, and redistribute the materials without paying the original author or seeking further permission. Typically, an open educational resource originates when an expert (often one who teaches undergraduate courses in a field) polishes and details the resources prepared for his or her students and uploads them to the Internet under a Creative Commons license. An educator who finds the resources and wants to adopt them will download the file and edit it to meet his or her students’ needs and the focus of the course being taught. The materials derived from an OER source are then made available to the students, and, to complete the transaction, the derivative resource is contributed back to the open education community.
Educators and efficacious IT managers are left with the task of providing appropriate access to the vast information sources that are available so educators can maintain and update their content knowledge. They provide access to full-text databases for library patrons, they help teachers learn about and design learning activities that give students experience navigating the vast information landscape, and they support educators who participate in OER communities, all the while seeking to minimize access to information of dubious credibility.
Pedagogical Knowledge
Of the three individual types of knowledge that contribute to TPCK, pedagogical knowledge is perhaps the most complicated, as it is the one with the broadest definition. Relevant technological knowledge is largely defined by the systems available in the school; content knowledge is largely defined by the experts who teach the teachers and by textbook and OER publishers. Technological knowledge and content knowledge are clearly bounded, and consensus can generally be reached about what constitutes each domain and how it can be improved. Pedagogical knowledge, on the other hand, is defined differently by different scholars, and vastly different actions can be called pedagogy. Further, the appropriate pedagogy depends on the goals of the activity as well as the nature of the students and the nature of the curriculum. Pedagogical knowledge is less clearly understood than the other types of knowledge, and consensus regarding improvement cannot be easily reached. How pedagogical knowledge is instantiated in the classroom depends largely on decisions made by the teacher. Much of the professional discourse on pedagogy, including research and both pre-service and in-service teacher education, differentiates two types of pedagogy. The Standard Model captures one approach to teaching that continues to be supported by various stakeholders. Chris Dede (2010), a scholar from Harvard University, reviewed the many curriculum frameworks that had been produced in the 21st century, and he concluded they demonstrated that educators and education leaders were “systematically examin[ing] all the tacit beliefs and assumptions about schooling that are legacies from the 20th century and the industrial age” (p. 73).
In recent decades, a number of pedagogical models have been presented that call for students to play a more active role in defining curriculum, building knowledge, and communicating what they have learned than is typically allowed in instructional pedagogies. While advocates for these many methods differ on the specific details of classroom activity, the methods share common elements: a curriculum based in complex problems; ample opportunities for social interaction (between teachers and students and among students); students articulating their new knowledge; and attention to metacognitive understanding. Advocates for these methods ground their pedagogy in cognitive psychology (rather than behaviorist psychology) and build their rationale around theorists such as Jean Piaget, John Dewey, and Lev Vygotsky.
Instructionism, which is largely used in the Standard Model, is teacher-centered pedagogy, and it has been established that it can be applied with efficacy to the small portion of content that consists of well-known concepts and ideas as well as procedures that can be clearly described. When teachers use instruction, they plan the logical path through the content and they decide when students (either individually or collectively) move along it. Teachers also measure success by students’ retention of the information and procedures. These methods are grounded in the assumption that learners respond to rewards and punishments; it is reasoned that by rewarding answers and actions that are aligned with expectations (or by punishing those that are not), teachers can promote learning.
Pedagogical knowledge extends beyond understanding the nature of teaching strategies and skill at using those strategies to plan and execute lessons. Educators can approach their work from different perspectives and this affects both what they plan for students and how they present lessons. Douglas Thomas and John Seely Brown differentiate education that teaches about content from education that teaches within the content. When students learn about a subject, they are external to the content and teaching focuses on transferring declarative knowledge and procedures to the learners. Thomas and Brown (2011) suggested this can be mechanistic with “learning treated as a series of steps to be mastered....” (p. 25). When students learn from within the subject, they adopt the methods and approaches of those who work to investigate problems in the field and they produce products similar to those created by workers in the field. This leads to learners developing both explicit knowledge and tacit knowledge, and Thomas and Brown (2011) observed, “the point is to embrace what we don’t know, and continue asking those questions in order to learn more and more....” (p. 38).
Research focusing on learning in informal situations (Lemke, Lecusay, Cole, & Michalchik, 2015) is extending pedagogical knowledge to recognize the role of the learners in the process. Rogoff (1990) described guided participation as a method of informal learning that starts with highly scaffolded modeling and demonstration by mentors early in the experience, but in which learners assume increasing responsibility for planning, undertaking, and judging the learning products as they develop greater expertise. Caine and Caine (2011) proposed guided experience as a pedagogy that captures the nature of learning that occurs in natural environments, which follows the perception/action cycle. The perception/action cycle posits that learning is the continuous process of recognizing a situation, interpreting it according to what is already known, acting, and then adjusting further perceptions according to feedback after acting. Guided experiences are based on three elements:
• Relaxed alertness, which finds learners motivated and prepared to learn in a stress-free, but high-expectation, environment.
• A complex experience, which finds learners acting in the same manner as experts rather than learning about what experts know.
• Active processing of experience, which finds learners thinking about and making sense of their experiences.
Digital media is also presented as more amenable to guided experience than print. Caine and Caine (2011) even suggest that “technology often plays havoc” with pedagogy designed to transmit knowledge as it “includes student decision making, applying creative solutions to complex and real-life problems, and negotiating with peers and experts” (p. 20). Because more channels of communication, including body language and other movements, are possible with video but not with text, the nature of the learning that can occur is different when using video media.
Mizuko Ito and her colleagues at the Digital Media and Learning Research Hub seem to have expanded the definition of natural learning as they studied connected learning in the young people who comprise the digital generations. That research group observed learning that occurs outside of school tends to be “socially embedded, interest-driven, and organized toward educational, economic, or political opportunity” (Ito et al., 2013, p. 6). The students who arrive in today’s classrooms are active and independent learners because of their experiences in the digital world, and those experiences influence which pedagogies are effective with these populations. Such differences have been recognized by educational scholars for decades, and they led Bereiter (2002) to conclude, “everyday cognition makes more sense if we abandon the idea of a mind operating on stored mental content and replace it with the idea of a mind continually and automatically responding to the world” (pp. 196-7).
As students become more active in creating and communicating new knowledge, basic skills and knowledge become relevant, so students become motivated to learn the content that is teachable through instruction. As they adopt student-centered methods, many educators are finding a renewed need to include instruction-based methods in their classrooms. This need is less predictable than in traditional instruction and tends to involve individuals or small groups of students; computer technology and digital media are meeting those needs. Consider the science student who is investigating trajectories of projectiles; she will find it necessary to work with quadratic equations. Using technology, the teacher can direct the student to a lesson reviewing the methods of solving quadratic equations. There is evidence that lessons including worked examples in which the steps are explicated can be very effective strategies of instruction (Shen & Tsai, 2009). These video lessons can be made available in a learning management system so that students can access them whenever they are needed and repeat them whenever necessary.
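To make the projectile example concrete, here is a minimal sketch of the kind of worked example such a lesson might explicate. The function name and the numbers are illustrative, not drawn from any particular curriculum: a projectile's height follows h(t) = -(g/2)t² + v₀t + h₀, and setting h(t) = 0 and applying the quadratic formula gives the time of flight.

```python
import math

def time_of_flight(v0, h0, g=9.8):
    """Solve -(g/2)*t^2 + v0*t + h0 = 0 for the positive root,
    i.e., the time when a projectile launched upward at v0 m/s
    from height h0 m returns to the ground."""
    a, b, c = -g / 2, v0, h0
    disc = b * b - 4 * a * c  # discriminant of the quadratic
    if disc < 0:
        raise ValueError("no real solution")
    # Only the positive root is physically meaningful.
    return max((-b + math.sqrt(disc)) / (2 * a),
               (-b - math.sqrt(disc)) / (2 * a))

# A ball thrown upward at 9.8 m/s from ground level stays aloft 2 s.
print(round(time_of_flight(9.8, 0.0), 2))  # → 2.0
```

A worked example in a video lesson would step through the discriminant and both roots in just this way, explaining why only the positive root answers the physical question.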
It does appear reasonable to conclude that efficacious IT managers will be supporting educators as they create more diverse and flexible learning environments than was necessary for previous generations of learners. The nature of the experiences central to the curriculum will determine the nature of the IT systems that are built and supported. A single approach to using technology in classrooms, or a single type of technology activity will not suffice for learners to participate in the emerging information landscape.
Pedagogical Technological Knowledge
The most efficacious development of pedagogical technological knowledge arises from those situations in which technologists (who obtain and configure test systems) scale up and deploy into production those systems that have been examined and tested by teachers who have identified pedagogical uses. Many of the information technology tools available in schools were developed for audiences and purposes other than education. It is only by investigating emerging technologies and adapting them for teaching that educators gain pedagogical technological knowledge. Those systems that appear to have the greatest pedagogical application with the least consumption of technical resources and the least extraneous cognitive load deserve the greatest attention and priority.
Consider social media as an example. Originally developed so that individuals could publish on the Internet (and still widely used for that purpose), social media has been found by many educators to support educationally relevant tasks, and these can be applied to pedagogical problems in many classrooms. The teacher who finds an excellent solution to a problem in her classroom (perhaps the biology teacher whose students have built an excellent model of a cell) can take a picture of the solution and post it to a Twitter account. By embedding the feed in her online classroom, the solutions can become part of the resources for all students to use. This exemplifies the adoption of easy-to-use and effective technologies predicted by technology acceptance research (Venkatesh et al., 2003).
Other examples of technologies with unexplored pedagogical applications include haptic and full-body motion interfaces (Malinverni & Pares, 2014), which allow for alternatives to keyboard and mouse inputs and for outputs other than printed documents or screen displays. Video games that track motions of bodies have been incorporated into some physical education courses, and this is an example of pedagogical technological knowledge affecting students’ experiences. Virtual reality, in which technology provides three-dimensional content, is another field of developing pedagogical technological knowledge (Ricordel, Wang, Da Silva, & Le Callet, 2017). As these technologies become more fully developed and less expensive, it is anticipated they will become more widely adopted for educational purposes.
Pedagogical Content Knowledge
Just as each content area has its own combination of concepts, ideas, and procedures, each has its own collection of activities that are well-suited to helping students learn that content. In many content areas, the methods used to teach are the lessons the teachers intend to teach. In science classes, for example, students who plan and set up an apparatus, collect data, and then analyze it are engaged in methods that teach both the content (the activities are designed to demonstrate important phenomena) and the methods (the activities give experience setting up experiments and analyzing data). Writing courses, too, find the boundaries between pedagogy and content blurred, as the coaching and advice students receive (and give) are intended to improve their writing as they gain experience writing.
Pedagogical content knowledge is an important aspect of on-going teacher education. It has been established that the cognitive and learning sciences continue to discover important aspects of pedagogy that were previously unknown, and these discoveries lead teachers to adopt new methods or adapt their existing practices, so pedagogical knowledge is changing. It has also been established that content is rapidly advancing, so content knowledge is changing rapidly. As a result, the pedagogy used to teach content during a teacher’s preparation is likely to be challenged by new discoveries in the learning and cognitive sciences. While the responsibility for supporting teachers’ pedagogical content knowledge falls largely to education professionals and leaders such as department leaders and curriculum leaders, efficacious IT managers will accommodate the new demands and needs that are produced as educators continuously redesign and recreate their methods to reflect new discoveries.
Technological Content Knowledge
Technology is affecting how discoveries are made, and even what discoveries can be made, as well as how new knowledge is constructed in many fields. Consider mathematics-rich fields: spreadsheets, statistical software, and graphing calculators have reduced the cognitive demands of manipulating data for recent generations of workers (and students) in those fields. Further, citation management tools and online databases containing the full text of periodicals have redefined the work of researching in many fields. During professional preparation, a teacher gains experience using the tools employed by practitioners in his or her field. In the classroom, many of the same tools will be available, but many familiar tools will be replaced by new ones, so teachers must continue to develop and refine their technological content knowledge as it emerges over their careers.
Much technological content knowledge is developed in small and specialized groups, and it is developed to meet very specific goals. A group of math teachers, for example, may develop technological content knowledge around options for graphing functions on mobile devices. As handheld computers have become ubiquitous, students are likely to use many different graphing apps to solve problems. A group of math teachers may plan professional development time to sit with a collection of the problems they typically give to their students and solve them using the many different devices and applications so they become familiar with the steps for graphing with different software and hardware options. After developing this technological content knowledge, they will be better prepared both to assess and evaluate the choices and to support students who may be using different devices. As a result of improved technological content knowledge, tools that are easier to use and more effective are likely to be installed by IT managers, and teachers are likely to be more efficacious in helping students use all tools to learn the content they teach.
Technological Pedagogical Content Knowledge
Mishra and Koehler’s (2006) TPCK model differentiates seven different types of knowledge that are relevant to technology-rich teaching and learning. These are useful in deconstructing classrooms into aspects that can be developed and improved in isolation, but efficacious IT managers are cognizant of the fact that these types of knowledge influence each other. A complete framework for understanding teaching requires consideration of and reflection upon all aspects of TPCK; the tools we use (technology), how we use them (pedagogy), and what we teach with them (content) combine to create new opportunities and challenges in classrooms.
Efficacious IT managers also recognize that new understandings in one type of knowledge create lasting changes in the others; once changes in technology are made, their effects on pedagogy and content are effectively permanent and irreversible. Consider IT managers who deploy a learning management system (LMS) so that high school teachers can take advantage of online testing features, resource sharing, and online discussion tools to support face-to-face instruction. Teachers who use that LMS are likely to adopt new approaches to teaching and assessment that are specific to the LMS, but they may find those approaches expand their effectiveness or improve their efficiency, so the approaches become a permanent part of their practice.
Mathematics teachers can point students to web sites where students can vary the coefficients, exponents, and other constants of functions and see the changes immediately graphed. When I share such sites with students, it is common for one or more to observe, “It is like we were playing with the graphs.” Such a site affects the content knowledge introduced in the course, as it allows more sophisticated functions to be introduced more rapidly than they could be without the technology. These sites also affect the pedagogical knowledge of teachers, as they introduce play into a topic that is not typically amenable to play and exploration.
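The kind of "playing with the graphs" such sites afford can be approximated numerically. A minimal, hypothetical sketch (the function name is my own, not taken from any graphing site): computing the vertex of y = ax² + bx + c while one coefficient varies shows students how each term moves the graph.

```python
def vertex(a, b, c):
    """Return the vertex (x, y) of the parabola y = a*x^2 + b*x + c.
    The vertex lies on the axis of symmetry x = -b / (2a)."""
    x = -b / (2 * a)
    return x, a * x * x + b * x + c

# "Play" with the b coefficient and watch the vertex slide:
for b in (-4, -2, 0, 2, 4):
    x, y = vertex(1, b, 3)
    print(f"b={b:+d}: vertex at ({x:.1f}, {y:.1f})")
```

An interactive graphing site performs this same computation continuously and draws the result, which is what makes the exploration feel like play rather than calculation.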
It is also likely that those changes which teachers determine to be effective will be immediately adopted, that the adopters will exert peer-based social pressure on others to adopt them, and that school leaders will exert leader-based pressure on all teachers to adopt them. TPCK also provides a framework for ensuring a technological solution is extended and expanded only into those settings where it is appropriate, and that inappropriate technology solutions are avoided. While the playful nature of interactive graphing sites can be useful for students who are developing a sense of the nature of graphs and the different effects of each term on the appearance of the graph, play is unlikely to be effective when teaching students how to interpret graphs.
Efficacious IT managers will recognize the different pedagogical purposes of different technologies. Those who recommend a single tool for every pedagogical problem (or who interpret every pedagogical problem as solvable with a particular technology) are likely to be making decisions and recommendations for purposes other than teaching and learning. Deploying a single technology in every setting is an approach to technology planning that is not supported by leaders who understand TPCK.
We know that previous learning is an important factor in determining how new knowledge is perceived and what actions are taken in response to new knowledge, so it is reasonable to conclude that educators’ TPCK begins developing long before they begin their professional preparation and extends throughout their careers. Efficacious IT managers have a role in ensuring all educators have opportunities to continue to learn about all types of knowledge that affect their decisions and actions related to technology and teaching. This ongoing professional learning is necessary for educators to reconcile their existing practice and beliefs about teaching and learning and technology with new tools and discoveries.
While these learning activities sometimes fall within a single type of knowledge, all professional learning for educators should ultimately be undertaken to improve all aspects of educators’ TPCK. Because TPCK comprises very different types of knowledge and skill, developing each necessitates different approaches to professional learning. Efficacious IT managers understand the differences among these approaches and support educators through training, learning, and design activities when each is appropriate. Further, they provide teachers with autonomy in making decisions regarding their professional learning when it is appropriate.
Training, Learning, and Design
Three types of learning experiences (training, learning, and design) are necessary to provide complete support of teachers’ development of TPCK. Training, learning, and design are all professional development activities, but each is designed for a different purpose, and the type of activity depends on the nature of the desired outcomes.
Training is typically applied to those situations in which learners (trainees) must be able to perform procedures or actions after the activity is complete. When new hardware and software are introduced to classrooms, it is appropriate to train educators in how to use them. Training is designed to overcome extraneous cognitive load and increase perceived ease of use, and it is typically organized using instructionist models. In face-to-face training, an expert leads participants through procedures from the very basic steps to increasingly complex uses of the technology. Trainees follow directions and cues given by the expert, who assumes trainees have no experience with the tools he or she is to demonstrate (see Figure 3.4.1). In training situations, teachers interact with technology, and the interaction is mediated through an expert.
There are three outcomes of training in professional development settings for teachers. First, experts seek to make teachers aware of the capacity of the new system. For example, when deploying a new learning management system, training will focus on new or changed functions in the LMS. This is particularly important when new capacity is similar to existing capacity, such as differentiating threaded discussion boards from blogs and journals as tools for interaction in an LMS.
Second, training supports teachers’ understanding of how to configure different tools. Although covering the details of how to set each option is not reasonable in training, training does focus on opening the configuration tools and being aware of where different options are found.
Figure \(1\): Training: teachers interact with technology, mediated through an expert
Third, training incorporates strategies for ensuring teachers can use the systems with independence after the training ends and this includes resources for further training. Most experienced trainers understand teachers are a diverse group, so multiple methods of developing independence are appropriate for training. It is not unusual for teachers who ask for step-by-step instructions to spend excessive time in training taking their own notes, so many trainers prepare only general procedures and encourage teachers to record the specific steps in language they understand. The opportunity for independent practice while experts are available for guidance and advice is another strategy. Yet another is providing tutorials, annotated screen shots, and video recordings.
Training is often focused on answering questions that begin with “How do I...?” and these are answered with specific steps or procedures. Because training is used to support teachers’ knowledge of steps or procedures, IT managers can define the outcomes that will determine the success of the training. When participants have demonstrated a predefined level of capacity to use the technology, then the training can be deemed complete and successful.
Professional development focused on learning provides teachers with support as they understand the role of a particular technology in their teaching and instruction. As teachers learn about technology, they identify curriculum goals that are important and explore how technology tools can increase the efficiency of learning, how technology can provide more effective learning experiences, or how the technology can support a new curriculum goal (these often arise from new technological content knowledge). While students are rarely considered when planning and delivering training, teachers do consider how the technology will affect students’ experiences when they are learning about technology (see Figure 3.4.2).
Figure \(2\): Learning: teachers consider technology in light of students, with the expert playing a support role
When learning, teachers tend to ask the question “Can I...?” and it is answered with an idea about how technology can be adopted and adapted to accomplish a curricular goal. Compared to training, learning about technology is unpredictable; experts can accurately predict the end products of training, but they cannot accurately predict what teachers will learn about technology. Compared to training, the changes in learners are much more dependent on the context in which the new knowledge will be used, the beliefs and values of the learners, and the purposes of the learning. Once educators become comfortable with a particular technology, having proven the system can be operated and can serve a useful or expected function with reasonable effort, they begin to brainstorm the potential uses of the technology in the classroom, which is an important process in learning.
Learning is also an activity that can continue with no end. This is especially true when the purpose of the learning is to improve performance. Learners may end the process according to a predefined rule or through the decision to stop, but the rule is artificially set and does not represent an objective end point. In IT management, the artificial end of learning happens when the decision is made that “We understand what we want to do, now it is time to take action.”
Once teachers are sufficiently familiar with a piece of technology, learning becomes a design process in which their work becomes more relevant and focused on refining materials and plans. In design work, teachers build solutions for the problems and they produce innovations as new methods for teaching and learning are created and integrated into classrooms. Design typically becomes an iterative process as teachers create a solution, then refine it as they observe how it works in the classroom (see Figure 3.4.3).
Figure \(3\): Design: teachers create IT experiences for students
Eric von Hippel (2005), a scholar who studies technological innovations, suggested that lead users, those individuals who tend to develop new applications of technology, are most productive and contribute the greatest innovation when they are provided with a toolkit that affords:
• The ability to complete the entire trial and error process- This is particularly important for innovations in education as designs cannot be tested unless they are used with learners in the intended situation. While pilot studies can help refine educational designs, redesigns must be informed by feedback from classrooms. Frequently, new educational technology designs are tested with other teachers before being deployed with students.
• Design within the available solutions space- Solution spaces are bounded by the controllable aspects of the local IT systems and budgets. Designs, for example, that require expensive extensions to be added to the LMS will fall outside the solution space.
• User-friendly tools- When elucidating the technology acceptance model, Davis (1989) established the role of ease of use in the intention to use technology. This extends to toolkits for designing innovations; teachers are more likely to use tools they find easy to use, as ease of use is associated with more efficient work.
• Modular libraries- Modules are components that can be designed once and then reused for similar projects. A toolkit that provides this capacity will also lead to greater innovations in technology-rich teaching and learning. Mathematics teachers who use a toolkit with a graphing module can focus their efforts on using the module rather than recreating it for each course.
Whereas training is led by individuals with expertise and familiarity with the technology, learning and design tend to be led by experts in teaching. Etienne Wenger, Nancy White, and John D. Smith (2009), three scholars well-known for their work to understand communities of practice, have described a particular type of technology leader that emerges in many organizations; they call these individuals technology stewards. According to Wenger, White, and Smith, technology stewards are individuals who develop expertise in adapting technology to achieve the strategic goals of the organization. In education, we expect technology stewards to be educators who develop greater than usual expertise using technology and the ability to share their expertise with others.
Because they understand the needs and the expectations of the members of the organization, technology stewards are given greater authority in technology decisions and play an active role in evaluating tools and changes that are under consideration. In the often-used progression of technology projects, technology stewards are involved in determining the proof-of-concept, which establishes that a technology solution is possible; their advice usually determines which technology projects enter the proof-of-concept stage of planning. They are also involved in alpha testing (in which a small group of experts determines whether the systems provide necessary functionality) and in supporting beta testing (in which the first end users try the system and provide feedback), and they serve as a liaison between educators who test systems and the technology professionals who redesign systems to reflect what is learned during tests. When a system moves into production, technology stewards are active in training users, helping them learn to use the systems, and designing activities using technology. In many cases technology integration specialists fill this role, but technology stewards direct their efforts as much toward technologists, the decisions they make, and the systems they build as toward the decisions teachers make.
205: Section 5-
Educators appear to have an incomplete and inconsistent awareness of autonomy as a factor that affects learning. Blumenfeld, Kempler, and Krajcik (2006) define autonomy to include the “perception of a sense of agency, which occurs when students have the opportunity for choices and for playing a significant role in directing their own activity” (p. 477). Autonomy is implicit in many of the pedagogical strategies that are replacing the Standard Model and that are associated with 21st century skills. It is reasoned that learners who have autonomy are more motivated to study and more engaged with the curriculum than those who have little autonomy. Autonomous individuals approach situations with:
• The ability to recognize a problem, which is typically a gap between the current state and the desired state;
• Knowledge of how to resolve the problem or close that gap;
• The capacity to solve the problem or close the gap;
• The authority to implement their solution.
Despite the value of autonomy in creating classrooms that promote deeper learning, there is evidence teachers are allowed to exert little autonomy over instructional practices (Range, Pijanowski, Duncan, Scherz, & Hvidston, 2014).
A limit to autonomy in IT management in schools is that the four aspects of autonomy are controlled by different individuals. Problems or gaps related to teaching and learning must be identified by teachers; knowledge of how to resolve problems must emerge from teachers and technology experts as they design and test IT systems. The capacity to scale test systems into production systems that can be managed with the available resources rests with IT professionals, and authority to decide which solutions to implement is assigned to school leaders. Efficacious IT management has been constructed as a collaborative endeavor, thus it will lead to greater autonomy, even if that autonomy is filtered through others who ensure actions are appropriate, proper, and reasonable.
Compared to users of IT in other organizations, teachers do appear to require greater autonomy in technology decisions (Hu, Clark, & Ma, 2003; Teo, 2011): educators generally are more independent users of IT, use a greater variety of applications and data sources than information workers in other fields, and are more likely than users in other organizations to test new applications and data sources for usefulness. Autonomous educators who explore and discover effective uses of IT in their classrooms must have procedures through which their new learning can be translated into IT systems that are available and supported by the IT management team.
Autonomy is a complex variable that affects decision-making and professional activity in a variety of ways. As will be explained in Chapter 8: Understanding Change, autonomy is necessary for change to occur, but individuals who exert autonomy may also reject the vision, direction, or structure of leaders who seek to effect change.
Technology-rich teaching and learning occurs only in those schools in which IT that has the capacity to perform the task is available and functioning. In recent years, the nature of devices available to the education market has changed, so IT managers deciding what to purchase and how to distribute it in the school face more complicated options than they did previously. The factors that affect these decisions are explored in this chapter.
Computers are systems in the true sense of the word. For several decades, “computer” meant a box that rested on a desk; users controlled software that was installed on a disk inside that box and created information by means of a keyboard and a mouse that were plugged into the box. The user saw output on a video monitor and sent output to a printer; those peripherals were also attached to the box. A surprisingly small computer chip was inside the box, and the microscopic circuits on that chip are where information was processed. That processor, along with random access memory (RAM), disk drives that stored information, and all of the input and output peripherals, was attached to a circuit board (called the motherboard). The peripherals are largely what give computer systems their capacity to facilitate teaching and learning. They have expanded in recent years and now include printers, 3D printers, network cards, video cards, sound cards, and all other input and output devices (of course, with the increasing use of networks, many input and output devices have been replaced with files transferred to and from other computers via networks). The various hardware and software components installed on a computer affect what can be done with the system, and each component affects the operation of the others. Together they create a system; the whole is greater than the sum of the parts.
302: Section 2-
Technology-rich teaching and learning requires students use computing devices, so school IT managers will define a logistic goal such as “Students and teachers will have sufficient access to computing devices that are sufficient for the curriculum and learning activities.” (The redundancy of the word “sufficient” in this logistic goal is recognized. It will be demonstrated that sufficiency can take many meanings and many factors affect what constitutes sufficient IT.)
Content of the Logistic Goal
In schools, sufficiency depends on the capacity of the systems to manage and process the information necessary to complete the task assigned to the learner, the number of devices available, and the capacity of the teacher to implement the plans they have designed. Improving any one of those factors can increase sufficiency; because schools rarely have inexhaustible resources, IT managers often find they must negotiate sufficiency.
303: Section 3-
Teaching and learning require students to access and consume information, analyze and manipulate it, and create and disseminate it. Some educationally relevant information tasks, such as consuming text-based web sites (e.g. Wikipedia) and composing text (e.g. writing research papers), require little computing capacity. The rate of data creation is small, so the necessary processing power is minimal, and the output is simple enough that a low-resolution display and a minimal network connection allow the work to be completed with no impediments caused by the technology. Other information tasks central to the curriculum, such as consuming or creating video, require much greater computing capacity, as the amount of data necessary to encode video is far greater than that necessary to encode text. A device that is sufficient for a text-based activity may be insufficient for a video-based activity.
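The gap between text-based and video-based tasks can be made concrete with rough arithmetic. The figures in this sketch (words per paper, characters per word, video bitrate) are illustrative assumptions, not measurements:

```python
# Rough, illustrative comparison of the data involved in a text-based
# task versus a video-based task. All figures are assumed for the sketch.

# A research paper: ~5,000 words at ~6 characters per word, 1 byte each.
text_bytes = 5_000 * 6 * 1

# One minute of video at a modest assumed bitrate of 5 megabits/second.
video_bitrate_bps = 5_000_000             # bits per second
video_bytes = video_bitrate_bps / 8 * 60  # bytes in one minute

print(f"Text task:  {text_bytes:,} bytes")
print(f"Video task: {video_bytes:,.0f} bytes")
print(f"The video task involves {video_bytes / text_bytes:,.0f}x more data")
```

By this estimate, a single minute of video involves more than a thousand times the data of an entire research paper, which is why a device sufficient for composing text may be insufficient for video work.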
The capacity of a computer determines the nature of the information tasks that can be accomplished with it. Systems with greater capacity can process more data in a shorter time, so users can draw on more sophisticated data sources and create more sophisticated data products. When one attempts to use a computer with insufficient capacity, the computer is likely to “freeze” as it becomes unresponsive and many software features stop working. When a computer freezes repeatedly during a task, it lacks the capacity to perform the task.
Capacity is determined by several factors. In general, these factors determine the rate at which a system can access, process, and display information. Devices must be evaluated relative to a particular need, and IT managers will determine the capacity of the systems by evaluating:
• The speed at which the computer can process information- Processing speed is measured in gigahertz (GHz); a processor operating at a speed of 3 GHz can perform 3,000,000,000 operations in one second. For the first generation of IT managers in schools, the processing speed of the computers was important as it determined the performance of the machines. For most of the 21st century, IT managers have been more concerned with the number of processors installed in parallel on the systems they purchase. The increasing processing capacity of computers has been referred to as Moore’s Law, and it has continued unabated for more than 50 years.
• The amount of random access memory (RAM) available to the processor- RAM has always been important in determining the capacity of a computer. It is relatively cheap and easy to increase, so RAM upgrades are a common method of increasing the capacity of computers. For some devices and for some purposes, however, increasing the RAM will have little effect on the perceived performance of the system. For example, if a student is using a computer that has 4 GB RAM installed to access G Suite and its performance is adequate, then doubling the RAM to 8 GB is unlikely to provide any better performance.
• The efficiency of the operating system- The OS manages memory and other system resources, and the rate at which it performs these tasks affects users’ perception of the computer’s performance. Over time, updates and changes to the operating system can decrease its efficiency, and computer systems on which excessive applications or extensions to the operating system or web browsers are installed can also suffer reduced operating system efficiency.
• The sophistication of the applications- Applications are the software used to manage and create information; many applications are sold in different versions. For example, schools can install and support various levels of video editing software. IT users can select from simple video editing software (sometimes packaged with the operating system) up to the same software used by professional video editors. Professional-level software provides very sophisticated functions, but it requires hardware be upgraded frequently and it requires time and effort to use at its fullest capacity.
• The data rate at which the system can send and receive information on networks- This factor is increasingly a determinant of sufficiency. For many users, the capacity of computing devices is less about information processing and more about enabling interaction. Access to networks also expands the information capacity of our devices; we update our software through the network, and we move photographs from our devices onto network storage systems to free memory for more images (for example).
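The arithmetic behind the first factor in the list above can be sketched briefly. The clock speed, core count, and the popular two-year doubling period attributed to Moore's Law are common simplifications used for illustration, not vendor specifications:

```python
# Operations per second implied by a clock speed, using the chapter's
# simplification that a 3 GHz processor performs 3 billion operations
# per second, plus the popular (simplified) statement of Moore's Law.

clock_ghz = 3.0
single_core_ops = clock_ghz * 1e9   # operations per second, one core

cores = 4                           # modern gains come mostly from parallelism
total_ops = single_core_ops * cores

def moores_law_multiple(years, doubling_period=2):
    """Growth multiple after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(f"{total_ops:,.0f} operations/second across {cores} cores")
print(f"Capacity multiple after 10 years: {moores_law_multiple(10):.0f}x")
```

The second print statement shows why the trend matters for purchasing cycles: under the two-year doubling assumption, capacity grows 32-fold over a decade, so a device's relative capacity erodes quickly.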
These variable aspects of computing systems cannot be considered in isolation, as each contributes to the others. Consider the smartphones that many teachers and students carry into school in their pockets. These are the latest in a series of “pocket-sized” technologies that have been evolving for decades, and they have evolved together with networks. The processing speed and memory in pocket-sized devices exceed those available in desktop computers manufactured only a few years ago; the devices connect to networks that make multimedia content available, and they allow users to create and share multimedia content with little effort. These devices have evolved through a combination of manufacturer push and consumer pull; as devices made more tasks possible, the demand for the products increased and motivated manufacturers to further improve and expand the devices they sold. Perhaps the best example of this effect is the co-evolution of displays and the network capacity to access video. Better networks afforded users the capacity to receive video, and improved displays made the viewing experience acceptable, which increased the demand for networks capable of delivering video to users of mobile devices. The reality of evaluating device capacity is even more complex than presented here, as even more factors affect the capacity of some devices, and all of these factors continue to evolve. The battery technology necessary to power the devices in our pockets is one example. Improvements in batteries mean they can power devices longer than previous generations of batteries could, and they recharge more quickly. Network security is another. Engineers are developing more sophisticated methods of securing the networks we use and the data we store on them. Like other technologies, these methods are being refined through market pull and industry push, but also through reaction to threats posed by the devices themselves and by misuse of the devices.
The Internet of things (IoT) is the label given to the growing range of consumer devices that are connected to the Internet. The IoT represents a vastly extended collection of sources of input for computing systems, and it is possible because of the increasing capacity of processors, the expansion of wireless networks, and the decreasing size of circuits that has contributed to the mobility of technology.
Despite the evolution of a greater diversity of computing devices, which is likely to continue into the foreseeable future, schools are likely to be places where the original model of computing will continue to dominate technology-rich activity. Learning will find students accessing information, composing text, and creating media using general computing devices managed by the school and running software supported by the school. The fleets of devices managed by school IT professionals will be more diverse than the fleets managed by previous generations of IT managers. They will obtain, configure and install, manage and support computers with full operating systems, devices with mobile operating systems, and Internet-only notebooks.
Systems with Full Operating Systems.
Of the devices marketed to schools, those with the greatest computing capacity will arrive with a full operating system installed on the hard drive. Full operating systems include Windows, the Macintosh OS, and Linux (an open source operating system that can be used for free), which are installed on desktop and laptop computers. Of the devices on the market at any moment, these will have the most processing power (with both the fastest and the most parallel processors) and the greatest RAM, and they will support the most sophisticated applications.
Full operating systems are available in multiple versions, and publishers will maintain the versions for several years; eventually operating systems reach the end of life when the publisher no longer releases security updates. In addition to the versions of the operating systems installed on user devices, there are versions of these operating systems available for servers as well as for mobile devices. Full operating systems are designed to connect to servers and, together, the OS on the users’ device and the network OS provide the most flexibility and most control of the software environment for IT professionals. They can be configured to use network resources, allow for multiple user profiles, and support network-based management. Obviously, the price of a unit will vary depending on the specifications, but IT managers who are asked about the cost of obtaining new machines that arrive with a full operating system are likely to estimate \$1000 per unit.
These devices tend to have the greatest longevity of all of the computing devices available in schools. It is not unusual to find desktop computers still operating and providing educationally relevant functionality more than five years after they were first purchased and installed. Laptop models tend to last less than five years, as they are damaged through rough use compared to desktop models. Decreasing performance of batteries and other components also limits the functional lifespan of laptop computers. Over the life of a computer with a full operating system, users will find it is characterized by decreasing performance as operating system and application updates require more system resources. IT managers accommodate this by decreasing the number of applications installed, so the computer can continue to be used for tasks requiring the least capacity. The rationale for purchasing systems with full operating systems is typically grounded in the sophistication of the software that can be used on these devices. Students using a computer with a full operating system can use the same software that is used by professionals, so they can create sophisticated products. Further, they can use sophisticated output devices and peripherals. That software and those peripherals both add to the cost of the systems, but in many cases, that cost is necessary to provide the computing capacity necessary to meet the goals of the courses in which students are enrolled.
Consider, for example, a high school in which theatre students write and produce one-act plays. Teachers may be interested in having students record the performance on multiple cameras, then use those recordings to create a single video version of the performance that incorporates different views. Editing and rendering such a video requires sophisticated video editing software that can be used only on a computer with a full operating system. In addition, the size of the files that must be managed to produce and render such a project require the processing power and the amounts of memory that are available only on a relatively expensive computer system with a full operating system installed.
Mobile Operating System.
The two mobile operating systems that dominate the consumer and education markets are Apple’s iOS (which is installed on iPads and iPhones) and Google’s Android (which is installed on a range of tablets and phones). Microsoft makes a version of Windows available for mobile devices, and the open source community also makes versions of Linux available, but these are much less widely used than iOS and Android. Mobile operating systems do allow users to adjust settings and configurations, but these devices feature a single user profile, so the changes that are made affect everyone who uses the device; this fact limits the usefulness of mobile devices in some schools. It is not unusual for IT professionals to find school leaders becoming strong advocates for purchasing mobile devices once they realize the ease of use that characterizes them. Those school leaders are not always fully aware of the difficulty of managing devices intended for single users in a school where devices are used by many different users for many different purposes.
Among the populations that have found the greatest success using tablet computers that use mobile operating systems are those educators who work with special education students. A number of factors, including the mobility of the devices, the individualization that is possible with the apps installed on the devices, the multimedia nature of the devices, and the haptic control are all features that have been identified as useful for this particular population of students.
Devices with mobile operating systems tend to be more affordable than those with full operating systems. Depending on the size of the screen, the quality of the display, and the amount of memory, the same IT manager who estimated \$1000 per unit for desktops or laptops would probably estimate \$400 per unit that uses a mobile operating system, but he or she would be hesitant to make a final estimate before the option for managing the devices was specified. For example, some IT managers who purchase iPads decide to purchase a desktop computer and reserve it for the purpose of managing the devices through a third-party system.
A further concern for deploying devices with mobile operating systems is the capacity of the wireless network. Mobile devices are designed to function best when they are connected to the Internet. While users can take pictures, record video, create documents, and otherwise be productive on a mobile device with no network connection, there are limited options for adding software, sharing files, and otherwise using the devices when they are not connected to the Internet.
Internet-only Operating Systems
The newest type of device to enter the educational market is the Internet-only notebook. When these devices were first marketed, they had no functionality without the Internet, but later generations have added some offline functionality. Still, however, these devices are most useful in schools when they are connected to the Internet.
The dominant device used in schools that runs an Internet-only operating system is the Chromebook, which is available from many manufacturers and in several configurations, all of which use the Google Chrome OS. With this device, one logs on to the device and the Internet simultaneously using a Google account. The only application installed on the notebook is Google Chrome, the popular web browser. Productivity software (such as the word processor, spreadsheet, and presentation software) is provided through the user’s G Suite account; all other productivity tools used on a Chromebook must be available via a web service.
There are limited options for using peripherals on a Chromebook, and printing is managed through Google’s Cloud Print service. This service requires an administrator of the school’s Google domain to configure a computer as the print server; that computer then accepts and processes print jobs from any user assigned to the cloud printer. To manage a fleet of Chromebooks, an IT professional logs on to the online administrative dashboard provided by Google and selects from the options available from Google or from third-party publishers; Google has a history of providing both G Suite and Chromebook management tools at no cost to schools, but many third-party services require a paid subscription. In addition to being limited by the options provided by Google and its partners, the decision to purchase Internet-only devices for students and teachers makes a functioning wireless network absolutely necessary in a school.
Of the three types of devices marketed to school IT managers, Internet-only notebooks are the most affordable. An IT manager making a rough estimate would likely give \$300 as the price per unit. That estimate would depend, of course, on the capacity of the wireless network in the school where the devices were to be deployed. The actual cost of deploying functional Internet-only devices may depend on upgrading network capacity.
While an undergraduate student studying science education, I judged a science fair at a middle school located near the university where I studied. In my journal, I noted, “Students had printed graphs of their results and taped them on their displays.” I also recorded my conversation with the teacher, “she said they drew their graphs on paper, and then when she had approved them, they went to make them on the computer and printed them out.” I also had a sketch of one of the science classrooms which showed the location of the two desktop computers on the counter in the back of the room. In the same journal, I recorded a visit to another school a year later; during that trip, I joined a teacher with her students in the computer room. I observed, “students work in pairs on their Oregon Trail trip, taking notes on their trip;” the students then used those notes to compose a narrative of their voyage. My record of the conversation with the teacher detailed, “the Westward Expansion social studies unit is three weeks long and the students work on this during our time in the computer room.”
In those two cases from the mid-1980s, we see computers as a relatively marginalized and specialized part of the curriculum. In both cases, there was a very specific purpose for using computers, and the teachers played an active role in scheduling and controlling access to the machines. In the graphing example, the teacher appears to have deliberately slowed the process of digitizing the graphs by insisting they be approved on paper before the students accessed the computer. In the Oregon Trail example, the role of the computers in the curriculum was different, as all students were using computers at the same time, and I noted in my journal, “the teacher encouraged students to work quickly as this was the last day in the lab.”
Those cases also illustrate one of the first major transitions in how computers were made available to students in schools. When they first arrived, desktop computers were installed one or two at a time in the classrooms of teachers who wanted them. Once installed, they were used in ways the teachers directed, and students used the computers while classmates were engaged in non-computer activities. This model is illustrated in the graphing example. As computers arrived in larger numbers, and the demand for instruction about computers increased, large numbers of computers were installed in special classrooms (usually rooms that had been retrofit with additional electrical receptacles). Once computer rooms were installed, the model of technology-based teaching changed. Teachers would “take their classes to the computer room,” and all students would use computers at the same time (usually for the same purpose), and no student used them until the class returned to the computer room during their next scheduled session.
Around the turn of the century, it was reasoned that students needed experience using computers in their classrooms, where their other learning materials were located, and that such use would demonstrate computers were useful for all learning, not simply for specialized activities. “Technology integration” became the preferred model of technology-rich teaching. Technology integration was possible because of the coincident maturing of mobile computers and wireless Ethernet, so the computers that were moved into classrooms could be connected to the network. Of course, one cannot ascertain which of these was the motivating factor, but technology integration, mobile computers, and wireless Ethernet all arrived in schools at about the same time.
In the 21st century, it is common for IT managers to provide computers to teachers and students in three ways: computer rooms (both in-place and mobile), one-to-one initiatives, and bring-your-own-device initiatives. Each has implications for IT management and for technology-rich teaching and learning.
Computer Rooms and Other Common Resources
While computer rooms have largely fallen out of favor, they continue to be maintained in many schools. As more diverse computing devices have entered the educational market and Internet-only notebooks have become more popular, computer rooms have become more important for providing capacity for specialized purposes that require sophisticated software, which must be installed on devices that have full operating systems and that meet other hardware requirements.
For example, high school students working on the school newspaper may use their smartphones to capture images and draft stories using G Suite accessed via Chromebooks. When the students prepare the newspaper for print, however, they will use desktop publishing software installed on workstations in a computer room. That software allows far greater control over the layout of the printed newspaper (and the production of electronic editions) than is possible on the devices with less capacity that are used for early drafts of articles. Both are necessary for producing the final product.
While some computer rooms are filled with newer desktop computers with the greatest capacity of the various machines deployed in the school, other computer rooms are filled with the oldest machines. In schools, IT managers tend to extend the life of devices as long as possible to ensure long-term value from the purchase, so older computers are nursed along with little software installed and provide minimal, but still useful, functionality. Teachers whose students need to find information on the Internet or who need to create word processor documents, presentations, or spreadsheets may find a five-year-old desktop computer with only an office suite installed to be perfectly sufficient. Some faculty even prefer to use such systems with their students as they provide fewer distractions to students than systems with more tools installed.
One strategy that has become popular among some IT professionals in schools for keeping computers in service after the operating system is no longer supported is to install Linux. Linux is an open source operating system, so it can be installed without paying licensing fees; it tends to be updated by the community indefinitely; and it generally requires less processing capacity than commercially available operating systems, so it can stay in service longer and on older machines. A teacher who creates a valuable lesson using a particular Linux application will find it continues to be available, in an unchanged form, for as long as the computers are functional. A teacher who creates a similar lesson using a web-based application or a commercial operating system may find the site removed or the application incompatible with the operating system before he finds a suitable replacement.
Even in one-to-one environments, there are situations in which teachers must share computing resources, including computers with full operating systems, specialty printers, high-resolution projectors, and similar devices. Devices that cannot be provided in indefinite numbers must be shared among teachers. Sharing those resources and scheduling time and activities so that everyone has similar access is part of managing the technology-rich classroom. Sufficiency decisions must be made with respect to the other demands on financial and support resources, and one support system that must be maintained by IT professionals is a public system for viewing schedules and reserving time to use shared devices. It is reasonable to insist such schedules be available and that teachers use them.
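The kind of shared-device reservation system described above reduces, at its core, to a conflict check against existing bookings. The sketch below is a minimal illustration under assumed conventions (time slots as integer period numbers, a hypothetical device name); it does not describe any particular scheduling product.

```python
# Minimal sketch of a shared-device reservation check.
# Device names and period numbers are hypothetical illustrations.

def conflicts(existing, start, end):
    """Return True if the requested interval overlaps any existing booking.

    existing -- list of (start, end) tuples; intervals are half-open,
    so a booking ending at period 3 does not clash with one starting at 3.
    """
    return any(s < end and start < e for s, e in existing)

def reserve(schedule, device, start, end):
    """Record a booking for a device if the slot is free; return success."""
    bookings = schedule.setdefault(device, [])
    if conflicts(bookings, start, end):
        return False
    bookings.append((start, end))
    return True

schedule = {}
print(reserve(schedule, "laptop-cart-A", 1, 3))  # True: slot is free
print(reserve(schedule, "laptop-cart-A", 2, 4))  # False: overlaps periods 2-3
print(reserve(schedule, "laptop-cart-A", 3, 5))  # True: starts when the first ends
```

A real system would add persistence and a shared public view, but the half-open interval comparison is the core of any double-booking check.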
One-to-one Initiatives
The state of Maine, in the northeastern United States, is widely recognized as the first large jurisdiction to implement a one-to-one initiative; the state purchased Macintosh laptops for each seventh-grade student starting in 2002. The rationale behind one-to-one initiatives is that they ensure all students have access to a computer at all times and in all places in the school. One-to-one initiatives have become more widespread as Internet-only devices (which, as we have seen, cost a fraction of laptop computers) have become more popular. Beginning a one-to-one initiative does introduce several complications into IT management.
Some IT managers deploy on-site one-to-one initiatives, in which each student is assigned a device that stays at school. While this does minimize the risk of damage to devices while they are being transported and loss of devices (or power supplies) when they are off campus, it restricts technology-rich activities to the school building.
A reasonable case can be made that on-site one-to-one initiatives put disadvantaged populations at a further disadvantage. If the one-to-one device is the only computer to which a student has access, and if access to the device is needed for off-campus learning, then restricting a student’s use can limit his or her opportunity for an education. One-to-one initiatives that deploy Internet-only devices can also be criticized because of the demand they put on families to purchase Internet access and install a wireless network at home so the school’s device can be used (to its full capacity) there. Further, if an individual student does not have a device (because it is not charged, is broken, or has been taken away for violating the acceptable use policy), then the student’s ability to engage with the curriculum may be diminished.
When deploying devices one-to-one, IT managers must also take care in writing and communicating a clear acceptable use policy. This is especially important for devices that are taken home and used on networks and in settings that are not protected and managed the way a school network is.
IT managers also must plan steps to improve technology support so one-to-one devices are repaired quickly. These steps include seemingly simple, but often overlooked, steps such as providing power strips so laptops can be used even if they are not charged, and purchasing extra devices so spares can be deployed while malfunctioning devices are repaired. Many schools that deploy one-to-one initiatives will configure the devices so that files saved on the local disk drive are automatically synchronized with a cloud storage system (such as G Suite) which minimizes loss of data when computers fail. All of these support steps can increase the total cost of owning devices in ways that are not predicted when the initiative begins.
Bring Your Own Device
While a one-to-one initiative is designed to ensure that all students have consistent access to a computing device provided by the school, bring your own device (BYOD) initiatives pursue the same goal with devices that students and their families purchase and own, bring to school, and use to interact with the curriculum. These efforts are grounded in the observation that students arrive at school with smartphones and laptops, and there is even evidence suggesting parents are willing to provide devices for their children to use in school (Grunwald Associates LLC, 2013). Deploying a BYOD initiative does have important implications for both teachers and IT managers.
First, because the device is not owned by the school, IT managers can exert minimal control over what software is installed. Consider the mathematics teacher teaching in a BYOD environment. She may encourage students to use their devices to graph functions. While she may have a preferred tool, students may arrive to class with a variety of graphing tools installed on their devices, so she may face the challenge of supporting students as they use many different tools. Further, students may be less able to help each other if they are using different tools. Of course, some perceive this to be an advantage of a BYOD initiative, as students are likely to be exposed to many different tools for (in this case) graphing functions, so they are becoming more adaptive users of technology than if they are taught on a single device. This situation can also motivate professional development such as that described in “Chapter 2: Technology-Rich Teaching and Learning.”
Second is the problem of providing software. A teacher who has prepared a template for an assignment using Microsoft Word, for example, may find that students who do not have that program installed on their devices are unable to work with the template. Either the teacher must make the resources available in a form that can be opened on every device, or Word must be provided to every student and on every device. While the common use of cloud-based productivity suites (see “Chapter 5: Web Services”) is minimizing instances of this problem, it remains and can be problematic, especially if learning activities include tasks that require the advanced features of applications.
Third, the school has little control over how the device is configured, so BYOD can increase the need for malware protection and other steps to ensure the security of the network and the school’s data. In many BYOD environments, procedures are in place for ensuring devices that connect to the school’s network meet minimum security standards, and network administrators in these schools are prepared to prevent devices known to be malicious from connecting to the network.
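The screening described in this paragraph can be sketched as a simple admission check: compare a connecting device against minimum requirements and a deny list of hardware addresses previously flagged as malicious. All field names, version thresholds, and addresses below are hypothetical illustrations; production network access control is handled by dedicated systems (e.g., RADIUS or NAC appliances), not application code like this.

```python
# Hypothetical sketch of a BYOD admission check. Field names, thresholds,
# and addresses are illustrative assumptions, not drawn from a real product.

DENY_LIST = {"aa:bb:cc:dd:ee:ff"}  # MAC addresses previously flagged as malicious
MIN_OS = {"android": 10, "ios": 13, "windows": 10}  # example minimum versions

def admit(device):
    """Return True if a device may join the network under these example rules."""
    if device["mac"].lower() in DENY_LIST:
        return False                       # known-bad hardware is refused outright
    if not device.get("malware_protection"):
        return False                       # unprotected devices are refused
    required = MIN_OS.get(device["os"])
    # Unknown operating systems are refused; known ones must meet the minimum.
    return required is not None and device["os_version"] >= required

phone = {"mac": "11:22:33:44:55:66", "os": "android",
         "os_version": 12, "malware_protection": True}
print(admit(phone))  # True under these example rules
```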
Finally, the expectation of support can be problematic in BYOD situations. Technicians employed by the school cannot be expected to provide troubleshooting and repair services for the diverse collection of devices used in a BYOD school. Further, the school must assume liability for damage done when its employees provide service. The result is that BYOD initiatives may find students without devices as they await repairs by other technicians, and devices may be quarantined from the school network if they are found to be sources of malware. These can all limit students’ access to devices that may be necessary for their education.
The Reality
In the preceding sections, several models of dispersing computing devices in schools have been presented. It is unusual to find schools in which a single method is used. Especially where Internet-only devices have been purchased, computer rooms are maintained for projects necessitating greater capacity, and many educators use mobile devices for professional purposes (and encourage their students to use mobile devices for educational purposes). Consider Riverside School, a hypothetical small rural school enrolling students in grades 7-12. Riverside has a one-to-one program for students in grades 9-12; each high school student is given a laptop with the Windows operating system and a full productivity suite installed, along with a host of other tools that are used in specific content areas. About 15% of the students decide to provide their own device (usually a Macintosh laptop) rather than using the computer supplied by the school. The IT managers have purchased “take home rights” for some software titles, so students can install licenses purchased by the school on their own computers while they continue to be enrolled, and students are pointed to open source software to install for some classroom activities. In addition, there are carts with laptops shared amongst the classrooms in each wing of the school; those laptops are used primarily by students in the middle school grades who are not yet included in the one-to-one initiative.
Further, there are two computer rooms in the school. One is located in the library and is filled with older machines nearing the end of their life. These desktop models are used for accessing the Internet, including G Suite, the school’s cloud-based productivity tool. Some teachers prefer to use that space rather than the laptops in their classrooms, as the library affords more space. The other computer room is filled with 16 desktop computers newer and more powerful than those available in the library. That space is used primarily for desktop publishing, digital photography and video projects, and other specialized courses and projects. This leaves teachers with options. They can choose the system that meets their needs, and the needs of all users can be met with minimal disruption.
To this point in the book, teachers’ technological pedagogical content knowledge (TPCK) (Mishra & Koehler, 2006) has been proposed as a theoretical framework that shapes teachers’ understanding of technology and its role in the classroom. It is reasoned that teachers who have access to sufficient devices and who have developed sufficient TPCK will use technology for teaching and learning. This is not always the case, however. Baldwin and Ford (1988) suggested the transfer of lessons learned in training to action depends on (a) the training design, (b) characteristics of the individuals, and (c) the work environment. Among the most important aspects of the work environment that determine the degree to which training and new learning lead to changes in professional action are the availability of mentors and the availability of resources. Efficacious IT managers will employ professionals who serve as mentors to teachers and will also support systems whereby curriculum and instruction resources can be stored and shared.
Technology Integration Specialists
For decades, those responsible for organizing and presenting in-service professional development for educators have used a variety of models for providing learning experiences for teachers, designed to support all aspects of TPCK and to accommodate the needs of individual learners. These activities tended to reflect training in other professional organizations (especially for technological knowledge) and graduate courses (especially for pedagogical and content knowledge), so the professional learning occurred largely outside of the classroom and in the absence of students. In recent decades, professionals who are given various titles but who function as technology integration specialists have emerged as a specialty within the teaching workforce. These individuals are typically licensed educators who have received additional training (often earning advanced degrees) in educational technology. They play active roles as technology stewards (Wenger, Smith, & White, 2009) who advocate for technology solutions aligned with teachers’ needs, and they also fill the role of lead users (von Hippel, 2005) who create innovative uses of technology in the classroom and disperse those innovations.
Mentors with greater than usual expertise have been found to be a characteristic of communities and organizations in which innovations are accepted and diffused. Eric von Hippel (2005), a scholar who studies innovation in diverse organizations and fields, notes lead users “are ahead of the majority of users in their populations with respect to an important market trend, and they expect to gain relatively high benefits from a solution to the needs they have encountered there” (p. 4).
Technology integration specialists who serve the role of mentor participate in planning and delivering training, promote learning about the role of technology in learning, and support design efforts. In addition, these professionals play an active role in modeling and coaching mentees. Technology integration specialists are often found in classrooms (or computer rooms) when teachers are using technology for teaching and learning. In this role, he or she supports both the teacher and students in their activities. In some instances, these specialists will even teach classes (or co-teach), so the teacher can find a comfortable entry point into using technology.
In idealized circumstances, technology integration specialists spend most of their time supporting colleagues as they become competent and confident, so that those colleagues develop as independent users of and teachers with technology. Three common obstacles interfere with this work. First, especially in smaller schools, a technology integration specialist may have to fill this role on a part-time basis while carrying other teaching responsibilities. This can introduce scheduling conflicts that limit opportunities to work with some teachers. Second, the personal characteristics of some teachers may lead them to become dependent on the support of the technology integration specialist. Self-efficacy has been widely studied and appears to affect the intention to use technology and the transfer of that intention into practice (Abbitt, 2011; Yerdelen-Damar, Boz, & Aydın-Günbatar, 2017), and there is a tendency among those with low perceived self-efficacy to rely on support to meet minimal technology expectations for their classrooms.
Third, because technology integration specialists are among the most visible technology professionals in the school, they are often the first contact for initial troubleshooting help. While this often leads to quick repairs and can lead to opportunities for both students and teachers to receive lessons in troubleshooting, this work does direct technology integration specialists away from their primary responsibility of mentoring teachers.
A final mentoring role for technology integration specialists is to support IT professionals as they develop experience creating systems to meet the unfamiliar needs of educational populations. They advocate for teachers’ and students’ needs when IT professionals are designing and configuring IT systems, and they interpret educational users’ experience so the IT professionals understand unmet needs and systems that are perceived to be too difficult to use or ineffective.
Curriculum Repositories
Teachers’ capacity to use technology in classrooms is also improved by the easy availability of technology-based activities and lessons that are aligned with their curriculum needs. Dexter, Morgan, Jones, and Meyer (2016) observed that accessible resources (those that could be incorporated into classrooms with minimal adaptation) were associated with greater use of technologies. This led those scholars to conclude, “leaders must provide unfettered access to technologies beyond personal computers… and provide learning experience in the pedagogical strategies that support integration those technologies into teachers’ instruction” (p. 1208). Curriculum repositories (Ackerman, 2017) are systems that facilitate sharing of resources and strategies among the professionals working in a local community.
A curriculum repository is an online space, typically a course created in the learning management system provided by the school, where educators can engage with each other to find and create resources to support all types of TPCK. Training that is part of on-boarding new teachers is necessary so they are prepared to use those systems; by posting the materials used during those training sessions to the curriculum repository, IT managers can make the repository a valuable resource for educators when they first arrive.
Curriculum repositories are often modeled after existing open education resource (OER) communities. Several communities of OER developers have created web sites where visitors can search for and find documents, media, simulations, and other resources created by members. These sites are available to general users of the Internet, and membership is only lightly restricted, so they tend to be vast and rich repositories that many users find overwhelming. Curriculum repositories are modeled after OER sites in that users can upload, curate, and share resources, but the collections are more limited and participation is restricted to teachers (and others) in the local community, so the resources tend to be more closely aligned with specific curriculum expectations, and they tend to be created by individuals with similar technology available.
This chapter focuses on the need for efficacious IT managers to provide access to sufficient devices so that teaching and learning needs can be met. Sufficiency is a complex concept grounded in:
• The number of devices that are available (too few impedes sufficiency);
• The nature of the devices (too little capacity impedes sufficiency);
• The manner in which the devices are available (inflexible options limit sufficiency);
• The preparation of teachers and the support they receive (teachers who lack the competence or confidence to use technology impede sufficiency).
Each of these factors can be limited (for example by budgets or other resources), so the sufficiency of devices is often negotiated, and IT managers seek to improve access to minimize the adverse effects of these negotiations on teaching and learning.
Price versus Capacity
When making purchase decisions, IT professionals must negotiate cost and capacity. In general, devices that have greater capacity are more expensive; this can be seen by comparing the cost and capacity of devices with full operating systems (most expensive and greatest capacity) with Internet-only devices (least expensive and least capacity). There is thus an inverse relationship between per-unit cost and the number of devices that can be obtained from a given budget. Using \$1000 as the estimated price per unit for devices with full operating systems and \$400 as the estimated price per unit for Internet-only notebooks, IT managers would budget \$25,000 for a classroom full of computers, but only \$10,000 for the same number of Internet-only devices.
While the lower cost of Internet-only devices may motivate IT managers to purchase them, those devices provide limited capacity. The result is that IT managers must reconcile financial considerations with educational considerations. To avoid limiting educational options through their technology decisions while also minimizing the cost of purchasing devices with greater capacity than is necessary, IT managers can diversify the fleet of devices they manage. They can purchase a large number of inexpensive devices with minimal capacity and a small number of devices with greater capacity. This strategy makes the most devices available for the least demanding (but most frequent) information tasks (such as using word processors) while also making some devices available for the most demanding (but least frequent) information tasks.
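The diversified-fleet strategy can be made concrete with the rough per-unit estimates used in this section (\$1000 for a full-OS device, \$400 for an Internet-only notebook). The sketch below compares the cost of uniform classroom sets with an illustrative mixed fleet; the 20/5 split is an assumption for illustration, not a recommendation.

```python
# Rough fleet-budget arithmetic using this section's per-unit estimates.
UNIT_COST = {"full_os": 1000, "internet_only": 400}

def fleet_cost(fleet):
    """Total cost of a fleet given as {device_type: count}."""
    return sum(UNIT_COST[kind] * count for kind, count in fleet.items())

# A classroom set of 25 full-OS machines:
print(fleet_cost({"full_os": 25}))                      # 25000
# The same count of Internet-only devices:
print(fleet_cost({"internet_only": 25}))                # 10000
# A diversified fleet: many low-cost devices plus a few high-capacity ones
# (the 20/5 split is an illustrative assumption):
print(fleet_cost({"internet_only": 20, "full_os": 5}))  # 13000
```

For roughly half the cost of the uniform full-OS room, the mixed fleet still covers the frequent, low-demand tasks on 20 devices while keeping 5 high-capacity machines for the demanding ones.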
Further, those devices will affect decisions about the network, and may result in changes to how technology personnel do their work. In all cases, it is the instructional users who must decide the sufficiency of access. School and technology leaders who reject decisions that emerge from instructional users must take responsibility and be transparent. If the devices are “too expensive,” then the school leaders must articulate that and defend the decision. If devices are “too complicated” to install and maintain with the current level of knowledge or staffing, then that rationale must be made clear, and school leaders must support IT professionals so they can manage the systems teachers need.
There are no heuristics that can be used to determine the appropriate levels and types of technology for students; in many cases, a diverse fleet of devices affords the greatest pedagogical uses but requires the greatest expertise to manage. As a student proceeds through her day in a typical high school, she may encounter a variety of information tasks that each require a different type of device.
Capacity versus Information Task
Another common negotiation is between the available capacity and the nature of the information task in the curriculum. When the complexity of the information task exceeds the capacity of the devices, teachers may need to reconcile the two. Consider video editing, which is a task that can be completed on a range of levels. While Internet-only devices may be sufficient to access a web-based video editing system, those systems provide far less editing capacity than a full video editing application. (These limits include the length of the video that can be produced, the options for editing it, and the resolution of the final product.) Especially as students gain experience and seek to create longer and more complicated video products, the browser-based products will be insufficient.
Teachers must decide when their students and their goals have extended beyond the simple tools and full applications are necessary. This negotiation is informed by the nature of the students, the goals of the video project, and the availability of the full devices (which might be shared among many teachers).
IT managers must recognize that the information tasks teachers anticipate including in their lessons are likely to become increasingly complex over time. As teachers’ and students’ skill increases, they will expect to include greater capacity more frequently. A solution that provides low levels of complexity, but that can be accomplished with minimal capacity may prove insufficient as skill increases. Efficacious IT managers will respond to changing levels of expertise in teachers and they will also attempt to be proactive by anticipating need and encouraging teachers to participate in IT planning.
Educational Usefulness versus Device Management
In the previous sections, an oversimplified version of technology decision-making has been presented. Cost (a very important consideration for reasonable decisions) and computing capacity (also important for ensuring sufficient computing is available) have been identified as the factors relevant to purchase decisions. While cost and capacity may be the dominant factors when deciding how to provide sufficient access, other characteristics of the devices will have implications for which devices are purchased and how they are deployed.
Boot speed, which determines the length of time it takes for a user to power a device on and have it ready for use, is an important factor in many educational situations. A slow boot speed can lead to students being distracted from the learning task or frustrated that they are falling behind others. A device with a full operating system is likely to have the slowest boot speed, especially older models of desktop and laptop computers that store the operating system on a mechanical hard drive, which is slower to start than one that stores the operating system on a solid state hard drive. In most schools, devices that run a full operating system also connect to a server to authenticate users and to load permissions and other services. All of these factors can extend boot time to the point where it impedes some educational uses of the devices.
Further delaying boot time in some configurations of full operating systems is the need to install updates. If computers have been unused for an extended time (for example during a school break), then the first users may find the devices will not function until updates are installed. In some cases, a computer can be unusable for tens of minutes while updates are installed. To minimize the disruptions due to slow boot time, IT managers can purchase devices with solid state hard drives or they can purchase devices with mobile or Internet-only operating systems.
For several decades, enterprise networking has provided centralized control of user accounts. In the typical enterprise network configuration, users authenticate against a single directory, and each user is assigned to groups depending on his or her role in the organization. Access to network resources (such as file storage, printers, and applications installed on servers) is controlled by rules managed by the network operating system, which enforces those permissions on devices with full operating systems. Many of those permissions were set to control access to devices and to prevent unauthorized access to network resources. The arrival of mobile devices and Internet-only devices challenges these well-established methods of network management and security; these new devices cause IT system administrators to change their practices.
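The group-based permission model described above can be sketched in a few lines of Python; the user, group, and resource names here are invented for illustration and do not correspond to any real directory product.

```python
# Toy model of directory-based permissions: users belong to groups,
# and access to a resource is granted by group membership, not per user.
groups = {
    "teachers": {"pat"},
    "students": {"lee"},
}

# Each resource lists the groups allowed to reach it.
resource_acl = {
    "grade_reports": {"teachers"},
    "shared_files": {"teachers", "students"},
}

def can_access(user, resource):
    # A user may access a resource if any of the resource's
    # permitted groups contains that user.
    return any(user in groups[g] for g in resource_acl[resource])

print(can_access("pat", "grade_reports"))  # True
print(can_access("lee", "grade_reports"))  # False
```

Changing what a whole role can reach then means editing one group rule rather than hundreds of individual accounts, which is the appeal of centralized directories.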
As teachers develop greater technological pedagogical knowledge (TPK) of the devices they have available, it is reasonable to expect they will discover and refine more sophisticated uses of those devices and that they will seek capacity beyond that provided by existing technology. For these reasons, efficacious IT managers avoid single-device fleets. Although these can provide easier management and consistent capacity, they can result in schools maintaining unused capacity and can limit access to devices or to devices with sufficient capacity. The pedagogical implications of these decisions may be unpredictable to IT professionals, and the management implications may be unpredictable to educators.
• 5.1: Introduction
• 5.2: Logistic Goal
Efficacious IT managers will articulate a logistic goal such as “The school will create and maintain a robust and reliable network (including a wireless network) for students, faculty, and staff to access the LAN and Internet.”
• 5.3: Networking Starts Here
It is possible to “break” the network by disconnecting the wrong cable or turning off the wrong device in the wiring closet. To keep the network safe, the prudent IT professional will secure the closet and the devices it contains, but the prudent school administrator will understand how to gain access if necessary.
• 5.4: Networking Concepts and Processes
Ethernet is the dominant type of network installed in schools; it uses cables (consisting of eight strands of copper wires and shielding to prevent electromagnetic fields from interfering with the signal) that connect to ports in computers, switches, and other devices with plastic clips that close circuits throughout the network.
• 5.5: Wireless Networks
• 5.6: Network Management
04: IT Networks
Especially as Internet-only devices have gained in popularity, a robust and reliable IT network has become essential infrastructure in schools. These networks connect students and teachers to data, information, and interaction within the local community and across the Internet. Whereas educators once deferred to IT professionals in the design and deployment of IT networks, they can no longer avoid knowledge of and input into how these systems, which are vital to teaching and learning, function.
Computers were originally designed to accept input (especially mathematical information), manipulate it according to rules programmed into the device, then create output (typically on paper, video monitors, or magnetic tapes). Once the capacity for computers to send output to other computers emerged, the first networks were created. As the number of computer systems increased (and the amount of digital data increased), there was increased value in connecting them so that information could be shared between them and users could operate the machines from remote locations.
Despite being used by academic researchers and the military for decades, networked computers did not become widely used in the consumer and education markets until the mid-1990’s, when hypertext transfer protocol (the origin of the http:// that begins web addresses) was added to the Internet protocols. The World Wide Web (built using hypertext markup language, or html) was developed, which opened the Internet to vast numbers of users, and both the hardware and software for connecting desktop and laptop computers to the Internet became a standard part of almost every computer system.
Since the turn of the century, computing and networking have become almost synonymous. Many devices are of limited usefulness without a connection to a network, personal data and files are stored on web servers, and applications are increasingly accessed via web browsers. For the first generation of school IT managers, much attention was placed on obtaining computers for students to use, and only after they had large fleets of devices did they turn attention to developing robust local area networks for instructional purposes. Increasingly, local area network (LAN) resources are being replaced with services provided via web browsers (which are described in “Chapter 5: Web Services”), and access to those depends on reliable and robust networks. All of these changes, and the deep dependence on networks for teachers and students to access educational materials, make an information technology network an essential part of school infrastructure.
402: Section 2-
Efficacious IT managers will articulate a logistic goal such as “The school will create and maintain a robust and reliable network (including a wireless network) for students, faculty, and staff to access the LAN and Internet.”
Context for the Logistic Goal
The adjectives “robust” and “reliable” are used to describe IT networks. Robust describes the capacity of the network to provide a connection that delivers the information each user requested in a timely manner. A robust network will allow many users in a classroom to connect with little delay, and there will be little latency observed in the network traffic. (Latency is the term used by IT professionals to describe the delays that cause the performance of web services to suffer.) Reliability refers to the amount of time the network is available, accepting new connections, and sending and receiving authorized data packets. In general, a network that is not robust will fail when a large number of users connect, while one that is unreliable will fail intermittently regardless of how many users are connected.
For most computer users in schools, “the network is down” (because it is not reliable or not robust) is an unacceptable situation, so IT professionals seek to improve the capacity of the network to provide and maintain connections and manage network traffic. While IT professionals understand the work of building and managing reliable networks, collaborative IT management depends on educators who understand the nature of the network, as well as school leaders who understand enterprise networks well enough that they do not place unreasonable demands on the IT professionals. The intended audience of this chapter is the school leaders and teachers who are involved with efficacious IT management but who are unfamiliar with the many aspects of enterprise networks. The purpose of this chapter is to provide an overview of the hardware, software, and practices of managing networks; all of these can be upgraded to improve the performance of school networks. For IT professionals, this chapter represents the information they should expect the educators who are involved in IT management to understand. It is upon this level of understanding that educators can begin to grasp the nature and challenges of IT management.
Opening the door and peering into the wiring closet where network devices are installed can be an intimidating experience. These rooms tend to be filled with white noise (generated by fans moving air which is cooled by air conditioners that operate day and night during all seasons) and racks of switches with large tangles of cables connected to ports with blinking green (or at least you hope green) lights indicating healthy connections. Other devices found in those rooms have far fewer ports and cables, but they are the most important devices as one (the unified threat management appliance) protects the network and its data from malware (viruses) and other threats (including hackers who attempt to hijack your data for ransom or use your network to their own purposes) and another (the gateway) connects all of the devices on your network to the Internet.
It is possible to “break” the network by disconnecting the wrong cable or turning off the wrong device in the wiring closet. To keep the network safe, the prudent IT professional will secure the closet and the devices it contains, but the prudent school administrator will understand how to gain access if necessary.
Who has access to the IT network can be a contentious topic in school IT management. IT professionals know how to configure it, and they (very reasonably) want to prevent unskilled and unauthorized individuals from accessing it. School leaders can generally be considered unskilled in regard to IT network administration, so it is reasonable to limit their ability to access certain features of the network configuration. At the same time, school administrators are the individuals who are ultimately responsible for what happens in schools and who might need to take steps to prevent previously authorized individuals from accessing the network. In most situations, IT professionals and school administrators are professional and ethical (even when they disagree), but IT networks (and the data contained on them) are too valuable to be controlled by too few individuals.
As computers and networks have become vital for school management and teaching and learning, it is no longer appropriate for school leaders and teachers to avoid understanding the many services that keep the IT networks in their schools functioning for students and teachers. Everyone involved with IT management in schools must be able to differentiate local area networks from the Internet (to understand the total costs, management options and limitations, technology support); and also differentiate consumer, business, and enterprise networks (and the complexities of the management tasks that arise from large scale networks).
Local Area Networks
Local area networks (LAN) entered most educators’ experience in the mid-1990’s when the first servers to be regularly accessed by teachers and students arrived in schools. Early uses of LAN’s in schools included connecting multiple computers to a shared printer and sharing files using a folder (or directory) on a server which multiple users could access. As educators began to understand the advantages of networks, the LAN’s in buildings became connected in more sophisticated ways. In many school districts, different campuses were connected to a single LAN so teaching resources developed in one school could be used in another, computers in different buildings could be accessed from a single location, and business operations could be consistently and efficiently managed from all sites.
In textbooks that introduce computer networks, readers often find descriptions of metropolitan area networks, which are networks that extend across cities. Few network administrators use that term, and school IT managers are more likely to hear IT professionals refer to the LAN which connects users across many campuses. In rural areas, LAN’s can connect schools separated by many miles.
As Internet technologies matured and became more sophisticated, they have been used for many purposes that were once fulfilled with servers located on local area networks, but LAN’s continue to be an essential aspect of school infrastructure. The easiest way to differentiate the LAN from the Internet is to answer the question “Who has physical access to and control over the devices?” Those that an individual can physically touch in a school building are part of its LAN; otherwise the device is likely an Internet resource. Of course, actually touching a server requires access to the locked wiring closet where servers are secured; select IT professionals and school leaders should be the few who have keys to those doors. Network users also access many LAN and Internet services via web browsers or another application, so the experience of using network resources is often the same for LAN and Internet resources.
As the boundaries between the Internet and LAN services have blurred, it has become more difficult to predict which services are provided by LAN resources and which are provided by Internet resources. Consider the example of library card catalogs. The long drawers filled with index cards documenting a library’s collection were replaced with databases decades ago. (I used the drawers until I earned my undergraduate degree in 1988. When I returned to the same library two years later when enrolled in a graduate course at the university, the cabinets had all been replaced with computer terminals.) Because the databases containing library catalogs are large and they are accessed frequently, the first digital card catalogs tended to be installed on LAN servers. Requests to view records were sent through circuits to a server located quite close to the client computer from which a library patron requested the record. Technicians and LAN administrators configured and managed the hardware and software that made the card catalog available to library patrons by going to the library and unlocking the closet where the computers were running.
As we will see in the next chapter, card catalogs are now web-based services and schools pay a fee to store their card catalogs on the Internet. Librarians continue to maintain the database storing their collection, but the computers on which the information is stored are maintained by technicians at other sites (sometimes sites far removed from the school). This change has been possible, in part, because the network connections between the library and Internet are sufficiently robust and reliable that patrons get library information as quickly over the Internet as they did over the LAN previously.
Fundamental Concepts of Networking
Fundamentally, computer networks are simple systems. To build a network, one provides a pathway to move data from one node to another (through electrical signals transmitted over wires or radio signals that travel through the air), gives every node a unique address (so the network “knows” where to deliver packets), and then keeps track of it all (so the network “knows” where to direct each packet of information so it arrives at the correct address).
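Those three fundamentals — a pathway, a unique address for every node, and bookkeeping that directs each packet to its destination — can be illustrated with a toy model. This is a sketch of the idea only, not a real networking protocol.

```python
# Toy illustration: a "network" is a set of uniquely addressed nodes,
# and delivery is simply looking up the destination address.
network = {}  # address -> that node's inbox of received payloads

def connect(address):
    # Every node must have a unique address before it can receive packets.
    if address in network:
        raise ValueError(f"address {address} already in use")
    network[address] = []

def send(destination, payload):
    # The network "knows" where to deliver a packet by its address.
    network[destination].append(payload)

connect("192.168.0.10")
connect("192.168.0.11")
send("192.168.0.11", "hello")
print(network["192.168.0.11"])  # ['hello']
```

Real networks add layers of hardware and protocol on top of this, but the core bookkeeping — unique addresses and delivery by lookup — is the same.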
A consumer network can be set up for less than \$100 and has sufficient capacity to provide robust and reliable connections for (perhaps) 10 devices using it at any moment. To create a consumer network environment, one visits an electronics store or office supply store (or web site) and purchases a device that functions as the gateway between the computers and the Internet and routes traffic from the small network to the Internet; the same device assigns addresses to each node and sends packets to each node within the small network. The nature of the cable that connects the gateway to the circuits outside the building depends on the service purchased from an Internet service provider (ISP); sometimes it is a coaxial cable, sometimes an Ethernet cable, and rarely a telephone cable. Typically, one configures the following on a consumer network as well:
• Wireless access, so that mobile devices can connect to the network;
• Filtering to prevent access to certain sites or to set other rules limiting what can be accessed, when it can be accessed, and which computers can access the network;
• Firewall to deny unwanted incoming traffic access to the network.
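The kinds of rules in the list above can be sketched as a single decision function. The site names, school hours, and rule set below are invented for illustration; they do not reflect any vendor's actual configuration interface.

```python
# Hypothetical sketch of the rules a small gateway might apply to a request.
BLOCKED_SITES = {"example-games.test"}   # content filter: denied sites
SCHOOL_HOURS = range(8, 15)              # time-of-day rule: 8:00-14:59

def allow_request(site, hour, incoming_unsolicited=False):
    if incoming_unsolicited:      # firewall: deny unrequested inbound traffic
        return False
    if site in BLOCKED_SITES:     # filtering: site is on the block list
        return False
    if hour not in SCHOOL_HOURS:  # filtering: outside permitted hours
        return False
    return True

print(allow_request("wikipedia.test", 10))      # True
print(allow_request("example-games.test", 10))  # False
```

A real gateway evaluates rules like these against every packet, but the logic — a short ordered list of allow/deny tests — is the same in spirit.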
The ease with which one can set up a consumer network can lead technology-savvy consumers (including teachers and school leaders who may be involved with IT management in schools) to misunderstand the task of managing the networks necessary to provide robust and reliable network connections to the hundreds of networked computers and devices in schools. Those require IT professionals to install and manage business or enterprise networks. Consumer devices are designed to be “plug-and-play” systems, so many of the essential functions are preconfigured into the devices as default settings. As long as nothing is changed, and the number of devices is fewer than about 10, a consumer network will be reliable and robust.
Business-class networks are built using network devices with circuits that provide robust and reliable connections to several tens of users. In all but the smallest schools, enterprise networks are necessary to provide sufficient performance. Enterprise networks are very sophisticated, and the devices necessary to provide adequate performance on an enterprise network are far more expensive than consumer or business grade devices. Consider, for example, switches; these devices provide additional ports so devices can share a single connection to the network. On a home network, one might use a switch to allow three desktop computers in a home office to access the Internet through a single cable. On an enterprise network, the system administrator might use a managed switch to connect two new computer rooms full of desktops to the network. The switch (with five ports) for home would cost less than \$50, but the enterprise switch (with 48 ports) would cost around \$5000. Notice the difference in relative price: consumer ports cost about \$10 per port, while enterprise ports cost more than \$100 per port!
Ethernet is the dominant type of network installed in schools; it uses cables (consisting of eight strands of copper wires and shielding to prevent electromagnetic fields from interfering with the signal) that connect to ports in computers, switches, and other devices with plastic clips that close circuits throughout the network. Regardless of the size of the network, data rate, addressing, and routing are characteristics that determine its robustness and reliability.
Data Rate or Bandwidth
Data rate refers to the amount of information that can be transferred over a network in a given time; the term bandwidth is also used to describe data rate. Ever since networks were installed in schools, IT managers have sought to provide broadband access to schools; this term refers to connections to circuits providing the greatest available data rate.
Broadband is the term used to describe “large amounts of bandwidth,” but it is a relative term. In 2015, the Federal Communications Commission in the United States of America increased the minimum data rate for connections to be considered broadband to 25 megabits per second (Mbps). In a commonly-cited summary of Internet speeds, Akamai (2017) reported the average connection speed in the United States was 18.7 Mbps. While that is several times faster than the global average (7.2 Mbps), it is much slower than those countries with the fastest average broadband connections. These measurements of data rate are difficult to place in a meaningful context, but the simple observation “the more bandwidth, the better the performance of the network” is correct. IT professionals in schools enter into contracts with a local Internet service provider (perhaps a telephone company, perhaps a cable television provider) to access a connection to the Internet with a specified bandwidth. The bandwidth provided by the ISP depends on the nature of their circuits and the price the school budgets to pay for the service.
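One way to put figures like 25 Mbps in context is to compute how long an idealized transfer would take. Note that bandwidth is quoted in megabits per second while file sizes are usually quoted in megabytes, and there are 8 bits per byte; the file size below is an invented example.

```python
def transfer_seconds(file_megabytes, link_mbps):
    """Idealized transfer time: bandwidth is in megaBITS per second,
    file sizes in megaBYTES (8 bits per byte). Ignores overhead."""
    return file_megabytes * 8 / link_mbps

# A 100 MB video file over the 25 Mbps broadband floor vs. a 1 Gbps LAN:
print(transfer_seconds(100, 25))    # 32.0 seconds
print(transfer_seconds(100, 1000))  # 0.8 seconds
```

Real transfers are slower than this ideal because protocol overhead and shared links consume part of the rated bandwidth, but the arithmetic shows why video-heavy lessons demand so much more capacity than text-based ones.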
The bandwidth available to users at a location is limited by the bandwidth of the connection to outside circuits, and bandwidth is a zero-sum quantity. A system administrator might complain, “when those students start streaming music, they are using up all of our bandwidth.” The students are using the finite capacity of the network to transfer the information needed to listen to music, so that capacity is unavailable for other uses.
All network devices (including the network adapters installed in computers) have a data rating, which reflects the maximum amount of information that can pass through the device per unit of time. The first Ethernet networks installed in schools typically used 10Base-T devices, which had a maximum data rate of 10 Mbps (megabits per second). It is difficult to understand exactly what that means, but it is important to know those devices were replaced with 100Base-T devices around the turn of the century, and those have largely been replaced with devices rated at 1 Gbps (gigabits per second); many IT professionals have “future proofed” their networks by upgrading LAN devices to data ratings of 10 Gbps. These new network devices can move 1000 times more data in a given time than the networks first installed in schools. While an IT manager who has installed 10 Gbps devices can rest assured that the network is likely to be robust and reliable, they know they will replace these devices with greater-capacity devices in the coming years.
The total data rate of a network connection is determined by the device in the path that has the lowest data rate. Consider a school in which all of the devices were upgraded to provide a 1 Gbps data rate, except for the switch that serves the computers in one wing of the school, where a 100 Mbps switch remained. In that wing, the greatest possible data rate would be 100 Mbps. Further, if one of the teachers in that wing is using an older computer in which the network interface card (which connects a computer to the network) was rated at 10 Mbps, then that teacher’s maximum data rate would be limited by that card. Users in the wing connected through the 100 Mbps switch, and especially the teacher connecting through a 10 Mbps network interface card, would not enjoy the more reliable and robust 1 Gbps network.
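The bottleneck principle in the example above reduces to taking the minimum rating along the path; the device ratings below match the wing described in the text.

```python
# The effective data rate of a path is that of its slowest device.
def effective_rate_mbps(path_ratings):
    return min(path_ratings)

backbone = 1000   # upgraded 1 Gbps devices
old_switch = 100  # the leftover 100 Mbps switch in one wing
old_nic = 10      # the teacher's 10 Mbps network interface card

print(effective_rate_mbps([backbone, old_switch]))           # 100
print(effective_rate_mbps([backbone, old_switch, old_nic]))  # 10
```

This is why a single overlooked device can negate an otherwise expensive upgrade: the path is only as fast as its weakest link.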
Factors other than the data rate rating of devices contribute to the actual performance of the devices as well. Many network devices (such as switches) include a small computer and software that helps with sending packets of information to the correct address and other functions. Just like all computers, if those are asked to perform too many functions, they stop working; those frozen processors can reduce data rate to almost zero. The effects of an overloaded gateway can be particularly troublesome as all network traffic flows through it to the Internet. I was once asked to help troubleshoot a network with curious symptoms. Early in the day and late in the day, the network performed adequately for the few adults who were in the building and using the network. Once students arrived and began sending and receiving traffic over the network, it slowed to a crawl and the data rate was about 1% of what was provided by the ISP. The technology coordinator was convinced the problem was due to increased traffic from a computer that was infected with malware, but there was no evidence of a specific computer generating excessive traffic when the symptoms were demonstrated, and LAN traffic was unaffected. As it turned out, the network had been configured so that a single switch was responsible for managing much of the network traffic; the result was that the switch was unable to handle the traffic and, despite being rated at 1 Gbps, its effective data rate was a small fraction of the maximum. Once network management was shared among several devices, performance returned to normal.
If the maximum bandwidth available from the local ISP is inadequate for the demands of a school, then the IT professionals must purchase additional lines and divide the network traffic between them; this arrangement provides more robust and reliable availability during those times when network demand is greatest, as there is extra bandwidth to handle the extra demand. In recent years, many schools have converted to voice over Internet protocol (VOIP) telephone services. Because of the importance of robust and reliable telephone service to schools, IT professionals will work with the vendors who provide the system to ensure it is served by adequate bandwidth (most often with a connection to the Internet that is separate from the data network). Changes such as these typically require expertise beyond that of the LAN administrator employed by the school, so schools hire network engineers to ensure the reconfigured network provides the necessary operation during the switch-over.
The data rate needed by an organization is heavily affected by the nature of the data being transferred. A text document contains relatively little information, so sending a word processing document over a network consumes little bandwidth; video, on the other hand, contains great amounts of information, thus consumes much bandwidth to transfer. The network demands of a classroom full of students reading Wikipedia pages will be far less than a classroom full of students watching YouTube videos. This explains why many technology coordinators and network administrators will limit access to “broadband hogs” like YouTube by decreasing the bandwidth available to and from the site. They can configure routing and switching to minimize the bandwidth that is available to transmit packets from those sites.
Measuring data rate or connection speed is a very easy task, and it is one of the first steps in troubleshooting a network that is not performing as expected. There are a number of web sites that can be used to perform a speed test for a connection. This is accomplished by sending files of known size to and from your computer. Time stamps on the packets are used to determine the actual bandwidth of the connection from the user to the site.
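The arithmetic behind a speed test is straightforward: a file of known size divided by the elapsed time between timestamps. The file size and timestamps below are invented sample values.

```python
def measured_mbps(file_bytes, start_ts, end_ts):
    """Infer bandwidth from a transfer of known size: bytes -> bits,
    divided by elapsed seconds, expressed in megabits per second."""
    bits = file_bytes * 8
    return bits / (end_ts - start_ts) / 1_000_000

# e.g. a 5 MB test file that took 2.5 seconds to arrive:
print(measured_mbps(5_000_000, 100.0, 102.5))  # 16.0 Mbps
```

Commercial speed-test sites repeat this measurement in both directions and average several transfers, but each individual reading is computed exactly this way.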
Another step in troubleshooting unreliable and non-robust networks is for an IT system administrator to use packet analyzing software to observe the network traffic in detail. In the vernacular, this is called “network sniffing.” Using this software provides one with many details of which nodes are using bandwidth and for which purposes. Analyzing the packets originating from and passing to a computer that is generating excessive or questionable traffic can help IT administrators identify malfunctioning or malware-infected computers. Consider, for example, Internet games that transmit traffic using a specific network port; system administrators can configure the network to block those ports, thus preventing access to the Internet gaming site from the school network.
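The kind of summary a packet analyzer produces can be sketched as a simple tally of bytes per source address; the packet records and MAC addresses below are invented sample data, not captures from a real tool.

```python
from collections import Counter

# Invented sample of captured packet records: source MAC, port, and size.
packets = [
    {"src_mac": "aa:bb:cc:00:00:01", "port": 443,  "bytes": 1500},
    {"src_mac": "aa:bb:cc:00:00:02", "port": 6112, "bytes": 1500},
    {"src_mac": "aa:bb:cc:00:00:02", "port": 6112, "bytes": 1500},
]

# Tally traffic by source node to surface the "top talkers."
bytes_by_node = Counter()
for p in packets:
    bytes_by_node[p["src_mac"]] += p["bytes"]

print(bytes_by_node.most_common(1))  # the node generating the most traffic
```

Grouping the same records by port instead of by source address would reveal which services (web, gaming, streaming) consume the bandwidth, which is how administrators decide which ports to restrict.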
Addressing
Each device on a computer network must have a unique address. This is unsurprising as networks operate when information is sent from one node to another, and without a unique address, it would be impossible for packets of information to make it to the correct location. There are two types of addresses (one that never changes and one that varies each time a computer connects to a network), and understanding networking requires knowledge of each.
Every hardware device that connects to a network has a media access control (MAC) address, sometimes called the physical address of the device, which is programmed into the hardware when it is manufactured. This address never changes, and it is useful for identifying with precision a computer or device on a network; network sniffers, for example, will be able to identify the MAC address for every device sending or receiving packets.
Software on a single network device (perhaps a server or router or gateway or unified threat management appliance) has dynamic host configuration protocol (DHCP) operating, and it assigns Internet protocol (IP) addresses to devices attached to the network. The IP address is a temporary address; each time a device connects to a new network it is given an IP address for that session. On its next connection to the network, the device may be assigned the same address or it may be assigned a different one. The DHCP server is configured to have a pool of addresses that can be assigned, when the pool is exhausted, no more devices can be connected.
Into the second decade of the 21st century, Internet addresses were assigned following the rules known as IPv4 (Internet protocol version four), so every Internet node was assigned a quad-dot address which consists of four numbers separated by dots, for example 192.168.1.100. Each IPv4 address consisted of 32 bits, so the Internet could consist of up to 2^32 (almost 4.3 billion) nodes. Because that number of addresses was going to be exhausted (which would have stopped the growth of the Internet), computer systems have been configured to use IPv6 (Internet protocol version six), which uses 128 bits to identify an IP address. This expands the number of possible Internet nodes to 2^128 (about 3.4 × 10^38); an example of an IP address written in IPv6:
2001:0db8:85a3:0000:0000:8a2e:0370:7334
Almost every network is configured to allow information to be addressed to nodes using either version of the protocol, and IPv4 is still used to configure LAN’s.
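Python's standard ipaddress module parses both protocol versions, and a couple of lines confirm the address-space figures above.

```python
import ipaddress

# Parse one address in each protocol version.
v4 = ipaddress.ip_address("192.168.1.100")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")
print(v4.version, v6.version)  # 4 6

# The address-space sizes behind "4.3 billion" and "3.4 x 10^38":
print(2 ** 32)            # 4294967296
print(f"{2 ** 128:.1e}")  # 3.4e+38
```

The jump from 32 to 128 bits is not a fourfold increase in addresses but an increase by a factor of 2^96, which is why IPv6 exhaustion is not a practical concern.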
Internet protocol addresses in either IPv4 or IPv6 are strings of digits, or digits and letters, so they are difficult for humans to remember. So that World Wide Web sites can be accessed via names that are meaningful to humans, domain name servers (DNS) are connected to networks; these convert the uniform resource locator (URL) of a web site, such as http://www.google.com, into its IP address. Occasionally a computer will malfunction in a curious manner; it will appear to be connected and some network operations will continue to function, but users cannot open web sites unless they know the IP address. In this case, the technician is likely to say, “It is not resolving addresses,” and he or she will track down and resolve DNS problems on the machine.
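The name-to-address lookup that DNS performs can be seen from any Python prompt. Resolving “localhost” is answered by the local hosts file rather than a remote DNS server, so this example works even without a network connection.

```python
import socket

# The same lookup a browser triggers: a human-readable name in,
# an IP address out. "localhost" always maps to the loopback address.
print(socket.gethostbyname("localhost"))  # 127.0.0.1
```

When a technician says a machine “is not resolving addresses,” lookups like this one fail (or hang) even though traffic addressed directly to an IP address still flows.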
Gateways, the devices through which all of the computers on a school LAN connect to the Internet, have at least two network adapters. The external adapter is assigned the gateway’s IP address on the Internet, which is set by the ISP. All traffic requested from the Internet by a computer on the LAN will be sent to that external IP address. The traffic then passes to the circuits that comprise the LAN; the internal adapter of the gateway is set with a static IP address. Software on the gateway differentiates packets of information requested by a LAN IP from those incoming packets that originated elsewhere and have not been requested. IT security professionals spend much time ensuring incoming data that is not authorized is not allowed to pass into the LAN.
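The gateway behavior described above — passing only the incoming packets that a LAN device solicited — can be illustrated with a toy sketch. Real gateways do this with stateful firewall and NAT tables; the addresses and host names here are invented for illustration:

```python
# A toy model of a gateway's traffic filtering: inbound packets are
# allowed through only if a device on the LAN asked for them first.
class Gateway:
    def __init__(self):
        self.expected = set()  # (lan_ip, remote_host) pairs we requested

    def outbound(self, lan_ip, remote_host):
        # A LAN device sends a request; remember who asked for what.
        self.expected.add((lan_ip, remote_host))

    def inbound(self, lan_ip, remote_host):
        # Pass only traffic a LAN device solicited; drop the rest.
        return (lan_ip, remote_host) in self.expected

gw = Gateway()
gw.outbound("192.168.0.30", "example.com")
print(gw.inbound("192.168.0.30", "example.com"))   # True  (requested)
print(gw.inbound("192.168.0.31", "attacker.net"))  # False (unsolicited)
```

Unsolicited inbound traffic being dropped by default is exactly what an IT security professional spends time verifying and tuning.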
Each network has 255 addresses (in reality, subnets are used to increase the number of addresses, but let’s keep it simple in this example). Following the model of an IPv4 quad-dot address, one can assign 192.168.0.1 through 192.168.0.255 (the first three sets of digits are always the same on a subnet). These 255 addresses comprise the pool of addresses that can be assigned and managed by the single DHCP server on the network. A system administrator uses up to three options when configuring the DHCP server; two are used on almost every school network, and the third is used less frequently.
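Python’s `ipaddress` module can enumerate the example pool. One refinement to the simplified count above: in practice the all-zeros network address and the .255 broadcast address are not assignable to hosts, so a subnet like this yields 254 usable addresses rather than 255:

```python
import ipaddress

# The simplified example subnet from the text: the first three
# quad-dot groups are fixed, only the last varies.
subnet = ipaddress.ip_network("192.168.0.0/24")

pool = list(subnet.hosts())  # usable host addresses in this subnet
print(pool[0], pool[-1])     # 192.168.0.1 192.168.0.254
print(len(pool))             # 254 (network and broadcast addresses excluded)
```

This pool is what the DHCP server draws from, minus whatever range the administrator sets aside for static addresses.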
First, devices that are always connected to the LAN and always powered on (such as switches, wireless access points, threat management appliances, and printers) are given static IP addresses. This type of IP address is configured in the network settings on the device, and then it is removed from the pool of addresses assigned by the DHCP server. Often, the LAN administrator removes a series of IP addresses from those that can be assigned dynamically, then assigns those to devices necessitating static IP addresses. Devices are assigned static IP addresses when they are likely to receive frequent connections from throughout the network, so both humans and computers benefit from increased performance by always using the same address. In effect, a static IP address establishes a permanent IP address, so the IP address and MAC address are both unchanging (until the IP address is changed on the device by an IT administrator).
Second, devices that are connected intermittently (including desktop and laptop computers, Internet-only notebooks, and mobile devices) are assigned a new address by the DHCP server each time they connect; this is called a dynamic IP address as it frequently changes. The system administrator will typically specify the IP addresses that can be assigned, and once that pool is exhausted, no more devices can connect to the network until the system administrator configures another subnet on the LAN.
A third option, which is less commonly used, is to have the DHCP server assign an IP address that is reserved for the device each time it connects to the network; of course, this method is not really dynamic, but it differs from a static IP address in that it is assigned by the DHCP server rather than configured on the device itself. Reserved IP addresses are removed from the pool of dynamically assigned addresses as well. This method is most often used when configuring a static IP address requires unfamiliar steps. Some technical schools, for example, connect simulated electrical panels and other industrial interfaces to their networks for training purposes. The computers on the panels cannot be accessed using a familiar graphic user interface, so the static IP address is assigned by the server rather than configured into the device.
Figure \(1\): Three options for assigning IP addresses
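The three options can be sketched in a few lines of Python. This is an illustration of the logic only, not a real DHCP implementation; the MAC addresses and address ranges are invented:

```python
# A minimal sketch of the DHCP assignment options described above.
class DhcpServer:
    def __init__(self, pool, reservations=None):
        self.pool = list(pool)                  # dynamically assignable addresses
        self.reservations = reservations or {}  # MAC address -> reserved IP
        # Reserved addresses are removed from the dynamic pool.
        for ip in self.reservations.values():
            if ip in self.pool:
                self.pool.remove(ip)
        self.leases = {}                        # MAC address -> current IP

    def request(self, mac):
        if mac in self.reservations:   # option 3: DHCP reservation
            return self.reservations[mac]
        if not self.pool:              # pool exhausted: no connection
            return None
        ip = self.pool.pop(0)          # option 2: dynamic assignment
        self.leases[mac] = ip
        return ip

# Option 1 (a true static IP) is configured on the device itself and never
# touches the DHCP server; the administrator simply keeps those addresses
# (say, .1 through .20) out of the pool below.
server = DhcpServer(
    pool=[f"192.168.0.{n}" for n in range(21, 26)],
    reservations={"aa:bb:cc:dd:ee:ff": "192.168.0.50"},
)
print(server.request("aa:bb:cc:dd:ee:ff"))  # 192.168.0.50 (reserved)
print(server.request("11:22:33:44:55:66"))  # 192.168.0.21 (dynamic)
```

Note how the reserved device always receives the same address even though the server, not the device, holds the configuration — the distinction the text draws between a reservation and a static address.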
Routing
For networks to function, packets of information must make it to the correct destination. This depends on accurate addressing and also on effective routing. As the name suggests, routing is the network function in which packets are sent via a route to their destination. Routing occurs between the LAN and the Internet, and it is accomplished through a device called a router (or, more likely, this function is assigned to routing software on a network appliance that serves multiple roles).
Within the LAN, information packets are routed by devices called switches. The unmanaged consumer switch that one can purchase at the office supply store for \$50 works with its default configuration (which cannot be changed) for the small number of devices it connects. The switches on enterprise networks have sophisticated software that allows them to manage packets sent to and from many more nodes, and rules for how packets are sent and received can be configured. This software is one of the reasons enterprise switches cost about ten times as much per port as consumer switches.
The term “wireless” can be applied to two types of IT networks that are commonly accessed by students and teachers. Individuals who carry smartphones and some tablet computers into school buildings connect those devices to the network of cellular phone towers. Those connections depend on the owner having an active account with a provider and the network signal extending throughout the school. Those devices are connected directly to the Internet; traffic sent to and from them does not pass through the school’s devices, and LAN resources are (generally) unavailable from those networks. These wireless networks are beyond the control of school IT managers.
“Wireless” also describes wireless Ethernet (commonly called wifi), which is the technology used to connect mobile devices (and desktop devices with wireless adapters) to the Ethernet network in schools through radio signals rather than Ethernet cables. The phones and tablets students and teachers connect to the cell phone network can also be connected to the wifi in a school. When those connections are used (which is typically preferred), the traffic does pass through networks owned by the school. Installing a wifi network requires devices called access points to be installed and configured.
Usually access points are connected to the Ethernet network via a cable and given a static IP address, then attached to the ceiling where their health is indicated by LED lights. Inside each access point, there is an antenna that transmits and receives radio waves. To connect to the wireless signal, a computer or mobile device must have a wireless adapter installed and configured; this provides an antenna similar to that installed in the access point.
Access points are configured to broadcast a service set identifier (SSID). Typically, these are given names that are descriptive, and a security code is assigned to the SSID. Modern operating systems will notify users that SSIDs (or “wireless networks”) are available, and the user can select the SSID to which she or he wants to connect. If necessary, the user will be prompted to provide the security code, and those settings can be saved so the device connects automatically when the SSID is available. System administrators can set SSIDs so they do not broadcast to users. If a system administrator wants to create an SSID that only technicians use, then it can be hidden so that only technicians know the name of the SSID and the security code used to connect to it.
An access point can be configured to offer several SSIDs to devices within its range (typically a few tens of meters, depending on building materials and power rating), and the SSIDs can provide different capacity. A common configuration is to make three SSIDs available on a school network. An “administration” SSID is hidden from users and is used to connect the devices of system administrators, school administrators, and others who need the most secure connections. A second “teaching and learning” SSID is used to connect mobile devices owned by the school; most of the bandwidth is dedicated to this SSID, it allows users to authenticate using the servers, and LAN devices (printers, etc.) are all available when connected to it. A third SSID is available for “guests” to connect their own personal mobile devices. Typically, this SSID has limited bandwidth and does not provide access to LAN resources.
Two coincident factors motivated school IT professionals to transition from installing wired Ethernet networks in schools to installing wireless Ethernet networks. First, mobile devices became increasingly common; smartphones, tablets, and Internet-only notebooks are not manufactured with Ethernet ports, so the only option is to connect these through an SSID. Second, advances in the design of wireless devices provided sufficient bandwidth that wireless connections performed as well as wired connections, but at a fraction of the installation cost of wired networks. The result is that plans for new networks are likely to call for sufficient access points to cover the entire footprint of the school with strong wireless Ethernet signals, and these provide sufficient data rates for robust connections.
One of the challenges in establishing a wifi network in a school (or any other building) is ensuring each space is served by a single access point. If there are equally strong signals from two different access points in a classroom (for example), wireless adapters on computers in that room will connect to one, then drop the connection to connect to the other. This process can be repeated continuously and frequently (drops and reconnects that occur every minute are not unusual). During each connection cycle, the network is unusable, so the network will be very unreliable in that classroom.
All aspects of enterprise networking require quite specific expertise. Schools employ network professionals to maintain and manage the networks installed in them, and they also retain outside network professionals, including engineers and technicians, to design and install network upgrades (both hardware and software) and extensions (for example, adding wireless capacity).
Planning and Installation
An information technology network is much like other technologies in that the expertise needed to design and build it is much more specialized and expensive than the expertise needed to manage and operate it once it exists. Consider how an IT system in a school is similar to an automobile. Planning and building each requires engineers and designers who have detailed expertise and expensive tools, but they are not needed after the automobile exists. Technicians who keep them operational have lesser (but still considerable) skill and tools. Users can take some minimal steps to keep both operational.
When designing new networks or major upgrades, most technology managers in schools will contract the services of network engineers. Typically, these professionals work for companies that also sell, install, and service the devices included in the engineer’s plans; so, installations and upgrades tend to find schools entering into extended contractual relationships for service and repair work on the infrastructure. While these services are very expensive, after school leaders consider the cost of the devices and the potential liabilities of insecure networks, they recognize the value in this expense.
Network installation and upgrade projects are labor-intensive; they may cause interruptions in network availability and usually necessitate that technicians work throughout the building. To minimize the disruptions caused to teaching and learning, network projects can be scheduled during the times when the school is largely empty of students. The vendors whose engineers plan the installations and upgrades will also have large numbers of technicians available, so projects that require many hours of labor can be accomplished in short periods of time by many workers.
Engineers design and technicians build IT networks. System administrators operate and manage the networks once they are installed. Serious problems are brought to the attention of the engineers, who have more complete knowledge of the system, to identify a solution, but most functionality can be sustained by individuals who have been properly trained and who have adequate resources.
A key aspect of planning and installing a network is mapping and documenting the network. IT networks are very interesting systems. From the inside (when connected to the network on a computer that has network sniffing software installed and running), the network addresses of devices can be located with precision and very quickly, but the physical location cannot be easily determined. From the outside (when looking at the physical device), there is no way to know with certainty its network address or the purpose it serves. A good network map will identify both the network address and physical location of devices (the devices will also be labeled with appropriate information). Most network devices (switches, routers, security appliances, access points, printers, and most other devices that are given static IP addresses) include a web server installed on a small computer in the device. By pointing a web browser to the device’s IP address, system administrators can log on to a web page located on the device to monitor its operation, change its configuration, update its software, and otherwise manage its operation. This interface can be used to supplement a network map, but it does not replace network documentation.
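A network map of the kind described above can be kept as simple structured data. This sketch uses invented device names, addresses, and locations; the point is that good documentation answers both questions — where a network address lives physically, and what address a physical device uses:

```python
# A minimal network-documentation sketch. Every entry pairs a device's
# network address with its physical location, answering both the
# "inside" and "outside" questions described in the text.
devices = [
    {"name": "core-switch",  "ip": "192.168.0.2", "location": "Server room, rack 1"},
    {"name": "ap-library",   "ip": "192.168.0.7", "location": "Library ceiling, east"},
    {"name": "printer-main", "ip": "192.168.0.9", "location": "Main office"},
]

for d in devices:
    # Most managed devices host a configuration page at their IP address.
    d["admin_url"] = f"http://{d['ip']}/"

# Index by address so a technician can go from IP to physical location.
by_ip = {d["ip"]: d for d in devices}
print(by_ip["192.168.0.7"]["location"])  # Library ceiling, east
```

Even a spreadsheet maintained this way spares a school the billable hours an outside firm would spend rediscovering the network.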
Network planning, including mapping, is an important part of managing IT resources in schools, but it is often not given the attention that it needs. IT professionals are typically overworked, so they spend much time addressing technology problems that are very pressing; the work of documenting the network can be left undone. While this is seemingly a necessary approach to resolving technology problems in schools, it can lead to greater difficulties later. When outside agencies need to access the network (perhaps because the system administrator is unavailable) or when the school seeks to document network resources and budget for network replacement, a network map can save many hours of work that is billed at a far greater rate than is earned by an IT professional employed by the school.
Managing Users, Resources, and Data
Once IT infrastructure has been installed, IT professionals hired by the school adjust the configurations of devices installed by the engineers and technicians so the network is secure, robust, and reliable. They configure settings to authenticate users; give them access to servers, printers, and other devices; and adjust addressing and security functions as devices are added to and removed from the network. Often these are established before the network is installed (network planning is a vital part of upgrade and replacement projects and finds school IT professionals and network engineers meeting for many hours to devise and refine the planned installation).
Accounts are granted permissions according to the users’ role in the school and the resources each is authorized to use. The accepted network management practice is to provide individuals who are responsible for managing the school network with two types of accounts; they log on with standard user accounts when simply using the network, but then they log on with an administrator account when they need to change network settings.
In schools, most standard user accounts are assigned to groups such as “school administrators,” “teachers,” and “students.” Student groups are further grouped into organizational units such as “high school students” or “middle school students.” With users assigned to well-planned organizational units, network administrators can quickly and easily deploy changes by applying them to organizational units.
One commonly used practice for managing user accounts on the network is to avoid recording users’ passwords. If it becomes necessary for a network administrator to log on as a specific user or to restrict a user from the network, then a system administrator can change the user’s password. The user regains control over the account by using a one-time-only password from the system administrator, then resets her or his password when first logging on to the system. This step is taken to preserve the user’s privacy and to properly account for all the activity using the account. When my password has been changed by the administrator, I am locked out of my account and cannot be held responsible for changes made under my account. Once I regain control of it, then I am responsible for it.
In addition to managing users’ access to the LAN through user accounts, IT administrators can control devices that are connected to the network by adjusting the network configuration. For example, they can send operating system updates to desktop computers, install and update applications, install printers, and set other configurations from one location. Just as user accounts are placed in organizational units to facilitate management of the accounts of individuals who have similar needs, computers can be assigned to organizational units, so (for example) all of the computers in a particular computer room can be adjusted by applying changes to the OU to which the computers belong.
One often-used feature of operating systems that is used to manage network-connected devices is remote access. When this is enabled, an individual who knows the IP address (or host name) of a device can use client software to log on to a computer or server from a different location on the network. This feature allows, for example, technicians at one LAN location (perhaps even in a different building) to take control of a user’s computer to troubleshoot problems or observe symptoms. In rural schools that are separated by many miles, but that are connected via a single LAN, this can be very useful as an IT professional can take control of a computer without the need to travel to the site. This increases the efficiency of technicians and minimizes travel expenses.
A well-designed network built with devices of high quality that are properly configured will typically be reliable and robust with little input from IT managers. Of course, networks are systems, so they degrade over time. IT managers in schools spend time and other resources to slow the rate at which networks degrade. One important job in keeping systems operational and secure is updating software, including operating system software, applications, and drivers (the software that allows computers to communicate with peripherals such as printers). Sometimes these updates introduce conflicts to the system, so those must be identified and resolved as well.
Occasionally, and despite the best work of IT professionals, devices fail in sudden and very noticeable ways; this type of sudden degradation is rarer than the ongoing degradation that gradually reduces performance and ultimately leads to failure, but it does happen. System administrators will troubleshoot malfunctioning systems and repair or replace devices that have failed. A well-documented network map will facilitate the work of configuring replacements, so IT managers can restore a robust and reliable network quickly.
Managing the resources and protecting the data on a network also includes ensuring a disaster recovery plan is articulated, familiar to multiple technology and school leaders, and properly followed when (not if, but when) a disaster strikes. A fundamental aspect of disaster recovery is ensuring data and systems are backed up to servers that are off-site. Many school IT managers contract with services that specialize in backing up the information in organizations’ LANs on redundant servers.
Managing network resources also includes investigating proposed changes and upgrades to the system to ensure existing functions are preserved and that new systems are compatible with existing systems. Incompatibilities most often become apparent when operating systems reach the end of their life, so they must be replaced. Small schools and early adopters of particular technologies are populations that encounter problematic incompatibilities as well. Small schools tend to purchase student information systems, accounting software, and similar data management applications from publishers whose products are less expensive than others, but that are less likely to be updated. The effect is that these users are locked in to less than optimal systems by the expense of converting records to new systems.
Network Security
Perhaps the most important function of a school IT administrator is ensuring the network is secure. There are many potential threats to the IT infrastructure installed in a school and the data stored on it, thus network security is multidimensional and necessitates the participation of all members of the IT planning teams. In general, network security is designed to ensure only those who are authorized access systems and data (confidentiality), that the systems and data are accurate and unaltered (integrity), and that those who need access can get it (availability). These three aspects of security are somewhat contradictory; confidentiality and integrity can be ensured by limiting availability, but unfettered availability poses threats to confidentiality and integrity.
Confidentiality is especially important for school IT professionals. The Family Educational Rights and Privacy Act (FERPA) was enacted to ensure sensitive information about students and families is kept confidential. Much of the data about students and families stored on school-owned or school-controlled IT systems is covered by FERPA protections; school and technology leaders may be found liable for failing to take reasonable care in protecting this data.
When designing network security measures, IT planners and managers take steps to prevent threats from damaging the system or its data. For example, they limit the individuals who have access to administrator accounts on computers and network devices to those who are trained and authorized, they deploy unified threat management devices which scan network traffic for malware, and they block access to sites known to distribute malware. They also prevent unauthorized incoming network traffic from gaining access to the network.
Securing networks can be a particularly challenging endeavor in those schools where students, teachers, and other guests are allowed to connect their own devices to the network. This is necessary in those schools that have deployed a bring your own device (BYOD) initiative, but there are other situations in which devices not controlled by the school are added to the network. Typically, IT managers provide a “guest SSID” that provides very limited service, and others’ devices connect to that wireless service.
Network operating systems, and software added to network-connected devices, can monitor and log network traffic and other unusual events; reviewing the logs generated by this software is a regular task for IT professionals in schools. If threats are detected, the IT managers will take steps to remediate damage. This may include, for example, removing a virus-infected computer from the network, increasing the settings of threat detection, or restoring data from back-up copies.
It is even possible for IT managers to prevent particular devices from accessing the network. If a student brings a personal laptop to school, for example, and it is known to contain viruses and other threats to the network, then IT managers can use network sniffer software to identify it, then add it to a “black list” on the DHCP server; that device is then prevented from obtaining an IP address, thus switches can neither send nor receive data over the network to that computer. (Take a look again at this final paragraph. If you are a teacher or a school administrator who understands what it means, then this chapter has accomplished my goal for you.)
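The black listing described above can be sketched in a few lines. This is an illustration of the idea rather than any real DHCP server’s configuration; the MAC addresses and pool are invented:

```python
# A toy sketch of DHCP black-listing: the server refuses to lease an
# address to a device whose hardware (MAC) address has been flagged,
# so that device never obtains an IP address on the network.
BLACKLIST = {"de:ad:be:ef:00:01"}   # MAC of the known-infected laptop
POOL = ["192.168.0.101", "192.168.0.102"]

def offer_lease(mac):
    if mac.lower() in BLACKLIST:
        return None                 # no IP address, so no network access
    return POOL.pop(0) if POOL else None  # None also when pool is exhausted

print(offer_lease("DE:AD:BE:EF:00:01"))  # None -- device is blocked
print(offer_lease("aa:bb:cc:00:11:22"))  # 192.168.0.101
```

Without a lease, the blocked laptop has no address for switches to deliver packets to, which is precisely why the technique works.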
Whereas they once purchased, installed, and maintained servers in the schools where they worked, IT managers now purchase computing capacity and storage space on servers owned and operated by others. In this way, computing is similar to electricity generation. Just as we buy electricity that is produced at central locations, we purchase and use computing capacity on servers in a central location. We use the web to access those resources, and our web browsers allow us to use those resources and create and manage the information needed for teaching and learning.
The Internet is a collection of computer networks that extends across the globe and even into space as packets of information are transmitted across continents and oceans via satellites. Originally, users of the Internet selected from several protocols or methods of transferring information (for example file transfer protocol or simple mail transfer protocol) or using a computer remotely (for example telnet). The World Wide Web is built upon hypertext transfer protocol, which was added to the collection of Internet protocols early in the 1990’s; compared to the other protocols http is a late addition.
In the decades since its inception, the World Wide Web has matured in several ways, and improvements have resulted from advances in both hardware and software across all the computer systems from client through transmission to server and back to client. The computers feature more RAM, more and faster processors, and network adapters with greater bandwidth ratings. The software on the servers that host files and the web browsers on the clients used to access files have also become more efficient and capable of displaying more dynamic and more sophisticated data as well. In addition, the capacity of the circuits over which information packets are transmitted has increased; fiber optic cables are becoming increasingly common, and these provide vastly more bandwidth than the plain old telephone circuits used to move data in earlier decades.
The capacity to move vast amounts of information across the globe means efficacious IT managers can use the web to perform many data storage and processing functions that once required local systems. It also means that educators can use web-based tools for many teaching and learning activities that once required local systems. In this chapter I explain the web services that are improving teaching and learning.
502: Section 2-
Efficacious IT managers will articulate a logistic goal such as “Web services will support teaching and learning and the systems will be selected and managed to be easy-to-use and effective.” The services that are provided depend on local factors including budgets (some of the web services described in this chapter can be very expensive), the existing network capacity (web services that cannot be reached are worthless), and the needs and expectations of the communities and populations served by the school.
503: Section 3-
At its most basic level, the World Wide Web is a collection of servers; these computers are always powered on and connected to the network. Files on a web server are contained in a directory that is configured to allow outside users to read the contents, and a file is read when a visitor uses a web browser on his or her computer to load the file at a particular web address. When visiting a web page, a visitor requests the file be transferred to his or her computer, and the web browser uses the directions in that file to display it on his or her computer. While the basic organization of the World Wide Web has been unchanged since its invention, the languages used to write web pages have matured in the same way the other software and hardware that are the foundations for the web have matured.
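The arrangement described above — an always-on computer handing files from a configured directory to any browser that asks — can be demonstrated with Python’s standard-library `http.server` module. This is a minimal sketch for illustration, not a production web server; the directory and port are examples:

```python
# A minimal web server: serve the files in one directory to any
# browser that requests them, until the process is interrupted.
from http.server import HTTPServer, SimpleHTTPRequestHandler


def serve(directory=".", port=8000):
    # Bind the handler to the chosen directory (supported since Python 3.7).
    def handler(*args, **kwargs):
        return SimpleHTTPRequestHandler(*args, directory=directory, **kwargs)

    HTTPServer(("", port), handler).serve_forever()


# serve()  # uncomment to run, then visit http://localhost:8000/ in a browser
```

Every public web server elaborates on exactly this loop: listen, receive a request for a file, transfer the file, repeat.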
The first generation of web pages (sometimes called Web 1.0 by those looking back) were simple files. The pages typically displayed only text and low-resolution images. The text was formatted, and the location of the images on the pages was determined, by tags added in hypertext markup language. Hyperlinks, which connected web pages to other web pages, were also written into the file. Updating a web page included downloading the html file to a local computer and opening the local copy in a text editing program to change the contents of the page, including links and formatting tags. The edited files and additional images were then uploaded to the web server and placed in the directory that was replicated in the URL for the page. To see the changes made to edited files, one would open the address, then use the refresh function in the web browser to reload the page; the new page is transferred from the server to the local computer and built in his or her web browser.
Creating content on Web 1.0 was modeled after the print and electronic broadcasting media models that preceded it. Creating and disseminating web content required sufficient capital and expertise to obtain and manage servers and create and format content, so it tended to be controlled by relatively large and wealthy organizations and highly skilled individuals. Early advocates for the World Wide Web in education envisioned an “infinite library” where students and teachers could find vast information written and produced and curated by professional writers, artists, and editors. What has emerged, of course, is something far different.
By the turn of the century, Web 2.0 tools had emerged and were gaining popularity. These allowed users with more limited expertise and fewer resources to create content and publish to the web. Whereas the first generation of creators of content for the web were largely programmers, the Web 2.0 creators needed only to be able to use a word processor.
When comparing Web 1.0 to Web 2.0, three differences are important. First, Web 2.0 allows users or visitors to sites to create content. Whereas the first web pages contained information created by the owner, Web 2.0 sites rely on the users of the site to create the content. This content ranges from simple comments that are added to pages created by others to entire pages designed by the user. Creating content on these sites usually requires one have an account on the site and to comply with the terms of service, but those are easy to meet.
Second, the programs used to create pages on Web 2.0 are much more complex and the programs utilize information stored in databases in a way that Web 1.0 did not. These databases both manage user accounts, so owners of servers can control those who create content on their sites, and the databases are used to manage the files and resources needed to create dynamic and media-rich web sites. For example, with databases a retailer can create a page that displays different products in different colors depending on search results or selections made by the visitor. Web pages created with content stored in databases are often called “mash-ups,” and media (especially including video and other interactive elements) can be created on one site and embedded in many pages and sites around the web.
Third, web pages are created so that mouse clicks and other input from users can be used both to create content that is stored in databases and to call programs that are executed on the server or in the web browser. These aspects of Web 2.0 make the contents and the functions of Web 2.0 sites much more dynamic than Web 1.0 sites. Using all of the features of Web 2.0 requires an updated web browser; older versions of web browsers cannot be used to create and view much Web 2.0 content.
There are several different web browsers (including Microsoft’s Edge, which is replacing Internet Explorer, Firefox, Google Chrome, Apple’s Safari, and others), and these exist in many versions. The details of how web browsers interact with databases and display data depend on the exact version of the web browser, the version of the operating system installed on the client computer, and the extensions that have been installed on the client. Screen resolutions, installed fonts, and media players are all locally managed options that affect web page appearance and function as well. The result is that web pages are not consistently displayed across all systems, and the viewing experience can be very different for different users. This is further complicated by the emergence of mobile computing devices and versions of web browsers for those devices. To accommodate these many variations, most web authors are adopting HTML5, a markup language that produces web content that is more consistently displayed on different systems than previous versions. Web authors also frequently check their work on multiple systems as part of their development work.
For many people, the World Wide Web is being replaced by social media sites as the dominant method of accessing online information. Social media comprises a large collection of sites in which the owner of the site provides no content (other than system announcements and advertisements), and the content is primarily viewable by those who are connected in some manner to the individual who posted it. When one visits Facebook, for example, one sees the content produced by her or his friends or advertisements that are based on one’s viewing or clicking history. Social media has become the dominant online experience. Greenhow, Sonnevend, and Agur (2016) observed, "Social media are transforming sectors outside of education by changing patterns in personal, commercial, and cultural interaction. These changes offer a window into the future of education, with new means of knowledge production and reception, and new roles for teachers and learners" (p. 1).
Some observers have begun to use the term Web 3.0 to differentiate the more participatory web that has emerged as the web matured, but it is unclear how Web 3.0 differs from Web 2.0. What is clear, however, is that many computer applications and functions that were once performed by applications installed on the hard drive of a computer sitting on a desk or a lap are now available as web services. IT professionals choose the best service model to provide these web services; those decisions are determined by the purpose, their level of expertise, and the need to scale the services. For simple file storage, they can choose an infrastructure as a service (IaaS) model. If they need to design online applications (that create and use databases, for example), they choose a platform as a service (PaaS) model. They can also use software as a service (SaaS), in which they access software created on others’ platforms. Of course, a single service can provide multiple models. G Suite, for example, provides a productivity suite (SaaS model) but also unlimited storage (IaaS model) for educational users.
When using a web service, one points a web browser to a site, logs on, and then has applications running in his or her browser. Web services have replaced productivity applications, media creation tools, and data management tools that were once installed on computer hard drives. While web-based productivity tools tend to have fewer features than the applications installed on a computer with a full operating system, they provide sufficient services for many purposes, and the information one creates using a web service can be used to create dynamic web pages for other purposes.
School IT managers provide and maintain multiple web platforms including those for:
• Internal clients to use for teaching and learning (including cloud-based productivity suites and virtual learning environments);
• Internal clients to use for business and information management (including student information systems and document management);
• External audiences to use for interactivity (especially email; but also chat, video chat, and messaging);
• Disseminating information to external audiences (including web sites and social media).
Web services include both those created specifically for educational purposes and audiences (such as student information systems) and those intended for general audiences (for example, social media, which I include as web services) that have been adapted for educational purposes. Because school IT managers are providing web services for potentially sensitive audiences (children) and that contain potentially sensitive information (about students and other people), they take precautions to ensure the privacy and security of their users and the data for which they are responsible.
The information one is able to see, the content one is able to create, and the interactions one is allowed when using any web service depend on the permissions assigned to the user. When a web service is first configured, super administrator accounts are created; individuals who log on with those credentials exert complete control over how the services are configured for all users within the organization and can make changes that affect the operation of the software for everyone. Typically, the super administrator will create other administrator accounts to manage the regular operation of the web service and will use the super administrator account only when changing the configuration of the services.
Many school IT managers find it necessary to support web services from multiple publishers (for example, different vendors are likely to provide email, productivity suites, the learning management system, the student information system, the electronic portfolio system, the library catalog, and full-text databases for library patrons). Logging on to each system can require a different set of credentials, and users who follow the recommended security practice of having a different password for each site are likely to forget passwords, making frequent password resets necessary.
To minimize the barrier presented by multiple credentials, IT managers can select web services that allow for single-sign on (SSO). A common SSO strategy is to use Google’s application program interface (API) to connect web-services to the G Suite domain managed by the school. This allows keys to be shared between web services, so that the credentials on G Suite are used to log on to other systems. In addition to minimizing the number of credentials that users must remember, this scheme provides for centralized management of credentials as changes made to the G Suite profile are reflected in all connected SSO systems.
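The key-sharing idea behind SSO can be illustrated with a simplified sketch: an identity provider signs an assertion about the user, and a connected service verifies the signature with a shared key before trusting it. Real SSO deployments use standard protocols such as OAuth 2.0 or SAML rather than this toy scheme; the key and payload below are assumptions for illustration only:

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"demo-shared-key"  # in practice, exchanged securely out of band

def sign_assertion(user_id: str) -> str:
    """Identity provider: issue a signed assertion naming the user."""
    payload = json.dumps({"sub": user_id}).encode()
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return base64.b64encode(payload).decode() + "." + sig

def verify_assertion(token: str):
    """Connected service: accept the user only if the signature checks out."""
    encoded, sig = token.rsplit(".", 1)
    payload = base64.b64decode(encoded)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # reject tampered or forged tokens
    return json.loads(payload)["sub"]

token = sign_assertion("student42")
print(verify_assertion(token))
```

The point of the sketch is that the connected service never stores its own password for the user; it trusts the identity provider's signature instead.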
Regardless of the methods used to manage accounts on different web services, one challenge facing school IT managers is that significant parts of their populations are under 13 years of age. Laws in the United States prevent organizations that provide web services from keeping personal information about young users without explicit permission from the youngster’s parent or guardian. For this reason, school IT managers take extraordinary care in vetting the web services that will be used by students and they take extraordinary care in configuring account settings to minimize potential threats to students’ information.
Systems for Internal Clients to Use for Teaching and Learning
Internal clients are those users who belong to the organization and who are subject to the policies and procedures of the organization. In schools, this includes teachers, staff, and students, but in some situations also parents and outside consultants who need to access information about the students. School IT managers provide and support web services that facilitate efficient instruction as well as those that provide for interaction and collaboration among internal clients.
Web Services for Instruction
It has been established that some concepts and skills can be broken down into steps and procedures that are clearly and explicitly presented and practiced, and performance on them can be clearly measured. Because of these characteristics, they can easily be translated into computer programs. Further, the databases used to store Web 2.0 data can be used to record a wide range of information about students’ progress through instructional materials that are on the web.
Designing effective instructional materials can be very expensive, as it is time-intensive and requires content expertise, design expertise, and programming expertise; but deploying digital instructional materials via the web has minimal marginal cost. Once the materials exist, the cost of having additional users access them is small; IT professionals would say, “they scale well.” For these economic reasons, instructional web services used in a school are often provided by organizations external to the school. In the most frequently used model, school leaders subscribe to a service that entitles them to create and manage accounts for students and teachers. Teachers then select the content that will be available to their students, and the content is accessed during the school day and at times outside the school day. A variety of statistics regarding students’ use of the system and performance are displayed on a dashboard. In most of these systems, the content, the path and pacing, and the performance measures are all controlled by algorithms and information programmed into the system.
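The kind of dashboard statistic such a service might compute can be sketched simply; the lesson names and completion records below are invented examples:

```python
# Lessons the teacher has assigned through the instructional web service.
ASSIGNED = {"lesson1", "lesson2", "lesson3", "lesson4"}

# Completion records the service's database would accumulate per student.
completions = {
    "ana": {"lesson1", "lesson2", "lesson3"},
    "ben": {"lesson1"},
}

def completion_rate(student: str) -> float:
    """Fraction of assigned lessons the student has completed."""
    done = completions.get(student, set()) & ASSIGNED
    return len(done) / len(ASSIGNED)

for student in sorted(completions):
    print(student, f"{completion_rate(student):.0%}")
```

A real dashboard aggregates many such measures (time on task, item-level accuracy), but each reduces to queries like this over the stored records.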
Instructional web services are available for many content areas, but vendors tend to produce materials for mathematics, computer programming, test preparation, and similar well-defined and easy-to-measure content areas. Both commercial entities and non-profit organizations create such content. Khan Academy is a non-profit organization that is well-known for the instructional materials it has made available and the tools that can be used to track learners’ performance. While many have considered the role of such content in both K-12 and higher education, the role of these organizations in a system of accredited educational institutions has yet to be resolved (Gebril, 2016; Zengin, 2017). For school IT managers, providing and managing web services for instructional purposes is largely focused on working with teachers to vet the systems they identify as meeting their needs and conforming to the acceptable technology use policy of the school. They configure the systems for easy access and monitor access to ensure it conforms to the terms of service of the producer. IT professionals then ensure there is sufficient data rate (bandwidth), that the LAN is configured to provide robust access, and that web browsers are updated so instructional web services are fully functional.
Cloud Productivity
Productivity suites are collections of computer applications that are used for creating documents and information; a productivity suite will include a word processor, spreadsheet, presentation software, and other applications depending on the tools that have been developed by the publisher and the version of the suite that is being used. For most of the history of desktop computing, Microsoft’s Office (with Word, Excel, and PowerPoint among other applications) has been the most popular productivity suite, and it is widely used in both schools and businesses. Traditionally, using Microsoft Office required one to purchase a license and install the software on the hard drive of each computer on which it would be used; unless the suite was installed on the computer at hand, it was unavailable.
Creating word processing documents, presentations, and spreadsheets has been a fundamental purpose of using computers for generations of students and teachers, and these types of files continue to be essential to computer users in schools. While Microsoft Office continues to be very widely used, productivity suites that are available as web services, in which users create word processing, spreadsheet, presentation, and other documents in a web browser, are gaining a large share of educational users. Google’s G Suite, formerly called Google Apps for Education (GAFE), is by far the most dominant cloud productivity platform in the educational market; Microsoft’s OneDrive and Zoho are other examples. (Questions about the degree to which Google has monetized public schools and the data about interactions in school are recognized, but have been excluded from consideration here.) All cloud productivity suites operate under similar models. Once they log on, users will see in their web browser the tools for creating and managing files, files they have created earlier, and even files that have been created by others and shared with them.
Compared to managing local computing resources, there are several advantages to providing cloud-based productivity. For students and teachers, cloud-based productivity suites make files and software available on any computer with an Internet connection and an up-to-date web browser. Prior to this web service, files were stored locally on computer hard drives or other removable media (disks of various materials or small flash-memory circuits that plugged into USB ports). While those media were very portable, they were also localized, and without access to the media, there was no access to one’s files; forgetting a USB drive meant spending the day without access to one’s files. With cloud productivity, files can be accessed from any device with an Internet connection.
Incompatibility between home computers and school computers is another problem that has been resolved with cloud-based productivity suites. In the past, it was not unusual for files created on computers at home to be unreadable on computers at school and for files created at school to be unreadable at home. This was the result of different software and versions of software being installed on the different computers. These problems were avoidable by using universal file formats such as rich text format for word processing files, but this step was often forgotten or ignored. Now that web-based productivity is available to school users, this problem has largely disappeared, at least for productivity files. Files can be read and edited with any computer that is connected to the web and has an updated web browser installed. This interoperability is perhaps the most useful advantage of adopting cloud-based productivity tools in schools.
Because the files created on cloud-based productivity suites are stored on the web, each file has a unique web address and is associated with the account that was used to create it. Owners of these files can share them with other users’ accounts. Using this feature, students can collaborate on files, and they can share files with teachers so the teachers can edit them, comment on them, or simply view them. For example, a science teacher can share an outline for a project with students so they can view it; they can click to make their own copy, and then the members of a group can be given permission to edit the group’s copy of the outline. These files can also be embedded in other places on the web (used as mash-ups), so others can view students’ work as well. The teacher can then comment on the final version to give feedback.
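The per-file sharing model described above can be sketched as a small data structure; the role names (view, comment, edit) mirror common cloud suites, but the class below is illustrative, not any vendor's actual API:

```python
class SharedFile:
    """A cloud file owned by one account, with per-account roles for others."""

    def __init__(self, owner: str):
        self.owner = owner
        self.roles = {}  # account -> "view" | "comment" | "edit"

    def share(self, account: str, role: str):
        self.roles[account] = role

    def can_edit(self, account: str) -> bool:
        return account == self.owner or self.roles.get(account) == "edit"

    def can_view(self, account: str) -> bool:
        # Any granted role implies at least view access.
        return account == self.owner or account in self.roles

# The science-teacher example: the outline is shared read-only with one
# student and made editable for another.
outline = SharedFile(owner="teacher")
outline.share("student1", "view")
outline.share("student2", "edit")
```

Every access check reduces to a lookup in this table, which is why a student who is not in the table simply never sees the file.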
In addition to the advantages of cloud productivity suites for end users, they provide advantages for IT managers. Security, upgrades, backups, and other management tasks fall to the providers of the system. One effect of accepting others’ management decisions is that the providers of the system can push changes to users, and those can be implemented without the input or consent of the users (this is not true of all changes, and most changes are announced months in advance, but features that teachers use only occasionally may be changed before teachers can respond). Consider, for example, a teacher who prepared to teach students to write research papers using specific tools in Google Apps for Education. If the managers and engineers of GAFE decided to deprecate or remove one of those tools when upgrading to G Suite, then it will be unavailable to the teacher and her students who now use G Suite; they have no choice but to adapt to the changes made by Google.
For a variety of economic reasons, many providers of cloud productivity suites make them available at little or no cost to educational populations. Because productivity suites provide educationally relevant (and important) functionality, and because proprietary productivity suites can be very expensive, many schools realize significant cost savings when they adopt cloud productivity.
Virtual Classrooms
Content management systems (CMS) are web content creation and publishing platforms that incorporate many Web 2.0 tools into a single site; users with accounts on the CMS can add, edit, and manage information and media on those parts of the site they have permission to edit. Some content management systems have been designed specifically for managing content and interaction for educational purposes, and these are typically referred to as learning management systems (LMS). Open source LMS platforms have matured to the point where they are easily and inexpensively available and can be installed by school IT managers with modest skills and modest budgets. These tools can be used to support many aspects of teaching and learning in schools (Ackerman, in press).
By providing and supporting an LMS, IT managers in schools enable teachers to offer online sections of courses, and they enable blended or hybrid courses in which online activities supplement face-to-face lessons. With an LMS installed, teachers can engage students with a wide range of digital tools from one site. A full-service LMS will provide:
• File sharing, so teachers can make templates, word processing files, PDF copies of articles, presentation files, and other files available to students who can access the materials independently;
• Html editors, which support embedded media, so teachers and other course creators can build content pages that incorporate both the content they compose and media from other sources;
• Tests and quizzes that include items (such as multiple-choice questions) that can be graded by the system and those that must be graded by the instructor;
• Assignment drop boxes, so students can submit digital files that are time-stamped, along with grading rubrics, mark-up tools, and other options for providing students with feedback;
• Gradebooks that display both assignments and tests that are part of the LMS as well as columns for off-line work;
• Discussion boards, blogs, journals, wikis and chat rooms that facilitate both asynchronous and synchronous interaction and collaboration.
While these functions are all available on separate platforms, IT managers in schools who implement learning management systems cite several reasons they support the LMS rather than separate participatory web services as the most efficacious method of providing these services. First, IT managers are responsible for supporting the technology that is used for teaching and learning. If teachers are allowed to select the participatory sites they use with students, then IT managers must either learn multiple platforms or they must provide less than adequate support. It is unreasonable to expect technology support professionals to support many and disparate systems, and it is unreasonable to expect students to become facile users of different systems that provide the same functionality. By using a single collection of digital tools, teachers reduce the cognitive load (Sweller, Ayres, & Kalyuga, 2011) that students experience if they must learn to use multiple systems for the same purpose.
Second, using an LMS allows teachers to share grading rubrics, assignments, and other resources across all of the courses taught in a school through templates. Consider a school that offers many sections of a social studies course. Using the capacity to create a course template in an LMS, IT managers can efficiently deploy all of the resources needed by students who enroll in a social studies course. The syllabus, resources, readings, links, assignments, and other features common to all sections of the course can be deployed immediately from the template, and then teachers can customize their sites on the LMS for the sections they teach. Templates also give courses a more consistent appearance and consistent tools, which can decrease the extraneous cognitive load of using the site and interacting with course materials.
Third, by using the one LMS provided by the school, teachers have more access to support (in using the site) and to troubleshooting and custom configurations than they do when using disparate and separate Web 2.0 tools for teaching and learning. The convenience extends to students as well, as they can access materials for all courses through a single site, and the navigation strategies used in one class will be effective in all other classes.
Fourth, by using an LMS, IT managers and teachers allow data to flow between the school’s student information system and the LMS. While this frequently requires additional configuration (including programming help from the providers of the systems, similar to the level of support needed when installing network upgrades), it can allow for automated enrollment management and transfer of grade information. Administrators of an LMS also have access to users’ accounts, so they can both manage and troubleshoot accounts and assess and resolve problems. A student who forgets his or her password to a participatory web site may spend many minutes resetting it through email, but the same process on an LMS managed by the school can be completed in far less time. Managing user accounts on an LMS can also be eased by connecting other Web 2.0 accounts to the LMS. For example, using Google’s application programming interface (API) and extensions to the LMS, IT professionals can configure an LMS so that G Suite accounts are used to log on.
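Automated enrollment management amounts to comparing rosters: the SIS is the source of truth, and the integration computes the adds and drops to push to the LMS. A minimal sketch, with invented rosters:

```python
def sync_plan(sis_roster: set, lms_roster: set):
    """Return the accounts to enroll in and remove from the LMS course."""
    to_enroll = sis_roster - lms_roster  # in the SIS but not yet in the LMS
    to_remove = lms_roster - sis_roster  # still in the LMS after dropping
    return to_enroll, to_remove

sis = {"ana", "ben", "cam"}
lms = {"ben", "cam", "dia"}  # "dia" dropped the course in the SIS
to_enroll, to_remove = sync_plan(sis, lms)
print(sorted(to_enroll), sorted(to_remove))
```

Real integrations wrap this comparison in the vendors' APIs and run it on a schedule, but the underlying set difference is the whole idea.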
A final reason IT managers prefer that teachers use the LMS they support rather than web sites available to the general population is that participatory web sites are unlikely to allow sufficient control over users’ accounts to satisfy local technology policies and procedures, and the terms of use may violate school policy. Using a participatory web site requires one to accept the publisher’s terms of service (TOS), and those terms may expose students’ information to unknown or unforeseen parties. In some cases, teachers’ use of the participatory web may actually violate the TOS, especially if they are directing students to use the “freemium” version of commercial sites.
Freemium sites allow limited use, generally for personal purposes, and users of the free version see advertisements embedded in the pages they use. Those advertisements may include products inappropriate for students, and requiring students to view advertisements as part of their school work may violate teachers’ ethics and school policy.
Once IT managers decide to provide an LMS, they have several decisions to make. First, they must decide on the LMS platform to obtain. There are several options, including those from proprietary publishers as well as those developed and supported by open source communities. The functions available are largely the same on each, and how they function depends on the exact version that is installed as well as the features that are enabled and the third-party extensions that are installed.
Second, IT managers must decide on a server on which the LMS will be installed. Typically, they choose to a) purchase space on a server provided by a company that specializes in hosting the LMS, b) install the LMS on a LAN server, or c) install the LMS on a web hosting service. Each choice has advantages and disadvantages, including cost, responsibility for backing up files and configuring access, and the flexibility of configurations. IT managers can purchase complete LMS and server management, but the cost can be unreasonable for many schools; managing an LMS and its server in-house can become a full-time job, however, so those costs must be weighed as well. For IT managers, providing and managing an LMS requires negotiation to ensure it is an efficacious part of the educational technology in the school.
In many organizations and businesses, employees and members use portals that resemble an LMS for many purposes. Employees maintain institutional profiles (which allow them to be paid) and they access work schedules, and receive both organizational training and professional training through online portals. Higher education is increasingly adopting online and hybrid courses, as well. Because portals and online learning are ubiquitous outside of school, many K-12 educators believe experience with an LMS is an essential aspect of middle and high school curriculum to prepare students for the digital landscape of work and school after graduation.
Electronic Portfolios
Whereas the effects of instruction are generally understood to be determined by measuring learners’ ability to answer questions in a testing situation after the instruction has concluded, the outcomes of authentic learning (Herrington, Reeves, & Oliver, 2014) are generally understood to be demonstrated in products and performances. Artifacts of those products and performances (along with learners’ reflections on the importance and meaning of the artifacts) are collected in portfolios. A range of web services can be adopted and adapted for creating electronic portfolios.
As with all web services, IT managers collaborate with educators to make decisions about the web services to be supported for students to make electronic portfolios. Among the important decisions that determine which technologies meet the need are the nature of the artifacts that will document the work (for example, audio and video files necessitate different capacity than simple images), the physical and virtual location of the files to be included, and the intended audience for the portfolios. The nature of the students is a further consideration; the needs of high school students preparing their first professional portfolios are far more sophisticated than the needs of elementary students documenting their first project-based learning activities.
In some instances, IT managers will recommend using existing web services as a platform for electronic portfolios; the web site tool in G Suite is a popular choice. Others choose tools that are specifically designed for creating and managing electronic portfolios; Mahara (n.d.) is an example of an open source package that is used to create web-based electronic portfolios. For those students who are graduating and who are adults, IT managers sometimes recommend social networking sites as the appropriate platform for electronic portfolios as students can maintain them once they leave the school and there are already active networks of professionals on those sites that students can join.
Regardless of the web service used for electronic portfolios, they serve several purposes in schools. Eynon, Gambino, and Török (2014) compared the performance of students enrolled in courses that included electronic portfolios with students in courses that did not use that tool. They found evidence that creating an electronic portfolio was positively associated with other indicators of student success in college, including course pass rates, grade point average, and retention rate. They concluded these effects are grounded in the greater levels of reflection and metacognition that a portfolio-based program requires. Further, they concluded that electronic portfolios become a valuable source of information to educators and school leaders as they make decisions regarding programmatic and curriculum changes and improvements.
Web Services for Libraries
As the World Wide Web has matured, many tools have been adopted by and adapted for library services. As the web has become the “place” where patrons access digital research and reference materials, librarians have embraced digital tools to expand and extend their reach, so—despite the temptation to rely on Google for all of our information needs—libraries continue to play an essential role in schools.
In the late 20th century, card catalogues in school libraries first became digital, with web interfaces that pointed to databases stored on servers connected to the LAN and located in the school. Today, the databases containing catalogs of collections (which can’t really be called card catalogs since the cards have been gone for decades) have been uploaded to the web, and patrons point web browsers to pages where they can browse, search, and check the availability of collections.
The nature of the periodicals available to library patrons has changed because of web services as well. School librarians purchase subscriptions that allow patrons to access full-text databases of periodicals. The list of titles that are available depends on the subscription purchased by the librarian, and the subscription may also have some other limits, but in general, library patrons can access effectively infinite collections of peer-reviewed, professional, and popular periodicals from any computer in the world.
Perhaps the most useful tool available to those who use full-text databases, at least from the researcher’s point of view, is the automated bibliographic tool. Once a valuable resource has been identified and read (for those of us over a certain age, this reading is from a paper copy we have printed and that contains our own hand-written notes), a click of a mouse button can display the full reference in the researcher’s choice of several popular style guides. Compiling a reference list requires one to copy (or export) the citation generated by a web service, paste (or import) it into a word processor, then check that the format is correct. All aspects of library management and library-based research have been transformed by web services, so librarians are among the most active collaborators with IT managers in schools.
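The work of such a bibliographic tool can be sketched as a formatting function over the fields a full-text database stores for each article. The article below is hypothetical, and real services support many styles and edge cases beyond this simple journal-article sketch:

```python
def apa_reference(authors, year, title, journal, volume, pages):
    """Format one journal-article reference, APA-style (simplified)."""
    return f"{authors} ({year}). {title}. {journal}, {volume}, {pages}."

# A hypothetical article, standing in for a record from a full-text database.
ref = apa_reference(
    authors="Doe, J., & Roe, P.",
    year=2017,
    title="Example study of school web services",
    journal="Journal of Examples",
    volume=12,
    pages="34-56",
)
print(ref)
```

Because the database stores the fields separately, the same record can be re-rendered in any style guide on demand, which is exactly what the one-click citation feature does.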
Schools are information-rich places; teaching and learning necessitates that individuals and groups have access to, manipulate, and create information, but running the organization also necessitates that administrative and human resource information needs be met. While many of these functions fall outside the focus of this book on teaching and learning, there are several web services that are ancillary to teaching and learning but that have important implications for teachers.
Student Information Systems
There are a wide range of records kept about students in schools. The list includes demographics (to confirm residency and accurate knowledge of legal guardianship), health records, disciplinary and attendance records, and academic records. These records must be accurate, as they are reported to governmental agencies and other schools and organizations, and they may be used as evidence in legal actions. While some student records are still maintained as paper records, most new student records are maintained on student information systems (SIS) provided as a web service.
Compared to traditional paper-only student information systems, one of the greatest advantages of web-based SIS is the ability of IT managers to query the data contained in the SIS to answer questions and generate reports regarding students’ performance or other aspects of students’ experiences at school. This has resulted in the emergence of the data analyst as a specialized role within the collection of IT professionals employed in schools. These individuals manage the data in the SIS and write programs to create necessary reports, so the information about schools is more available to leaders and governing bodies than it was when student records were paper.
A web-based SIS can also make information more available to parents and guardians than paper records. When fully configured and deployed, a web-based SIS will allow authorized individuals to create accounts that are given access to view students’ information, including grades in individual courses. The online gradebook can be a technology that leads to controversy in communities. While it can be a source of information about performance, many teachers see this as an inappropriate intrusion into the learning community they create within their classrooms.
In schools where both an SIS and an LMS are used, teachers and IT managers typically face a decision about which gradebook to use—either the one in the LMS or the one that is part of the SIS. While parent or observer accounts can be created in LMSs to allow others to see a user’s grades, this capacity usually requires additional configuration of the LMS beyond the base installation. Student information systems marketed to K-12 schools have parent or observer accounts as part of the core installation. In some cases, the LMS and the SIS can be connected so that grades entered in the LMS populate the SIS. A particular concern when configuring an SIS is the Family Educational Rights and Privacy Act (FERPA), which protects students’ privacy. When configuring an SIS that is a web service, school IT managers must take steps to ensure that systems are protected from internal and external threats that would expose student data to unauthorized audiences.
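The LMS-to-SIS grade connection mentioned above can be sketched as a one-way sync keyed on a shared student identifier. The identifiers, field names, and data below are hypothetical, not from any particular product; real systems rely on vendor APIs or interoperability standards such as OneRoster rather than this simplified structure.

```python
# Hypothetical sketch of a one-way grade sync from an LMS gradebook
# into an SIS, matching students on a shared identifier.
lms_grades = {"stu-1001": 92, "stu-1002": 85}

sis_records = {
    "stu-1001": {"name": "R. Patel", "grade": None},
    "stu-1002": {"name": "J. Kim", "grade": None},
    "stu-1003": {"name": "L. Ortiz", "grade": None},  # not in this course
}

def sync_grades(lms, sis):
    """Copy LMS grades into matching SIS records; never write blindly."""
    synced, skipped = 0, 0
    for student_id, grade in lms.items():
        if student_id in sis:
            sis[student_id]["grade"] = grade
            synced += 1
        else:
            skipped += 1  # unmatched IDs should be flagged for human review
    return synced, skipped

print(sync_grades(lms_grades, sis_records))  # → (2, 0)
```

A design choice worth noting: the sync is deliberately one-way (LMS to SIS), keeping the SIS as the system of record reported to governmental agencies while the LMS remains the teachers’ working gradebook.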
Document and Business Management
Schools are social organizations, thus places in which policies and procedures require that information be shared so that tasks can be assigned and resources managed in an equitable and efficient manner. Teachers and others are involved in recommending budget and other decisions, and they request that purchases be made and events be scheduled. All of these depend on web-based services that are reliably available and that provide the necessary information in an effective and easy-to-use manner. While these tools fall outside of the collection of tools that directly affect teaching and learning, educators who can manage these aspects of their professional work have more time for their most important work.
Two information management tools for internal audiences that are not specific to teaching and learning, but that have important implications for the technology-using experiences of teachers and their students, are the system for scheduling shared resources and the system for requesting technology support. Both of these are detailed in “Chapter 6: Technology Support Services.”
The “I” in IT ostensibly refers to “information,” so IT professionals have expertise in operating and managing information technologies. The roots of the Internet as a venue for interaction among dispersed researchers are well-known. This history, along with the emerging dominance of social media and other messaging tools, suggests the “I” in IT could easily refer to interaction, so IT professionals have expertise in operating and managing interaction technology. While “interaction technology” is a term that few would find meaningful, IT managers do recognize that the systems they create and operate are widely used so that faculty, students, and staff can communicate with others within the school and with individuals outside the school. These tools are used by those within the school to initiate contact as well as by outsiders to initiate contact with school employees.
The first tool for technology-mediated interaction to gain widespread use among educators was electronic mail (email), and the number of messages (along with attachments) sent between accounts is astounding. Most messages that arrive at one’s inbox in a given day are SPAM (unwanted email). The work of separating the important messages from the noise of the SPAM has led many technology-savvy individuals to adopt other methods of interaction for important messages. Most colleagues know the best way to contact me in a way that will get a quick response is to send me a text message, others know to contact me via Facebook Messenger, and still others via Twitter or LinkedIn. Despite the decreasing importance of email for professional communication, it is expected that email will continue to be an essential method of interaction for both internal and external communication.
An email inbox points to a location on the Internet, and like all locations on networks, it is unique. Information is sent to this location, then read (or ignored) by the person who has been given permission to see the messages (and send similar messages to other inboxes). Digital records, including those central to web services, are stored in databases, and a requirement of every database is that each record contain a unique identifier. Because they are all different, email addresses serve as unique identifiers in the databases of users for participatory web sites. Also, email addresses can be used to manage identities, so passwords for participatory web sites can be reset through an email account. For these reasons, email will continue to be a vital, if diminished, method for technology-mediated interaction within school populations. In addition, email accounts are likely to be assigned to individuals regardless of the degree to which they are used to send and receive messages.
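Because each address is unique, it can serve as the key of a user database and as the channel for identity recovery. The following minimal sketch, with hypothetical class and field names, illustrates both roles: registration refuses duplicate addresses, and a password reset token is tied to the inbox that receives it.

```python
import secrets

class UserDirectory:
    """Sketch of a participatory site's user store keyed by email address."""

    def __init__(self):
        self.users = {}         # email address -> profile record
        self.reset_tokens = {}  # one-time token -> email address

    def register(self, email, display_name):
        if email in self.users:
            # the address is the unique identifier, so duplicates are refused
            raise ValueError("email address already registered")
        self.users[email] = {"name": display_name}

    def request_password_reset(self, email):
        # The token would be mailed to the inbox; whoever controls the
        # inbox can prove the identity and set a new password.
        token = secrets.token_urlsafe(16)
        self.reset_tokens[token] = email
        return token

directory = UserDirectory()
directory.register("teacher@school.example", "A. Teacher")
token = directory.request_password_reset("teacher@school.example")
print(directory.reset_tokens[token])  # → teacher@school.example
```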
Because email has not been completely replaced as a tool for digital communication, and because many adults choose to separate their professional and personal communication (in some cases they seek to separate multiple professional and personal identities) school IT managers provide email accounts to teachers, staff, and some students. They also tend to articulate expectations regarding how responsive teachers will be to messages received via email. Many parents, vendors, community members, and others expect educators to have access to email accounts and they expect to be able to communicate with educators through email. There are other implications of making email accounts available to members of their communities that school IT managers must recognize and plan to address.
First, school IT managers must decide how to make email accounts available to the public. While it may seem reasonable to publicize email addresses of faculty and staff who are public employees, there are software bots that troll the web searching for that “@” and “.” within a word that characterize email addresses. Once these bots find email addresses, they become the target of SPAM and other threats. While this may seem innocuous, the additional messages can place excessive demand on both IT infrastructure and on professionals’ time as they seek to manage these messages. Minimizing these demands is particularly important once one recognizes that SPAM is a significant point of entry for viruses and other malware into an organization’s LAN.
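The harvesting described above requires only a very small program. The pattern below is a simplification of what such a bot might use (real email address syntax is far more permissive), and the addresses are fictitious; the obfuscated form at the end shows why some sites publish addresses as “name [at] domain.”

```python
import re

# A simplified version of the pattern a harvesting bot might use:
# a run of word characters around an "@" followed by a dotted domain.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

page = ("Contact jdoe@school.example or call the main office. "
        "Obfuscated: jdoe [at] school [dot] example")

print(EMAIL_PATTERN.findall(page))  # → ['jdoe@school.example']
```

Only the plain address is found; the obfuscated one escapes this naive scan, which is one low-effort way to publish staff contact information while reducing the SPAM that follows harvesting.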
Second, school populations include those who are under 13 years of age, and their personal information is protected under the Children’s Internet Protection Act and the Children’s Online Privacy Protection Act, each of which identifies actions IT managers and school leaders in the United States must take to protect this information. Regardless of the laws in any jurisdiction, most school IT managers feel a professional responsibility to protect all students from threats through email. At the same time, educators have a responsibility to give students who are young adults experience using email and managing interaction. Further, some students may have no access to an email account outside of school, and this is an important tool for the transition to post-secondary education or to work.
Third, electronic communications can become evidence in legal proceedings; as a result, school IT managers are expected to archive email and other electronic communications. The length of time such records are maintained depends on the local policy and procedures, but five years is generally regarded as the length of time email and other electronic communications are archived. Such records are kept for the protection of both the sender of messages as well as the recipient of messages. Some school leaders are taking steps to ensure those who contact individuals through school email addresses or other electronic means understand the messages are archived and may be used in legal proceedings.
While email will continue to be a part of school IT used to facilitate interaction within internal audiences and between internal and external audiences, the asynchronous nature of these messages interferes with some communication. Chat and video chat are methods of synchronous communication in which information is shared over the web either between individuals or among small groups. These tools complement email, and they have common uses in education. Chat is widely used in technology support and sales by vendors. IT technicians who are trying to resolve problems or who are communicating with manufacturers’ IT support are likely to be logged on to chat rooms and interacting via typed messages with company representatives. Video chat is a real-time full video link between locations. This is a bandwidth-rich form of interaction, so it tends to be used for very specific activities in which full video is useful.
For many reasons, school IT managers have historically sought to minimize access to chat and video chat. As a result, it was common in the past to find those protocols blocked on firewalls and unified threat management appliances protecting school networks. School IT managers can improve interaction with external populations and make these tools available to students and teachers by accommodating requests for chat and video chat in those circumstances in which it is appropriate. They should also be prepared to provide both the endpoint capacity and the network resources necessary for these tools to be robust and reliable in schools.
Disseminating Information
The World Wide Web was originally designed to make it easy for users to access information. For the first decades of the history of the World Wide Web, it was only marginally used by public institutions (including schools) to disseminate information. As the web has matured, it has become available to much larger portions of the population and it is accessed through more types of devices, so there is growing expectation that schools and educators will have an active web presence. Educators make information available on the web so that it can be accessed using diverse devices. As a result, IT managers are supporting educators who disseminate digital information through a variety of platforms. This information includes both policy and procedure announcements and other seemingly mundane (but very important) information such as the school lunch menu and details of students’ performances.
In addition to being a space for members of the community to learn about their activities, a school web site is often the first place members of the general public go to find out about a school community. Many candidates for job openings will visit school web sites to learn about the school and get a sense of the values and beliefs of the community, and these become the focus of candidates’ questions to interview committees. Real estate agents visit school web sites to find information for their clients as well. In secondary schools, guidance departments share details regarding college selection and application for students and their families on the web.
Mobile devices, especially smartphones, are becoming the dominant tool for accessing the World Wide Web for many users. Mobile devices differ from computers and laptops in two important ways. First, the web browsers installed on mobile devices have less capacity than the web browsers on computers with full operating systems. Second, the screens on mobile devices are smaller than the screens on desktop computers and laptops. These can affect the way information is displayed on these devices. A growing expectation is that a school web site will be “mobile-friendly,” so school IT managers are adopting strategies to build, test, and maintain these sites. In many cases, the CMS used to create the site can be configured to vary how information is displayed when visitors use a mobile web browser.
Increasingly, schools are supplementing their web presence with a social media presence. Social media are the tools on the participatory web that make it extremely easy to publish information, and that information is pushed to specific audiences as well as being available for the general Internet user. Concerns over bullying, distraction, and other problematic uses led many school IT managers to block access to social media sites on school networks. This is proving less effective in minimizing use of social media during school hours than it was previously, as students and teachers access social media sites over mobile wireless networks on their cell phones, thus bypassing the school network.
Managing an active social media presence does require that information be posted to multiple platforms. Of course, there are differences in the nature of the information posted on the various social media platforms. By using the embed feature or by creating widgets, social media masters for schools can ensure media posted on a social media platform are available in other online spaces as well. Many social media sites allow users’ content to be embedded in html pages, and those pages are updated as social media content is created. Some of the social media with educational applications are described below:
• Facebook- This social media platform boasts billions of users. (Of course, it is easy to count the number of accounts on Facebook, but it is not possible to reliably know how many people are users of Facebook.) Users of Facebook post messages (in text, audio, and video) to their space (the name given to the wall or timeline has changed over Facebook’s history); posts are available for “friends” to see, and friends may reply or repost, or they may tag other users so they see it. One can even stream live video for friends to view. One’s profile can be made public or private, and groups can be created so that all members use Facebook to communicate.
• Twitter- This microblogging platform was originally based around 140-character text posts which were seen by followers. Twitter posts can now include more characters and media; hashtags are also used to contribute to wider discussions and to help curate conversations and posts. Two popular uses of Twitter in schools are to update followers on sporting events and to embed Twitter feeds in web pages so that announcements are both sent to followers and posted to the web immediately. Both of these are examples of “live Tweeting,” in which information is posted to the social media platform to make it immediately available to others.
• Instagram- This social media platform is designed to allow users to post pictures. It was acquired by Facebook in 2012, and continues to be widely used.
• Periscope- This is a social media site that is used to live stream video; followers can see what the user’s camera is pointed at in real-time (or with a delay of a few seconds). A Periscope feed can also be embedded in a web site, so live streaming can be viewed by any visitors to a web site.
• YouTube- This well-known video site is a social media site that has been part of Google since 2006. Users can upload video (that can be public, unlisted, or private), they can subscribe to others’ channels, and they can post comments that appear on the page where a video is displayed.
• Pinterest- When using this social media site, users “pin” stories, sites, images, video, and web sites in order to build collections of related content. This model emerged from the original practice of bookmarking, which found users saving the addresses of useful and interesting sites in their web browsers. Pinterest is the latest platform for social bookmarking, which finds users sharing their bookmarks over the web.
• LinkedIn- This social media site is similar to Facebook, but it tends to be used for professional purposes. While I post pictures of my visits to baseball stadiums and similar events on my Facebook page for my tens of friends and family to see, I post short essays and similar items to my LinkedIn profile for my much larger network of professional associates to read.
As the label of the sites makes clear, social media is designed to facilitate interaction, so comments and reactions from other users are a part of life on social networks. Users have little control over the comments others make on their content, and there is little recourse if comments are uncomplimentary, inflammatory, or false. School IT managers can take steps to minimize exposure to unsavory comments. For example, by publishing to Facebook but not accepting friends, school IT managers can reduce the potential for (but not eliminate) unsavory comments. Further, school leaders can be active users of social media and model professional interaction by responding in a public manner.
Sufficient devices, reliable and robust networks, and effective web services all depend on a system being in place and functioning to ensure the hardware and software are maintained, updated, and repaired so they work for teachers and learners. Efficacious IT managers ensure systems are in place to keep IT infrastructure in good repair.
Computers break; they break frequently. Operational computers, laptops, notebooks, and tablets (connected to reliable and robust networks that provide access to effective web services) exist only where there are systems in place to quickly repair malfunctioning devices (and networks and services). Effective and efficient systems of technology support are multi-dimensional. IT infrastructure must be selected and designed to facilitate effective planning and repair; procedures for communicating when systems need to be repaired, as well as communicating that repairs are done, must be in place; and the proper personnel must be retained to effect the repairs, and those personnel must have the necessary training and budgets sufficient to meet their needs.
School IT managers will define a logistic goal that is similar to “Malfunctioning IT systems are repaired quickly.” Implicit in this goal is that malfunctioning devices or networks do not interfere with teachers’ ability to plan for technology-rich lessons or with students’ ability to experience those lessons. Also, implicit in this goal is that these systems are supported by the financial resources to supply the people doing this work.
Context for the Logistic Goal
Computers and the information stored on them have become mission-critical (excuse the business jargon) to schools. Without these devices, neither teachers nor students can accomplish what they must to achieve strategic goals, nor can administrative staff ensure the smooth operation of the organization. Whereas “the network going down” or “the computers being updated” represented a minor disruption to previous generations of educators, either of these situations can cause a major disruption today. Just as planning for purchases and installation of devices can no longer be entrusted to IT professionals alone, the design and implementation of support systems ensuring appropriate, proper, and reasonable IT must be a collaborative effort among all school IT managers.
One of the often-overlooked aspects of technology support systems is ensuring that systems are effectively repaired. Effective repairs will result in the system better meeting the needs of teachers and students and the systems being more responsive to their needs. It has been established that IT users in schools are different from the IT users in other organizations, so IT professionals who rely on the clear planning that leads to the effective design of single-purpose systems for business users will find they are less effective for school users. Just like all aspects of managing school IT, selecting the correct systems and designing them for school users is a collaborative endeavor.
In schools where technology support systems are organized into a planning cycle (see Figure 6.3.1), in which technicians effect changes and upgrades identified as necessary by teachers, repairs and upgrades appear to be more effective than in schools that lack such organization (Ackerman, 2017). In general, IT managers react to situations to increase their efficiency, but they are proactive to ensure changes to systems are more effective.
For technology support to be improved by following this planning cycle, technicians and IT professionals must be given responsibility for building solutions in a manner that is secure and compatible with existing technology, and teachers must use the systems in the way the technicians designed them; but decisions regarding the sufficiency of the solutions depend on teachers’ perceptions of the solutions, especially when interpreted in light of technology acceptance.
Figure \(1\): Technology planning cycle (adapted from Ackerman, 2017)
The planning cycle provides efficacious IT managers with a procedure to follow to ensure support decisions and actions are fully implemented before they are deemed a success or a failure. When the planning cycle is combined with the unified theory of acceptance and use of technology (UTAUT) (Venkatesh et al., 2003), and feedback is given in terms of effort expectancy and performance expectancy, then systems and the repairs made to them are more likely to be judged effective by users.
Technology support is a process in schools that bridges two clearly bounded groups of people: teachers, students, and other users of IT comprise one group, and the IT professionals who effect repairs of IT comprise the other. This book is grounded in the assumption that individuals in these groups understand technology differently; implicit in this assumption is that they will use different language when communicating about IT and that the same language may have different meanings for the groups and for individuals. For these reasons, efficacious IT managers take steps to ensure clear communication between these groups. In the jargon of business management, communication between different groups is called horizontal communication, to capture the movement of information across the different groups (in school IT management these groups include teachers, IT professionals, and school leaders).
Effective communication for technology support is enabled by two web services, one to schedule shared resources and another to manage requests for assistance. Effective communication also depends on transparent and clear procedures when support ends, including when repairs have been effected and when individuals leave a school.
Scheduling Shared Resources
The collection of computer resources available in a school will include those that are too expensive or too infrequently used to justify purchasing them in large numbers. Compared to Internet-only notebooks that can be purchased for a relatively low per-unit cost and can be used for productivity purposes in many settings, computer rooms, along with specialized devices such as large-format color printers, 3-D printers, and high-resolution projectors, are examples of computing resources that are needed, but in smaller numbers, in schools. Because these devices exist in smaller numbers, they must be shared, so efficacious IT managers provide a method whereby teachers can schedule the resources for their students to use.
Effective tools make the schedules public, so they can be viewed on the Internet without logging on or passing through other barriers. (The most effective schedules will be mobile-compatible, so the harried teacher who is finalizing plans for the day can say to a student, “hey, go check the schedule to see if we can print our posters in the computer lab today,” and the student will be able to access and view the schedule on his or her phone.)
Once a student confirms the resource has not been scheduled by another, the teacher can log on to the system to add a reservation, but not edit others’ reservations. Further, each account can have specific permissions so that a user can reserve only the resources appropriate for him or her. For example, only those who have received training in using the 3-D printer are allowed to schedule time on it, or only those teachers whose courses necessitate special software can reserve certain computer rooms.
One of the difficulties that is commonly encountered with using scheduling tools in schools is the unusual time increments that characterize the daily schedules in many schools. While many scheduling tools are designed for businesses that are likely to break days into 15-minute increments, schools break days into various chunks, and it is not unusual for different days to be divided into different chunks. Further, some schools have multiple bell schedules; for example, students in grades 7 and 8 may follow the “middle school schedule” while the students in grades 9-12 follow the “high school schedule” in schools enrolling students in grades 7-12. IT managers can increase the use of scheduling tools by making them easy to use, including allowing users to select time blocks on the schedule that correspond to the daily schedule blocks used in the school. All of these factors can complicate the problem of sharing common computing resources, but none is generally a barrier to sufficient access.
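The scheduling concerns above — school-specific time blocks, conflict prevention, and restricting who may reserve what — can be sketched in a few lines. The bell schedules, resources, and names below are all hypothetical, and a production tool would add accounts and permissions on top of this core.

```python
# Hypothetical bell schedules used as the schedule's time increments,
# rather than the fixed 15-minute blocks common in business tools.
BELL_SCHEDULES = {
    "middle": ["Block A", "Block B", "Block C", "Block D"],
    "high": ["Period 1", "Period 2", "Period 3", "Period 4", "Period 5"],
}

reservations = {}  # (resource, day, block) -> teacher

def reserve(resource, day, block, teacher, schedule="high"):
    """Add a reservation; refuse unknown blocks and double-bookings."""
    if block not in BELL_SCHEDULES[schedule]:
        raise ValueError(f"{block!r} is not on the {schedule} school schedule")
    key = (resource, day, block)
    if key in reservations:
        return False  # already taken; users may not edit others' reservations
    reservations[key] = teacher
    return True

print(reserve("3-D printer", "Monday", "Period 2", "Ms. Rivera"))  # → True
print(reserve("3-D printer", "Monday", "Period 2", "Mr. Chen"))    # → False
```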
Reporting, Ticketing, and Triage
Web-based services for managing repair requests are often called “ticketing systems,” because one submits a “help ticket” that summarizes a problem; the ticket is assigned to someone with the skill and network credentials to fix the problem, and notes regarding steps that are taken are added to the ticket. Once the problem is resolved, the ticket is marked “closed,” and the technician moves on to new assignments.
The value of a fully functioning ticketing system is that it facilitates communications regarding several aspects of managing a large fleet of computer devices:
• Users can report malfunctioning devices with little effort, so the system facilitates communication from users to IT staff. Most IT managers place a link to “create a ticket” in multiple places that computer users frequently visit (the school web page, the LMS, and other portals). In addition, IT managers create an email address, so users can send an email to create a help ticket. Ideally, the ticketing system is part of the collection of tools that use a single sign-on scheme, so the individual who submits the ticket is identified automatically and submitting a ticket does not require one to log on to a different system.
• The technicians can triage malfunctioning devices and decide the best use of their limited resources. While the individual who submits the ticket can usually assign a priority to the repair, the technicians can override those settings, and repairs that will affect a greater number of users or that restore critical systems can be given higher priority.
• A history of each device is maintained. Devices that are troublesome despite repeated repairs are known. Likewise, technicians can track similar problems throughout the fleet. This is particularly helpful when a design or hardware (or software) problem affects the same model; steps taken to resolve a problem on one unit are likely to resolve the same problem on other units. In this point and the previous point, there are examples of how the system facilitated communication within the IT staff.
• Ticketing systems also provide a database on which the inventory can be kept up-to-date. This helps IT professionals understand their fleet and it helps leaders understand the need to plan and budget for replacement devices.
• The total number of repairs performed by technicians and the time they spend on them can be recorded in the ticketing system. This information is used to assess the efficiency and effectiveness of the systems, so that support can be improved by refining systems and by supporting those who support IT users. In this point and the previous point, there are examples of how the system facilitated communication between IT staff and school administrators.
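The communication paths in the list above can be illustrated with a minimal ticket store. Field names and the priority convention (a lower number meaning a more urgent repair) are hypothetical; production ticketing systems add authentication, device inventories, and reporting on top of this core.

```python
import itertools

ticket_ids = itertools.count(1)
tickets = {}

def create_ticket(reporter, device, description, priority=3):
    """User -> IT staff: report a malfunction with little effort."""
    tid = next(ticket_ids)
    tickets[tid] = {"reporter": reporter, "device": device,
                    "description": description, "priority": priority,
                    "status": "open", "notes": []}
    return tid

def triage(tid, new_priority):
    """Technicians may override the reporter's priority (lower = more urgent)."""
    tickets[tid]["priority"] = new_priority

def close_ticket(tid, note):
    """Record what was done; the device's repair history accumulates here."""
    tickets[tid]["notes"].append(note)
    tickets[tid]["status"] = "closed"

def open_by_priority():
    """IT staff -> administrators: what remains open, ordered by urgency."""
    return sorted((t for t in tickets.values() if t["status"] == "open"),
                  key=lambda t: t["priority"])

tid = create_ticket("teacher@school.example", "LAB-PC-07", "no sound")
triage(tid, 1)  # the lab serves many classes, so the repair is bumped up
close_ticket(tid, "replaced audio driver; confirmed with reporter in person")
print(len(open_by_priority()))  # → 0
```

Querying `tickets` by device would give the per-device history described above, and counting closed tickets over time would give administrators the workload data mentioned in the last two points.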
Avoiding Cold Closure
To avoid wasting instructional time preparing to use technology that may or may not be functioning, teachers are likely to avoid those devices that are malfunctioning (or even rumored to be malfunctioning) until they are assured the devices have been repaired. When the problem in a help ticket has been fixed, the technician closes it, then moves on to other duties. While most ticketing systems notify the individual who initiated the case that it has been closed, this can be called a cold closure, which is contrasted with a direct closure.
A direct closure occurs when the technician speaks with the individual who reported it and confirms the issue has been resolved. In the ideal situation, direct closure is done in-person, but a telephone call or voice mail are better than cold closure. A teacher who hears, “let me know if it does not work,” will have the confidence to begin using the repaired systems.
Avoiding cold closure helps technicians reduce the occurrence of a troubling situation. If the person reporting the problem either inaccurately describes it or describes a situation with incorrect terminology, then the technician can arrive at the computer and not see what the person who submitted the ticket thought she or he reported. Not seeing the anticipated symptoms, the technician closes the ticket and moves on to other work. The individual who reported the malfunction may return to the machine to discover it still malfunctions because the technician effected no repair or fixed different symptoms.
While direct closure does reduce technicians’ efficiency, it can increase the effectiveness of repairs and it leads to more accurate repairs being made (which ultimately increases efficiency). Closing this loop of the repair process can be automated by ticketing systems, but many recipients of those messages find them to be confusing rather than informative. Consider the configuration of communication that is set up in many ticketing systems. When the message is entered into the database, a message is generated to tell the individual who reported it “your message has been received,” and the individual who reported it may find additional messages generated as the repair proceeds. While keeping individuals up-to-date is important, many educators who receive these many messages claim, “they just fill my inbox with unnecessary information.” The excessive messages from the ticketing system can be especially problematic for individuals who use the ticketing system frequently. Because most IT managers insist problems be reported through the ticketing system, as it provides important information regarding the fleet of devices they manage, they must take steps to make ticketing systems easy to use and effective.
On-Boarding and Exiting
The term “on-boarding” describes the process of ensuring new employees understand the policies and procedures of the organization. In recent decades, organizations have added IT training to on-boarding procedures. The details of the school IT systems that must be the focus of on-boarding training have been described previously, and comprehensive on-boarding training decreases the need for support later.
Equally important are the steps taken when an individual leaves a school. Separation can be for a variety of reasons, and efficacious IT managers are prepared to transfer information as is appropriate for those circumstances. In some cases, there are reasons that separation must be immediate; in those cases, school administrators are likely to direct an IT professional to immediately prevent the separated individual from accessing systems, usually by changing the individual’s password. Amicable and planned separation is much more common and school IT managers seek to implement exit procedures that ensure individuals can access information they created while associated with the school.
Education is a creative endeavor; students and teachers both create intellectual property as they work. In general, students who are minors own the intellectual property they create (keep that in mind the next time you copy a student’s paper to show colleagues). For teachers, ownership of the intellectual property they create is more complicated. Works teachers create while being paid are “works for hire,” and thus owned by their employer. Works they create while not being paid (during school breaks, for example) are not, and in other situations determining who owns teachers’ intellectual property can become very complicated. In most cases of amicable separation, school leaders are content to avoid conflict over ownership of works created for hire by allowing educators to retain a copy of all works he or she created. Educators are content to avoid conflict by not selling works they created for a specific teaching position without significantly revising the materials so they represent new works.
To accommodate educators and students who seek to retain the works they created while employed at a school, school IT managers can communicate to teachers recommended methods for archiving and transferring them to their own devices or accounts. With the widespread availability of cloud-based storage, many technicians recommend copying contents of cloud-based folders or LAN folders to cloud storage using accounts owned by the educator or student who is leaving.
The situation can be more complicated when the educator who is leaving has had a role in supervising and evaluating other professionals, or when information created by the educator may threaten others’ privacy or violate FERPA regulations. Consider the principal who negotiates with the school board to keep the laptop she or he has been using as part of a retirement or resignation agreement; IT managers have a responsibility to ensure that records of teacher observations and evaluations, copies of letters sent to parents, and other sensitive information are removed from the computer before ownership is transferred to the separated principal.
For much of the history of computers in schools, the “timeliness” of repairs was ill-defined and repair deadlines were not critical. When there were only one or two computers per classroom and they were only marginally used in the curriculum, a computer being inoperable for a few days or even weeks posed little disruption to students’ work. This was largely because computers were simply replacing other technologies; for example, the middle school students I visited as an undergraduate replaced graph paper and pencils with computers to create graphs for their science fair projects. Most of the information those students created and consumed was on paper, so students could be engaged even when “the computers are down.” As computer rooms arrived, dysfunctional computers posed a greater obstacle to learning, but only if the number of students exceeded the number of operational workstations and as long as needed files were not on malfunctioning computers.
As electronic digital information and interaction have come to dominate, and computers have become vital to how information is accessed, analyzed, and created in diverse classrooms, it has become essential that malfunctioning computers be repaired in a timely manner, with timely defined in hours or days rather than weeks. Especially in schools where one-to-one initiatives are underway, teachers plan their lessons based on the assumption that students will have access to devices, so repairs need to be addressed quickly to minimize the disruption to learning that arises from broken computers. Responsive technology support systems, as a result, are designed to increase the efficiency of technicians so that the time between a problem being reported and being resolved is minimal.
IT professionals adopt several strategies to increase their efficiency. Interestingly, almost all malfunctioning IT can be traced to software: files become corrupt, new devices or new hardware introduce conflicts, and other temporary faults are introduced with updates. Almost all of these software problems can be avoided or resolved with a few strategies. Imaging allows technicians to reset the software on entire systems, freezing prevents changes to the software on systems, and remote access systems allow technicians to log on to networked computers from remote locations and effect software repairs.
Imaging
In the vocabulary of IT technicians, imaging refers to the process of creating a file that contains a copy of a computer hard drive, then sending that file to the hard drives of other computers. This strategy is particularly useful when a large number of computers of the same model are installed in one place.
Imaging occurs in three steps. First, a single computer is configured exactly as it (and the others) needs to be: the operating system and applications are updated, network settings established, printers configured, old data files removed, unused applications uninstalled, and any other maintenance tasks completed prior to creating the image.
Second, the computer is restarted using software that bypasses the operating system on the hard drive. This may be done with software installed on a USB drive or stored at a network location. Typically, this includes a minimal operating system, so keyboards, network adapters, displays, and similar tools function while the software to create and receive the image file is loaded into random access memory. Third, the imaging software is used to either create an image or receive an image (overwrite the current hard drive with the contents of a stored image).
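The capture and deploy steps can be sketched as two operations. This is a minimal illustration only: a plain file copy stands in for the block-level, compressed copies that real imaging suites perform, and the function names are invented for this example:

```python
# Simplified sketch of the imaging workflow. A file stands in for a disk;
# real tools work at the block level, compress images, and handle drivers.

import shutil
from pathlib import Path

def create_image(master_disk: Path, image_file: Path) -> None:
    """Capture: copy the configured master's contents into an image file."""
    shutil.copyfile(master_disk, image_file)

def deploy_image(image_file: Path, target_disk: Path) -> None:
    """Deploy: overwrite a target's contents with the stored image.
    This is destructive -- anything on the target is lost."""
    shutil.copyfile(image_file, target_disk)
```

Deploying the same image to every machine in a computer room is then a loop over `deploy_image`, which is why one bad setting on the master propagates to the whole fleet.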
There are several complicating factors in creating and using images including:
• Images are model-specific. If a school distributes five different models of laptops to teachers, then the IT staff must manage five images, and they must be sure to deploy the correct image to each model. More recent imaging software reduces the need to manage a different image for each model, but IT managers must still be clear about exactly which software titles (including drivers, extensions, and configurations) need to be installed on each model.
• It is essential that the system used to make an image be thoroughly tested before its image is made and deployed. An error in setting up network printers on the computer used to make the image, for example, can leave a whole fleet of computers unable to print once its image is deployed. Technicians must confirm all settings are correct to avoid the need to repeat the process.
• Some reconfiguration of recipient computers may be necessary. Several factors, such as the types of software licenses on the hard drive used to create the image, the specifics of how devices are named on the network, and the methods used to create user profiles, determine how much unit-specific configuration is necessary after a computer receives an image.
• Imaging irreversibly erases the contents of a hard drive, so data that has not been backed up is lost. For this reason, technicians ask, “Do you need the data on this computer?” more than once before reimaging a computer.
Typically, a technician will reimage a computer when it is observed to have unusual and difficult-to-troubleshoot symptoms; technicians are frequently heard to say, “Well, that is weird,” immediately before deciding to reimage a computer. If a technician suspects a computer has been infected by a virus or other malware, then he or she is likely to reimage it as well. The great advantage of this strategy from the technician’s point of view is that the system will be set back to a “known good” configuration with a well-known and standard practice. Further, in the hours it takes for an image to overwrite the hard drive of a malfunctioning computer, the technician can attend to other repairs, because the process completes without further input from the technician once it is started. Imaging takes a few minutes to initiate and several minutes to reconfigure unit-specific settings, but while the image is being received, the technician can attend to other work.
In addition to repairing malfunctioning computers, imaging is used for large upgrade and maintenance projects on fleets of computers. A common addition to the “to do” list of technicians over the summer is to “image the computer room” (which may be either desktop or laptop models). This finds a technician creating an image then sending it to all of the computers in the room. This does necessitate large amounts of data being transferred, so it can interfere with network performance when it is underway (which explains the need to do it over the summer). In all uses of imaging, it is a method of resolving software problems with great efficiency.
Freezing
While imaging is a reaction to software changes that have adversely affected the performance of a system, freezing is a strategy that prevents software problems from occurring. A technician installs the application that provides the freezing function and then configures the system exactly as he or she wants it to function. Just as with imaging, all updates and applications are installed, and the network configuration along with network printers and other peripherals is installed, configured, and tested. Once the configuration is confirmed, the technician opens the freezing software (which runs in the background, unseen by the user), enters a password that provides access to the controls used to change the state of the computer to “frozen,” and restarts the computer. Until it is “unfrozen” by a user who provides the password, each time the computer is restarted it returns to the state in which it was frozen.
As software to freeze computers has matured, additional features have been added. For example, the directories in which operating system updates are installed can be left “unfrozen” so that necessary updates are not deleted when the computer is restarted. Also, some user directories can be unfrozen, so that documents created by users can be saved on a frozen computer. While freezing does prevent many software-induced problems, there are several reasons that IT managers may avoid this solution:
• Commercial software to freeze computers can be very expensive;
• Unless the version of the software allows for unfrozen directories, it necessitates files be stored on systems other than the local frozen hard drive;
• Unless properly configured, it can remove critical system updates or data;
• As hard drives have approached and exceeded terabytes of storage, the freezing process can lead to noticeable delays in start-up, which interfere with the perceived performance of computers in many school settings.
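The core restore-on-reboot behavior, including “thawed” directories that survive a restart, can be sketched in a few lines. The in-memory dictionary below stands in for a real filesystem driver, and the path names are invented for illustration:

```python
# Minimal sketch of the freezing idea: on restart, every path is reset to the
# frozen snapshot, except paths under explicitly "thawed" directories
# (user documents, OS update locations, and so on).

def restore_on_reboot(snapshot: dict, current: dict,
                      thawed_prefixes: tuple) -> dict:
    """Return the post-reboot state: frozen paths revert, thawed paths persist."""
    after = dict(snapshot)                     # start from the frozen state
    for path, data in current.items():
        if path.startswith(thawed_prefixes):   # thawed: survives the reboot
            after[path] = data
    return after
```

Note that anything written outside a thawed directory, including malware and misconfigurations, simply disappears at the next restart, which is exactly why freezing prevents so many software problems.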
Maintaining Extra Inventory
Especially in schools in which there is an active one-to-one initiative, some IT managers will purchase extra computers so that dysfunctional computers can be immediately replaced for students. In some school IT shops that maintain extra inventory, a student whose computer malfunctions for either software or hardware reasons will find a technician who removes the hard drive (containing the operating system, applications, network settings, and the student’s data) and installs it in another unit identical to the first. This allows the student to return to learning as normal and the technician to troubleshoot the broken device or return it for repair by the manufacturer after updating the inventory and ticketing system so those records are accurate.
On-Site and Remote Service
The efficiency of IT repairs can be improved both by increasing access to repairs on-site and by increasing technicians’ capacity to effect repairs remotely. While this may appear so obvious as to be superfluous, the strategies and implications for IT managers are quite different.
Assigning trained IT technicians to work in specific school buildings, ensuring the technicians are well-known to students and teachers, and having them work in accessible and well-equipped shops does result in repairs being more efficient, but hiring and retaining employees tends to be a very expensive option in schools (and all other organizations). School leaders often ask, “How many technicians do we need given the size of our fleet?” Many variables (including the age of the machines, the operating system and other applications installed, the nature of the network, the robustness of the design, and the type of use to which the machines are subjected) affect the number of repairs needed in a given time and the complexity of those repairs. Because of these many variables, there is no reliable heuristic for calculating the number of IT technicians needed for a fleet. If the load of repairs overwhelms the available technicians on a regular basis, then steps must be taken to improve their capacity to effect repairs; this can be done by providing the technicians with more training or better work conditions, or by hiring additional technicians to share the work.
Placing a technician in every school to be the primary source of IT support at that site does improve the efficiency of repairs, but it coincidentally increases dependence on that technician, so efficiency can actually decrease. When teachers and others depend on the technician, they are unlikely to develop their own troubleshooting skills, so rather than resolving a problem with a few minutes of troubleshooting, productivity (or at least technology-rich productivity) stops while the technician is summoned and then arrives to perform the same steps that are within the capacity of other adults. Not only does a technician-dependent teacher demonstrate poor capacity to learn and to problem-solve, but he or she can delay opportunities for learning while waiting for technicians to become available. Further, this can take technicians away from jobs that require their expertise, so both simple and complex repairs are delayed. For these reasons, when on-site technicians are placed in schools, there must be clear rules about what constitutes an IT emergency, and clear expectations of the troubleshooting steps and procedures teachers are trained and expected to take prior to seeking assistance.
Technicians also increase efficiency by using remote access tools to log on to desktop and laptop computers from any place on the network. Using remote access, they can install and update software, change configurations, troubleshoot, and otherwise manage those workstations over the network. Access to remote access tools is closely managed by IT managers, as those tools can be used for nefarious purposes as well as legitimate troubleshooting and repair. They often use protocols and ports that can be exploited by malware, and using them can expose computer systems and the data stored on them to the threat of unauthorized access.
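One common building block for remote administration is running a maintenance command over SSH. The sketch below only constructs the command line; the `techsupport` account name and host name are placeholders, and it assumes key-based SSH authentication has already been configured on the managed machines:

```python
# Sketch: build an ssh invocation to run a maintenance command on a workstation.
# Assumes key-based ssh is already set up; "techsupport" is a placeholder
# administrative account, not a standard name.

def remote_command(host: str, command: str, user: str = "techsupport") -> list:
    """Return the argument list for running `command` on `host` via ssh.
    BatchMode=yes makes ssh fail rather than prompt for a password,
    which suits unattended scripts."""
    return ["ssh", "-o", "BatchMode=yes", f"{user}@{host}", command]

# A technician's script might then run, e.g.:
#   subprocess.run(remote_command("lab-pc-14", "uptime"))
```

Because the same credentials that let a technician push updates would let an attacker do the same, keys and accounts like this one are exactly what IT managers must guard closely.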
Information technology professionals comprise a diverse group, and the skills necessary for one specialty within the field are not necessarily transferable to others. Hiring professionals with the appropriate skills to fill the needed roles in a school requires that school leaders understand the specialties within IT professions. It is also important that school leaders accurately and clearly define expectations and that IT professionals can clearly match job descriptions with their skills. Accurately describing and filling positions also avoids the waste of paying for skills that go unused or of hiring unbudgeted consultants to fill gaps in the knowledge or skill of hired individuals.
Regardless of the positions funded in budgets and the staffing decisions made by school leaders, all of the roles described in this section must be filled if a technology support system is to be comprehensive and complete. The titles given to the positions that fill these roles vary, and the nature of the individuals retained to fill them is determined by local circumstances, but strategies utilizing full-time employees, part-time employees, short-term employees, and consultants have all been effective, and of course, a single individual can play multiple roles. It is rare, however, to find one individual who can fulfill each role with expertise.
Chief Information Officer
It is only recently that educational organizations have adopted the practice of using “c-level” titles for those in management positions. Chief financial officers (CFO) manage the business operations of schools, and chief academic officers (CAO) are responsible for all aspects of teaching and learning within schools; individuals in these roles report to the chief executive officer (CEO), who typically holds the position of superintendent of schools. Added to the c-level of management in organizations, including schools, is the chief information officer (CIO), who manages all aspects of the information technology systems within the organization.
Of course, no c-level executive manager works and leads within a vacuum, so—at the highest level—decisions are made to satisfy the needs and limitations of the entire organization, but the c-level manager is then responsible for implementing those decisions within his or her area of leadership. The role of the CIO in schools is to advise the other top-level leaders on the nature of the existing technology, the steps necessary to maintain it, and the potential changes that will improve it. Of the many decisions made by the CIO, perhaps none is more important than those involving the installation and upgrading of information networks. The individual who fills this role in a school has a level of responsibility similar to that of the other c-level managers and will be qualified by a comparable level of experience and credentials (including advanced degrees). The CIO will be compensated at a similar level as well.
For much of the history of computers in schools, a single individual was allowed to decide what technology to buy and how to install it. The rationale behind this practice was that those individuals held quite specialized expertise, and educators were willing to defer to those with greater expertise. In many cases, that method of decision-making led to technology that was ineffective and even led to conflict, as technology decisions were made for technology reasons. As CIOs have been integrated into technology decision-making in schools, there has been a shift toward making technology decisions for teaching and learning reasons. The specific role of the CIO is to advocate for technology that both meets the needs of members of the organization and is reliable and robust. He or she will advocate for rational decisions regarding infrastructure planning, personnel decisions, and support, while ensuring technology decisions do not hamper teaching and learning or other organizational goals.
In some colleges and universities, the IT decisions related to teaching and learning are made by the CAO, and the CIO builds and maintains the systems deemed necessary by the academic leaders. That model has yet to become widespread (especially in K-12 education), but it is anticipated it will become more common.
System Administrators
Once computer networks are installed and configured (usually in consultation with external engineers and technicians), system administrators employed by the school ensure they remain operational and functional. These professionals listen for network problems both by attending to reports of malfunctions from users and by monitoring system logs, and they both resolve the problems that are identified and take steps to ensure the continued health of the network.
Among the specific responsibilities of IT system administrators are ensuring users and devices can access network resources, configuring software to back up files and checking that those files are being created as expected, upgrading the operating system and driver software on servers, and otherwise maintaining network hardware and software. They also play an important role in planning for and deploying software and hardware upgrades, paying particular attention to potential conflicts that may be introduced when networks are changed. In general, if changes are made to a device that manages local area network traffic or that stores data accessed across the LAN, it is the system administrator who performs the task. This individual will also work closely with technicians to ensure that user devices are properly configured to access the LAN and Internet.
Most system administrators have completed an undergraduate degree in information systems, and they are also likely to hold credentials awarded by IT vendors and professional organizations. In many cases these credentials require effort and understanding that is comparable to graduate certificates and graduate degrees in their field. As a result of their level of training and expertise, system administrators should be compensated at a rate similar to teachers, but their salary should reflect the year-round nature of their work.
Technicians
Technicians have one of the most important roles in school IT operations, as they are the face of the IT department to most members of the organization. A technician is likely to spend his or her day troubleshooting and repairing end users’ devices such as PCs, laptops, printers, and other peripherals. Because these professionals spend their time interacting with teachers and students, it is essential they have excellent customer service skills and are comfortable interacting with teachers who are in stressful situations (due to malfunctioning computers) and with frustrated students. On staffs with multiple technicians, the group can be very interdependent; they collaborate on solving problems and give each other tips. By documenting the repairs they make (ideally in the ticketing system), technicians contribute to the emerging knowledge of the IT systems and identify devices that are becoming so dysfunctional as to need replacement. A further role of technicians is to identify network problems that need to be resolved by the network administrator.
The CIO plays an active role in ensuring the technicians working in the school receive the professional courtesies and on-going support they deserve. Many technicians arrive in these positions with an associate degree or similar training that prepares them to understand the systems they will repair, but in many cases they do not have experience with the specific devices or practices in use in a school, so they must receive training as part of their jobs to stay current and to provide on-going support.
Data Specialists
A relatively new specialist to join the IT staff is the data specialist. The need for this specialist arises both from the skills necessary to manage the databases in which demographic, health, behavioral, academic, and other information about students is housed and from the increasing demand for data-driven practices. Schools store vast amounts of data in sophisticated databases; while inputting the data is a minor aspect of the work and requires limited expertise, preparing and running queries of the database so that questions regarding correlations and performance can be answered requires much greater expertise. Often this work includes creating scripts that produce reports used to support decisions made by school administrators and teachers.
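The kind of reusable query a data specialist scripts for administrators can be illustrated with a small example. The table name and columns here are hypothetical, and SQLite stands in for whatever student information system a school actually runs:

```python
# Sketch of a data specialist's reporting query. The "assessments" schema
# (student, grade, score) is invented for illustration; a real student
# information system would be far richer.

import sqlite3

def average_score_by_grade(conn: sqlite3.Connection) -> list:
    """Report the mean assessment score per grade level, one row per grade."""
    return conn.execute(
        """SELECT grade, ROUND(AVG(score), 1)
           FROM assessments
           GROUP BY grade
           ORDER BY grade"""
    ).fetchall()
```

Wrapping queries like this in named functions is what turns one-off lookups into the repeatable reports that administrators ask for each term.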
These professionals represent one of the first ventures by schools into the field of educational data analytics. In this field, educators seek to apply the methods of data science to predict student needs and performance. It should be noted that these methods have proven informative for some aspects of learning (Macfadyen, 2017), but findings suggest they are not useful in predicting deeper learning (Makani, Durier-Copp, Kiceniuk, & Blandford, 2016).
606: Section 6-
Regardless of the role he or she fills, all IT professionals who work in schools should be expected to demonstrate excellent customer service skills. “Customer service” is not a term commonly associated with education professionals, but these are skills needed by those who provide technology support. Exactly what is meant by customer service depends on the situation, but—in general—users and managers recognize those who can identify problems and resolve them quickly and with a pleasant disposition as having good customer service skills.
Individuals identified as demonstrating good customer service skills typically have excellent knowledge of the systems or products they are supporting. In addition, they have the capacity to resolve problems in creative and flexible ways, especially when the standard methods prove ineffective. Together, these elements of customer service represent professional knowledge that can be applied efficiently and effectively.
In addition, those with good customer service skills have patient and empathetic personalities. This nature allows them to listen carefully so that they clearly and accurately understand the problem being presented and recognize its importance. They also avoid the temptation to blame the user for problems with the computer. At the same time, a technician with good customer service skills will see problem solving and troubleshooting as an opportunity to teach the user strategies for avoiding similar problems and resolving them independently if they arise.
Regardless of the role an IT professional plays in a school, good customer service skills are important. Improving these skills will increase the efficiency and effectiveness of IT support systems.
IT planning is a necessarily collaborative endeavor in schools. Because it requires those with disparate skills and approaches to collaborate, efficacious IT managers agree on the methods they will use to make decisions and on the data that will inform them. Especially as they begin collaborative planning, school IT managers can benefit from framing their work as discourse that leads to, and is informed by, design that makes use of research-like data.
Schools are organizations in which leaders are constantly seeking to improve performance. Improvement and performance are difficult concepts to define and quantify, but (like many inexact concepts) we can recognize them when we see them. Improving school IT requires managers to decide what improvements they seek to make, what evidence will indicate success, and how to make them. Improvements can be made by deploying new interventions, refining how existing interventions are instantiated, ceasing those that are ineffective or inefficient, and consolidating others. The strategies used to make these decisions can influence the support the decisions receive in the community and the ultimate success or failure of the decisions from the perspective of the many stakeholders.
For school IT managers, planning is made more complicated than it is for leaders of other organizations because schools are filled with diverse populations. As a result, problems can be defined and framed differently by different participants, who can also propose, design, and deploy much different solutions; further, they assess the same solution very differently. What represents a successful technology solution to one participant (or one group of participants) can pose a severe barrier to technology use by others. To minimize these threats to efficacy, and to promote more effective and efficient decision-making and problem solving, school IT managers can adopt formal processes for collaborative planning. Following agreed-upon methods to define problems, clarify intended improvements, and gather and analyze data helps the disparate groups involved in efficacious IT management make sound decisions.
702: Section 2-
“Data” has been widely, but imprecisely, used in education for most of the 21st century. Data-driven educators make decisions based on information they have gathered about their students’ performance. Ostensibly, this is done in an attempt to adopt the position of a researcher and to ground decisions in objective research, thus giving more support for their decisions. Upon closer inspection, however, there is little resemblance between the data collection, the data, and the analysis methods used by researchers and those used by most “data-driven educators.”
Data-driven educators tend to use data that is conveniently available; this data is almost always scores on standardized or standards-based tests. These tests include both large-scale, high-stakes tests and those administered by teachers in the classroom for diagnostic purposes. The validity and reliability of these tests are rarely questioned; educators who claim to be data-driven accept that the tests accurately measure what the publishers claim. Data-driven educators also tend to seek interesting and telling trends in the data, but rarely do they seek to answer specific questions using their data. Further, they rarely use theory to interpret results; it is assumed that instruction determined the scores and that changes to instruction caused all the trends they observe.
Researchers, on the other hand, define the questions they seek to answer and the data collection and analysis methods they will use prior to gathering data; they gather only the data they need, and all data is interpreted in light of theory. Researchers challenge themselves and their peers to justify all assumptions, to demonstrate the validity and reliability of the instruments that generate data, and to demonstrate the quality of their data and conclusions; for credible researchers and managers, conclusions based on invalid or badly (or unethically) collected data must be discarded.
By adopting a stance towards data that more closely resembles research than data-driven decision-making, IT managers tend to base their decisions in data that is more valid and reliable than is commonly used in education. Their decisions are also more likely to be grounded in theory that helps explain the observations. Other benefits of adopting a research-like stance towards data and evidence include:
• More efficient processes as planners use theory to focus efforts on relevant factors and only relevant factors;
• More effective decisions, because multiple reliable and valid data sources are used;
• More effective interventions, because they focus on locally important factors and there is a clear rationale for actions;
• Assessments and evaluations of interventions are more accurate and more informative for further efforts because evidence is clear and clearly understood.
Research is generally differentiated into two types. Pure research is designed to generate and test theory, which contains ideas about how phenomena work and allows researchers to predict and explain what they observe. Applied research is undertaken to develop useful technologies that leverage the discoveries of pure research; applied research is often called technology development. Scholars who engage in pure research identify and provide evidence for cause and effect relationships; this is typically done through tightly controlled experiments and quantitative data. Scholars and practitioners who engage in applied research or technology development seek to produce efficient and effective tools (see Figure 7.2.1).
Figure \(1\): Continuum of pure and applied research
In 1997, Donald Stokes suggested that designing a project as one type of research does not prevent one from doing the other type, so the dichotomy of pure and applied research is misleading. According to Stokes, many researchers seek to create new knowledge and to solve human problems simultaneously; he suggested replacing the continuum of pure to applied research with a matrix in which one axis is labeled “Do researchers seek new understanding?” and the other is labeled “Do researchers seek to use their discoveries?” By dividing each axis into “yes” and “no” sections, four types of research emerge (see Figure 7.2.2).
Figure \(2\): Matrix of research activities
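The two yes/no questions that define the matrix can be sketched as a simple classifier. This is an illustrative rendering of Stokes' scheme, not code from the text; the function name and the label for the no/no quadrant are my own.

```python
def stokes_quadrant(seeks_understanding: bool, seeks_use: bool) -> str:
    """Classify a research activity by Stokes' two axis questions."""
    if seeks_understanding and seeks_use:
        return "use-inspired research (Pasteur's quadrant)"
    if seeks_understanding:
        return "pure research"
    if seeks_use:
        return "technology development"
    # Neither new discovery nor application: e.g., hobbies such as bird watching
    return "personal fulfillment"

# A cognitive scientist studying brain function with no design intent:
print(stokes_quadrant(seeks_understanding=True, seeks_use=False))  # pure research
```

Walking the school IT examples from the text through this function places the game-building programmers in technology development and the efficacious IT manager in Pasteur's quadrant.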
Pure and applied research as they were originally conceived do remain on this new matrix. The cognitive scientists who study brain structure and function with little concern for converting their discoveries into interventions are pure researchers whose work may ultimately affect education, but designing interventions is not their primary purpose. The activity of computer programmers who are developing and refining educational games falls into the technology development quadrant. In general, they seek to build systems that are efficient, and they build their systems to leverage the discoveries of cognitive scientists, but their work does not contribute to new understanding.
Stokes’ matrix introduces a category of research in which there is neither intent to make new discoveries nor intent to apply any discoveries. While it may seem a null set, there are interesting and fulfilling hobbies such as bird watching that fall into this quadrant. Similar activities are those in which discoveries and applications of knowledge are pursued for personal fulfillment and entertainment. Stokes’ matrix also introduces a category of activity in which the researcher intends both to make new discoveries and to apply those discoveries; he labeled this “use-inspired research” and referred to it as Pasteur’s quadrant. You may recall that Louis Pasteur was a 19th-century French biologist who “wanted to understand and to control the microbiological processes he discovered” (Stokes, 1997, p. 79). Pasteur’s approach was both to explain the natural science of the diseases he studied and to define interventions that would prevent them. In the same way, IT managers seek to build efficacious systems in their schools and to understand what makes them so.
At the center of use-inspired research is an intervention designed to solve a problem. In school IT management, interventions will include many and diverse systems of hardware, software, and the procedures and methods to use that hardware and software. Because the intervention is the focus of research, it can be understood in terms of theory. Theory explains what is observed, and theory predicts what will be observed when systems are changed. Because the intervention is the focus of technology development, it is revised so that desired changes are observed. Use-inspired researchers also seek to observe performance in multiple ways. A single measure is not sufficient for the efficacious IT manager whose planning and decision-making is grounded in use-inspired research.
Education and research form a complex situation. Education is a field of active and diverse research; pure research, technology development, action research, and evaluation research all contribute to an emerging collection of findings. Further, a course in education research is part of almost every graduate program in the field, so many education professionals believe they have a sophisticated capacity to use and even generate research. Despite this, Carr-Chellman and Savoy (2004) observed that inattention to evidence and data in education led to “many innovations being less than acceptable or usable and rarely effectively implemented,” but they concluded, “frustration with the lack of relevant useful results have led to more collaborative efforts to design, develop, implement, and benefit from research, processes, and products” (p. 701). Given this observation, it is reasonable to conclude that use-inspired research will improve the collaborative efforts so school IT systems are properly, appropriately, and reasonably designed.
Scholars and practitioners in many fields have developed use-inspired research methods specific to the problems they solve and the interventions they design. One such method is educational design research (McKenny & Reeves, 2012). McKenny & Reeves (2014) captured the dual nature of educational design research as a method for designing interventions and a method for generating theory when they noted it is motivated by “the quest for ‘what works’ such that it is underpinned by a concern for how, when, and why is evident….” (p. 23). They further describe educational design research as a process that is:
• Theoretically oriented as it is both grounded in current and accepted knowledge and it seeks to contribute new knowledge;
• Interventionist as it is undertaken to improve products and processes for teaching and learning in classrooms;
• Collaborative as the process incorporates expert input from stakeholders who approach the problem from multiple perspectives;
• Naturalistic as it both recognizes and explores the complexity of educational processes and it is conducted within the setting where it is practiced (this is opposed to the pure researcher’s attempt to isolate and control factors, thus simplifying the setting);
• Iterative as each phase is complete only after several cycles of inquiry and discourse.
Projects in educational design research typically comprise three phases (see Figure 7.3.1), and each phase addresses the problem as it is instantiated in the local school and is either grounded in or contributes to the research or professional literature. For school IT managers, the analysis/exploration phase of educational design research is focused on understanding the existing problem, how it can be improved, and what will be observed when it is improved. These discussions typically engage the members of the technology planning committee who are the leaders among the IT managers. Design/construction finds school IT managers designing and redesigning interventions; this phase is most effective when it is grounded in the planning cycle described in Chapter 6. Reflection/evaluation finds them determining if the solution was successful and also articulating generalizations that can inform the participants’ further work and that can be shared with the greater community of school IT managers.
Figure \(1\): Phases of educational design research (adapted from Ackerman, in press)
Defining Improvement
Efficacious IT management depends on all participants maintaining a shared understanding of the problem and the intended improvements; it also depends on discourse through which participants communicate their perspectives so that systems are properly configured, appropriate for students and teachers, and reasonable given the norms and limits of the school. This can only be accomplished through effective communication, which is threatened by the differences that mark the groups.
Carl Bereiter (2002), an educational psychologist, described discourse as a form of professional conversation through which psychologists can “converse, criticize one another’s ideas, suggest questions for further research, and—not least—argue constructively about their differences” (p. 86). This description appears to present discourse as a type of interaction similar to the discussion and debate that characterize political dialogue and decision making. Bereiter specifies, however, that discourse is grounded in data and evidence, so it is a tool for scientific inquiry rather than for political discussion. Bereiter extends the application of discourse to planning for education; specifically, he suggests progressive discourse as a method whereby planners can define continuous improvement, implement interventions, and assess the effectiveness of those interventions. In general, progressive discourse is the work of expanding fact to improve conceptual artifacts; this work necessitates a nonsectarian approach to data and decisions.
Planning is the process by which spoken ideas and written language are converted into actions. Conceptual artifacts are those actions that can be observed in social systems and that are described in the language used to express plans. Progressive discourse depends on planners sharing an understanding of both the language used to capture plans and the actions that will be observed when those plans are realized. Scardamalia and Bereiter (2006) observed progressive discourse depends on participants’ “commitment to seek common understanding rather than merely agreement” (p. 102). If IT managers agree on the language they use to define strategic and logistic goals, but not on what they will observe when the logistic goal is achieved, then they do not share a conceptual artifact. Their plans and interventions are likely to be inefficient and ineffective.
In education, we can observe many situations in which different conceptual artifacts are instantiated. There is, for example, a growing interest in using games as a method of motivating students and giving context for deeper understanding. Ke (2016) observed, “A learning game is supposed to provide structured and immersive problem-solving experiences that enable development of both knowledge and a ‘way of knowing’ to be transferred to situation outside of the original context of gaming or learning” (p. 221). Contained within that conceptual artifact of “learning games” is cognitive activity that causes learners to interact with information and that leads to understanding sufficiently sophisticated that it can be used flexibly. Such games have been found to contribute to effective and motivating learning environments (Wouters & van Oostendorp, 2017).
Ke’s conceptual artifact of learning games contrasted with an observation I made while visiting a school and watching students who were completing a computerized test preparation program. After students had correctly answered a certain number of questions, the program launched an arcade-style game that had no connection to the content. It appeared the games were intended to motivate the students and provide a reward for giving correct answers. The principal identified this practice as an example of how his teachers were “integrating educational games into their lessons.” The students had discovered the arcade games appeared after a certain number of correct answers, and they were “gaming” the system by randomly submitting answers, a strategy that led to the games appearing more quickly than if they tried to answer the questions. While the value of determining how to get to the games without answering the questions can be debated, it did not require students to engage with the content. We can reasonably conclude the principal and Ke did not share a conceptual artifact regarding games.
If the principal and Ke were on a technology planning committee together, then we can assume they might both agree that “learning games can be valuable,” but their understandings of the experiences that represent learning games would be different. They would agree on language, but not actions, so they would not share a conceptual artifact. The planning committee with Ke and the principal would find it necessary to continue to discuss learning games and resolve their differences so that decisions about what comprise “learning games” were common to the pair.
As progressive discourse proceeds, the participants build greater knowledge from outside sources (new research and new discoveries) and from inside sources (experiences with their own system and community). This knowledge can cause managers to reconsider the definition and realization of a conceptual artifact. When redefining conceptual artifacts, IT managers may be tempted to accept a broader definition, but new and more nuanced conceptual artifacts are generally improvements over the current ones. In this case, the committee may choose to improve the conceptual artifact by differentiating “learning games” from “games for reward” and proceeding with two conceptual artifacts rather than using one definition that is too broadly applied. Improvement of conceptual artifacts occurs when:
• Planners develop a more sophisticated understanding of what they intend in the conceptual artifacts;
• Conceptual artifacts are replaced with more precise ones;
• Managers communicate the conceptual artifacts so more individuals representing more stakeholders share the conceptual artifact;
• IT managers take steps so the conceptual artifacts are implemented with increased efficiency;
• Conceptual artifacts are implemented with greater effectiveness by increasing alignment between conceptual artifact and practice, removing those practices that are the least aligned, or using conceptual artifacts to frame activity in situations where they are not currently used.
Progressive discourse is especially useful during the analysis/exploration of a problem. It is during this phase of educational design research (McKenny & Reeves, 2012) that IT managers improve their understanding of the problem and the conceptual artifacts that represent solutions. This is a research activity, thus it cannot be undertaken to support a political conclusion, to accommodate economic circumstances, or to confirm a decision made prior to beginning. Ignoring facts because they appear to violate one’s political, religious, economic, or pedagogical sensibilities, or because they are contrary to those espoused by another who is more powerful, is also inconsistent with the process. Equally inconsistent is selecting evidence so it conforms to preferred conclusions; educational design research and progressive discourse build knowledge upon incomplete, but improving, evidence that is reasonably and logically analyzed in the manner in which researchers approach evidence rather than through political preferences. These may be reasons to accept decisions, but decision-makers have a responsibility to be transparent and to identify the actual reason for making the decision.
Schools, of course, are political organizations, and different stakeholders will perceive the value of conceptual artifacts and their improvements differently. First, individuals’ understanding of the artifacts and acceptance of the values embodied in the conceptual artifacts varies. Because of this, one individual (or group of individuals) may perceive a change to be an improvement while others perceive the same change to be a degradation; this is unavoidable as it arises from the wicked (Rittel & Webber, 1973) nature of school planning. Second, some participants in the progressive discourse are likely to have a more sophisticated understanding of the conceptual artifacts than others, so they may find it necessary to explain their understanding and build others’ understanding of the conceptual artifacts. Further, those with less sophisticated understanding may find their ideas challenged by unfamiliar conceptual artifacts. Third, understanding of and value placed on the conceptual artifacts becomes more precarious when including stakeholders who are most removed from the operation of the organization.
Consider again the example of learning games; it illustrates the implications for various stakeholders when a conceptual artifact improves. If the planning committee decided the arcade-style games in the test preparation program violate the accepted conceptual artifact of learning games, the committee may recommend banning arcade-style games in the school. A teacher who uses such games to improve students’ speed at recalling math facts may find this “improvement” weakens her instruction. That math teacher may argue for a more precise understanding of arcade-style games, so those she uses are recognized as different from those in the test-preparation program.
If discussions about the role of games in the classroom spilled into the public (which they often do in today’s social media-rich landscape), then the continued use of learning games, regardless of their instructional value, might be challenged by those who oppose game playing in any form by students, especially those stakeholders who do not see students engaged with games in classrooms. This could change political pressures, making progressive discourse more difficult.
Designing Interventions
IT managers seek to improve conceptual artifacts through designing interventions. Decisions about which conceptual artifacts to improve and how to improve them are made as IT managers explore/analyze the local situation. Interventions are designed through iterative processes informed by the technology planning cycle shown in Figure 6.2.1.
There are deep connections and similarities between design and research. Both activities progress through problem setting (understanding the context and nature of the problem), problem framing (understanding possible solutions), and problem solving (taking actions to reach logistic goals). Both design and research find participants understanding phenomena, which affects decisions and actions that are evaluated for the improvement of ideas or interventions. “Design itself is a process of trying and evaluating multiple ideas. It may build from ideas, or develop concepts and philosophies along the way. In addition, designers, throughout the course of their work, revisit their values and design decisions” (Hokenson, 2012, p. 72). This view of design supports the iterative nature of the design/construction phase of educational design research. Initial designs are planned and constructed in response to new discoveries made by IT managers; these discoveries can come from the literature or from deeper understanding of the local instances. In terms of progressive discourse, redesign/reconstruction decisions are made as conceptual artifacts are improved.
One of the challenges that has been recognized in school IT planning is that the expertise necessary to properly and appropriately configure technology is usually not found in the same person. As one who has worked in both the world of educators and the world of information technology professionals, I can confirm that we do not want educators to be responsible for managing IT infrastructure and we do not want IT professionals making decisions about what happens in classrooms. A recurring theme in this book has been the collaborative work that results in efficacious IT management. In design/construction this collaboration is most important.
School leaders have the authority to mediate decisions about whose recommendations are given priority at any moment. The attentive school leader will be able to ascertain where in the iterative planning cycle any design/ construction activity is and will resolve disputes accordingly. If teachers are complaining about how difficult a system is to use, then the school leader will determine if the proper use has been explained to the teachers and will determine if the complaining teachers are using the technology as it has been designed. If teachers are not using the system as it has been designed, then the school leader will direct teachers to follow instructions. If they are using the system as designed, but it is still inefficient or ineffective, then the school leader will direct the IT professionals to change the configuration of the system to more closely satisfy the teachers. When there is uncertainty about which configuration to direct, school leaders should accommodate teachers and students whenever it is reasonable, as the experiences of students are most critical to the purpose of the school.
Understanding Interventions
School and technology leaders who model their management after educational design research will engage in a process by which they make sense of the interventions that were implemented and the evidence they gathered. This process is intended to accomplish two goals. First, the IT managers seek to evaluate the degree to which the interventions contributed to the school achieving its strategic and logistic goals. Second, IT managers assess their interventions, the evidence they gathered, and the nature of their designs and products; through this inquiry, they articulate generalizations that can be applied to other planning problems. Based on the view of educational design and progressive discourse that has been presented in this chapter, it is more accurate to suggest IT managers evaluate the degree to which strategic and logistic goals are improved than to suggest IT managers evaluate the achievement of goals. Many school IT managers are motivated to analyze/explore, then design/construct interventions, to resolve a situation that is perceived to be a problem, so they will continue design/construction iterations until the interventions are deemed satisfactory and the problem solved. Even those interventions that are quickly deemed to have solved the problem should be the focus of evaluation/assessment so the factors that led to the improvements can be articulated, used to inform other decisions, and shared with other IT managers.
Compared to those who rely simply on data measured on a single instrument, IT planners who engage in educational design research use more sophisticated evidence to frame their work and this allows them to support more sophisticated generalizations. They can explain their rationale for the initial design decisions that were made as well as the design decisions made during each iteration; they can explain their conclusions and evaluate their evidence. They also tend to have deeper understanding of how the interventions were instantiated in the local community than other planners, so they can clarify the factors that were relevant for local circumstances and they are more prepared to evaluate the appropriateness of the conceptual artifacts, the cost of improvements, as well as to identify unintended consequences of the interventions. All of this contributes to decisions to maintain or change priorities for continued efforts to improve aspects of technology-rich teaching and learning.
Because their design decisions are grounded in theory and the data they collect are interpreted in light of that theory, IT managers use the theory as a framework to understand what effects their interventions had and why those effects occurred. If predictions were not observed, then the situation can be interpreted more closely to identify problems with the prediction, the evidence collected, or the design and construction of the intervention. It is also possible to identify unforeseen factors that affected a particular intervention, but this can only be done when theory is used to interpret the observations.
Generalizations that appear to be supported by observations can also be reported to the greater community, typically in presentations at conferences or in articles published in periodicals. This reporting of findings increases one’s professional knowledge of important practices and it also exposes the work to criticisms and reviews that improve the capacity of the IT manager to continue and that help all participants refine and clarify their knowledge.
The degree to which IT managers’ evaluations of interventions in the local community are valid and reliable, and the degree to which their generalizations are accepted by the greater professional community, is determined by the quality of the evidence they present. Evidence is based in fact. In the vernacular, fact typically means information that is true and accurate. Implicit, also, is the assumption that the fact is objectively defined so every observer will agree on both the reality of the fact and the meaning of the fact. For researchers, facts are grounded in assumptions and established through observation, and observation can refute an idea that was long thought to be a true fact. IT managers using educational design research to frame decisions seek to recognize assumptions and make decisions based on multiple observations.
Consider the example of a school in which school leaders become aware of evidence that hybrid learning is positively associated with students’ course grades (for example, Scida & Saury, 2006). They may direct IT managers to install and configure a learning management system (LMS) so teachers can supplement their face-to-face instruction with online activities. Recognizing there is ambiguous evidence of the effectiveness of hybrid and e-learning tools and platforms on learning (Desplaces, Blair, & Salvaggio, 2015), the IT managers who deployed the LMS may be interested in answering the question, “Did use of the LMS affect students’ learning?” Data-driven IT managers might simply compare the grades of students in sections that used the LMS to grades of students in sections that did not use the LMS. Those planners are assuming grades in courses really do reflect changes in what a student knows and can do (rather than reflecting teachers’ biases, for example).
The first step in answering the question would be to ascertain if there was a difference in the grades between the students who used the LMS and those who did not. In adopting a research-like stance towards the data, the IT managers would look for statistically significant differences between the grades of students in sections that used the LMS and those that did not. They may choose a specific course to study to minimize the number of variables that affect their observations. The efficacious IT manager would recognize these differences could be accounted for by many variables in addition to the use of the LMS, and effects of the LMS might not appear in this initial comparison. Completely understanding the effects of the LMS on students’ learning of Algebra 1 (for example) necessitates further evidence and data; the most reliable and valid evidence includes data from at least three data sources. When studying the LMS, IT managers might answer three questions about the LMS and their students (see Figure 7.3.2).
One relevant measurement might be to compare the performance of different cohorts of students on a common test, such as a final exam given to all Algebra 1 students. Finding statistically significant differences between the scores of students who enrolled in a section in which the LMS was used and those who enrolled in a section that did not use the LMS might indicate an effect of the LMS rather than an unrecognized factor. To minimize bias in such data collection, steps should be taken to ensure all of the exams were accurately scored, and the statistical tests should be performed by those who do not know which group used the LMS (the treatment) and which did not (the control). This is an example of a quasi-experimental design, as the students in this case are unlikely to have been randomly assigned to the sections.
Figure \(2\): Multiple sources of data
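The quasi-experimental comparison of exam scores described above can be sketched with Welch's two-sample t-test, which does not assume the two sections have equal variance. The score lists below are hypothetical, and a real analysis would report an exact p-value (e.g., via a statistics package); this standard-library sketch only computes the t statistic and approximate degrees of freedom.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical final-exam scores: treatment (LMS) vs. control sections
lms = [78, 85, 92, 74, 88, 81, 90, 79]
no_lms = [72, 80, 75, 70, 84, 77, 69, 73]
t, df = welch_t(lms, no_lms)
# A |t| well above ~2 at these degrees of freedom suggests a difference
# unlikely to be chance alone, though a proper p-value and the limits of
# the quasi-experimental design still matter.
print(f"t = {t:.2f}, df = {df:.1f}")
```

Blinding the analyst to which list is the treatment group, as the text recommends, would happen before this computation; the arithmetic itself is the same either way.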
A second observation might be to ascertain if greater use of the LMS is associated with higher scores within sections that used the LMS. The IT managers would have the ability to analyze access logs kept on the LMS that record when individual users log on to the system, and these patterns could be compared to individual students’ grades to determine if there is a correlation between use of the LMS and grades. While a positive correlation between access and grades does not necessarily indicate causation, such positive evidence can corroborate other observations.
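The log-versus-grade comparison above amounts to computing a correlation coefficient. A minimal Pearson r, with hypothetical login counts and grades, might look like the sketch below; a real analysis would first aggregate raw LMS access records into per-student counts.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: LMS logins per student and their course grades (0-100)
logins = [4, 12, 7, 20, 15, 3, 9, 18]
grades = [70, 78, 80, 88, 82, 66, 75, 90]
r = pearson_r(logins, grades)
# A strong positive r corroborates, but does not prove, an effect of LMS use
print(f"r = {r:.2f}")
```

As the text cautions, even a strong r here only corroborates the other observations; students who log in often may differ from their peers in ways the correlation cannot separate.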
A third source of evidence to understand the effects of the LMS on student grades might be to interview students to ascertain their experience using the LMS; the qualitative data collected in this way will help explain differences observed (or not observed) in other data. Together, these three sources of evidence give greater insight into the effects of the LMS on students’ learning than simply comparing grades.
Rationale for the Effort
Compared to other planning methods and other methods of gathering data and evidence, educational design research may be perceived as necessitating greater time and other resources. There are several advantages of this method, however, that justify its use.
First, IT managers adopt researchers’ objectivity and consistency and this reduces the conflict that can arise from the disparate views of the various stakeholders who are interpreting actions and outcomes from different perspectives. That objectivity and consistency also helps conserve the conceptual artifacts that IT planners seek to improve.
Second, students, teachers, and school administrators live and work in a dynamic and evolving environment in which outcomes have multiple causes and an action may have multiple effects. By collecting evidence from multiple sources, IT managers are more likely to understand the interventions as they were experienced by the community. Also, interventions that are developed through iterative processes are more likely to reflect theory. Theory tends to change more slowly than the methods that gain popularity only to be replaced when the next fad arrives. Theory-based evidence tends to provide a more stable and sustained foundation for interventions that improve how conceptual artifacts are instantiated.
Third, education and technology are domains in which individuals can have seemingly sophisticated experiences, but these methods tend to minimize the threats posed by novices believing they are experts. Many educators have purchased a wireless router to set up a home network; this can lead them to assume they have expertise in enterprise networking. Technologists are also prone to believe that the years they spent in school give them expert knowledge of teaching and learning. The clear language and diverse perspectives introduced by IT management adopting research methods preserve the complexity of each field while facilitating common understanding.
Fourth, by evaluating and reflecting on the design process as well as the quality of the interventions, IT managers make sense of the interventions they created and can account for their observations. Those conclusions improve their own ability to design similar interventions, contribute to growing institutional knowledge of how the interventions are instantiated in the community, and can support generalizations that can be used by other researchers and by other managers.
The arc of the book has taken us from reasons technology must play a new and unfamiliar role in education through the components of a technology-rich school to the methods whereby IT managers envision, design and deploy, and improve IT systems in schools. Implicit in all of this work is change; IT managers seek to change the tools students and teachers use, the purposes for which they use them, and the manner in which they are managed. In this final chapter, I present ideas about change, and how leaders can manage and promote change within schools.
The literature surrounding organizational change often uses the terms “change” and “innovation” interchangeably. When organizations deploy innovations, the leaders and members adopt new tools, follow new procedures, and are driven to meet new purposes. Scholars and practitioners in the field also recognize change can affect different levels within the organization. Change can address limited parts of the organization or the entire system, and it can be small or widespread. The strategies used to implement change depend on the nature of the change leaders seek to make. There are several types of change that leaders recognize:
• Procedural change seeks to improve the efficiency of the methods whereby a logistic goal is improved. These are often undertaken in isolation as the inputs into the subsystem responsible for the logistic goal and the outputs from it are unchanged.
• Systemic change seeks to improve the effectiveness and efficiency of many procedures at one time. Rather than addressing procedural change as isolated activities, systemic change considers the complex of procedures, and especially the interactions between procedures, as the important units of change.
• Transitional change is designed to accomplish new goals. Whereas the same strategic and logistic goals can motivate and drive procedural and systemic changes, transitional change alters procedures and systems so that new strategic goals are achieved.
One of the challenges facing leaders who seek to implement changes, especially those that are transitional, is their disruptive nature. Successful organizations have defined structures and procedures and developed culture to meet specific purposes with efficiency and effectiveness. When the purpose of the organization changes, or the previous purposes become obsolete, there is conflict between the previous norms and those needed for the future. Clayton Christensen (1997) observed that disruptive changes are those in which qualitatively different goals are defined for the organization; disruptive change requires structures contrary to those that have been effective, and it has the greatest effects on the structural, human resource, political, and symbolic frames of the organization.
The nature of the change in organizations that efficacious IT managers deem necessary will depend in large part on the existing circumstances and leaders’ and members’ interpretation of those circumstances. It is anticipated that much of the change suggested in this book will be transitional, especially that in which the Standard Model of education is overturned. Educators, like all professionals, include individuals who are comfortable with change and those who are not. Resisting efforts to change the Standard Model of education and defending a marginal role for technology are going to become increasingly untenable positions for educators. The decisions IT managers make will continue to be a force directing this change.
In their 2010 book Switch, Chip Heath and Dan Heath, scholars who study change, attributed resistance to change to three factors, which are observed regardless of the type of change. First, until new practices become habit, people must exert self-control to adopt them; this self-control is necessary to continue using the new practices and avoid reverting to the previous practices. Self-control requires effort, so it is in limited supply. When self-control is exhausted, people return to previous practices.
Second, the greatest motivation for change arises when individuals find an emotional connection to the purpose. The Heaths suggest change arises inside an organization when members become aware of a situation and there is a collective realization that existing practices are contrary to the organization’s goals and fixing the problems will result in important changes in the operation or outcomes of the organization.
Third, change can be difficult when the purpose is unclear. They suggest, “what looks like resistance is often a lack of clarity” (Heath & Heath, 2010, p. 17). For Heath and Heath, the path to change is grounded in clarifying the purpose, providing motivation, and creating pathways whereby motivation is sustained, action becomes habit, and the purpose is achieved.
For IT managers in schools who seek to implement change, Heath and Heath’s model of purpose, clarity, and pathway can be complicated by the nature of education and the nature of motivation. For most of the 20th century, leaders assumed individuals within organizations were motivated by pay and other rewards (increasing these was thought to increase compliance with new practices) and by the avoidance of punishments. While educators are likely to comply with the changes in practice they are directed to make, they are unlikely to internalize the need for them, and they will revert to previous practices when possible.
In his 2009 book Drive, Daniel Pink provided evidence that individuals are motivated by autonomy, mastery, and purpose, so change that is sustained must be based in actions that leverage these aspects of individuals’ work. For Pink, autonomy is largely grounded in self-direction; those who perceive they are able to exert control over how they accomplish their goals are more intrinsically motivated than those who have less control. Mastery is the ability of individuals to improve their abilities in a meaningful way and for a meaningful purpose.
Autonomy is a complicated factor in many organizations and professions, including education. While autonomy is a factor that motivates individuals to engage with and adopt innovations, there is evidence that teachers may exert limited autonomy with regard to instructional practices (Range, Pijanowski, Duncan, Scherz, & Hvidston, 2014). Blumenfeld, Kempler, and Krajcik (2006) suggested autonomy is grounded in the authority to make decisions and the competence to identify and effect a solution. In many cases, teachers lack the authority to be autonomous, and the technology that is the focus of the innovation is unfamiliar and outside their perceived area of expertise.
Further, many teachers have deep personal and emotional commitment to their own education and the practices that marked their entry into the profession and their own teaching. Their understanding of purpose is grounded in these experiences, so teachers who have autonomy may reject the vision, purpose, and pathways to change even if they are clearly and reasonably explained. Most math teachers, for example, became math teachers because they found meaning and value in their own math education; they will resist attempts to change the experience of teaching and learning math. The result is a paradox of autonomy: efficacious IT managers need to increase autonomy for teachers to adopt innovative technology and technology-rich pedagogy, but teachers are not used to having autonomy, and those who do have it may reject the innovation and seek to subvert it.
(While writing this book, I had a conversation with the manager of a manufacturing facility who indicated workers were no longer allowed to perform their own calculations when configuring machines on the factory floor. Several mistakes had been made, and the company had lost tens of thousands of dollars to resolve each one, so the top-level managers decided that calculations were to be done by engineers using calculators or other simulations of the machines, and the engineers tell the operators how to adjust the machines. Math teachers are horrified to hear this story, but the more insightful and forward-thinking take it as motivation to reconsider what they teach and how they teach it. Such teachers are in the minority of those who hear this story.)
Whitworth and Benson (2016) suggested three responses by individuals when they perceive a difference between the purposes of the organization and the structures that are deployed. They may accommodate the change, adopting it and adapting what they do to reflect it. They may relax the definitions (thus creating broader conceptual artifacts) and implement innovations that are nominally different, but that only partially change what they do. Individuals may also subvert change by opposing it or by reverting to previously used tools and procedures.
It appears the task of leading change in education is challenging. A leader can expect to encounter disparate and contradictory perceptions of the purpose of school, which will lead to disparate and contradictory motivation to engage in the activities necessary to change. Directing educators to adopt or adapt to new practices may result in compliance, but compliance is contrary to the agency and autonomy that has been shown to result in changed activity.
Educational leaders, including efficacious IT managers, who seek to effect change can ground their efforts in existing theory related to innovation and change. Leaders who understand organizational frames, the nature of innovations, and how innovations are adopted in organizations or communities are more likely to generate changes in practice that are sustained in the schools they lead.
Schools, of course, are social organizations; they comprise multiple and diverse individuals who, ostensibly, are working to achieve the same strategic goals through the same logistic goals. The term “ostensibly” is appropriate because organizations tend to be filled with individuals who have different perspectives on the purpose and the work of the organization. Bolman and Deal (2008) are explicit about the difficulty of managing organizations: “The world of most managers and administrators is a world of messes: complexity, ambiguity, value dilemmas, political pressures, and multiple constituencies. For managers whose images blind them to important parts of the chaotic reality, it is a world of frustration and failure” (p. 41).
Bolman and Deal propose four organizational frames to help managers deconstruct what is happening in their organizations and then predict and explain the degree to which innovations or changes are accepted and sustained as well as the reasons they are accepted or rejected. Barriers to innovation, they claim, tend to arise within one of these frames and how a manager responds depends on which of these frames may be problematic. The nature of leadership that is necessary to promote acceptable and sustained innovation and change depends in large part on the frame within which the leader seeks to exert influence. By addressing potential problems, building capacity to address them, and increasing awareness of the problems and solutions within each frame, organizational leaders have a greater chance of being efficacious leaders than those who ignore these frames.
Structural Frame
Organizations exist to accomplish goals; the book is grounded in the assumption that schools exist to ensure students participate in the communication and information landscape that dominates their society so they have experience to continue that participation when they leave the school. (Remember I am a follower of John Dewey, so I believe “education is not preparation for life; education is life itself.”)
Within the structural frame, leaders seek to implement new structures through innovations; implicit in an innovation is the perception by members of the organization that structures are different from those that characterized their work previously. Innovations may increase efficiency, often after a period of decreased efficiency as the innovation becomes habit. Other innovations are designed to improve performance by more closely aligning outcomes with the desired outcomes. In some instances, improved performance means accomplishing goals and engaging in activities that were not previously recognized as goals of the organization.
Strategic goals are achieved by achieving logistic goals. Logistic goals, and the strategic goals they support, are achieved through the tools, methods, and procedures that comprise the structural frame including:
• methods for dividing labor (efficacious educational technology depends on different expertise to decide what is appropriate, proper, and reasonable);
• methods for controlling activities within groups assigned a responsibility and for coordinating between groups to connect the divisions of labor;
• methods for establishing hierarchy (some individuals must be able to override others when designing educational technology).
Especially in large and diverse organizations in which the logistic goals are achieved only by individuals with specialized expertise, the division of labor and responsibility is more marked than in other organizations. Efficacious IT management is clearly an example of such a situation, so it is helpful for leaders to further deconstruct the structural frame into components following Mintzberg’s (1979) typology:
• Operating core, which includes those individuals and structures that directly lead to the strategic goal; teachers are the primary personnel, and the materials they use are the primary resources, in the operating core of schools.
• Administrative component, which includes those personnel whose role is to manage the operating core and the structures they use. In schools, principals and other instructional leaders, along with (for example) the system they use to evaluate teachers, are among the structures that comprise this component.
• Technostructure, which includes those components of the structural frame that ensure the system is efficient and effective. In educational technology, this would include the technicians and network administrators, along with CIOs, who maintain the IT infrastructure.
• Support systems, which include those components of the structures designed to facilitate others’ work. The assistant who processes purchase orders for computer hardware is an example of the support systems that comprise the structural frame for educational technology.
Improvements of the structural frame within each component lead to greater efficiency of its operation and greater effectiveness in achieving those logistic goals that fall under the leadership and control of those with that expertise. In general, when innovations affect the operation of a single component, its leaders and members have greater autonomy in making decisions and deploying innovations.
When decisions and innovations affect more than one component, coordination becomes more important to ensure the innovation is effective from multiple perspectives. Coordination depends in large part on effective horizontal communication. Efficacious IT management in schools depends on the participation of leaders from disparate groups, and they have a role in ensuring members of their organizations understand the rationale for the decision, and members have a responsibility for facilitating horizontal communication of structures within their domain to others.
Consider IT managers who are implementing a new ticketing system to report and track malfunctioning devices. The IT professionals must ensure teachers and school leaders understand the importance of using it (a message that must come from all leaders in the school) and they must ensure the system is easy to use and known to all. It is only in this way that the technostructure of the ticketing system can help the IT professionals support the operating core of the organization. Consider, also, the configuration of the student information system. How performance is recorded and scores and grades are calculated depends on the SIS being configured so that it reflects the grading policy of the school. This requires coordination between those with different types of expertise and different responsibilities to ensure the intended outcome is realized.
Within the structural frame, procedural changes are common as those within a division of labor attempt to improve their efficiency and effectiveness. These changes are most likely to be accepted and adopted when there is clear alignment between the changes, the logistic goal, and the strategic goals of the organization. For many leaders, this becomes an exercise in backwards design (see Figure 8.2.1). This finds managers defining the logistic goals in collaboration with disparate leaders. In a manner aligned with progressive discourse (Bereiter, 2002), they define both the language of the goal and the observations that will confirm the goal has been met. Within the component of the structural frame, experts will design and improve structures to increase efficiency and effectiveness.
Figure \(1\): Backwards design
Teachers and other school professionals recognize that leaders who are newly hired in schools or central offices often seek to change practices for reasons unrelated to the efficient and effective operation of the structural frame. Consider the school principal who seeks to implement new procedures that have been effective in other schools where she was the principal. While making the changes may improve performance from her perspective or they may make the structure more familiar to her, they may be resisted by teachers and they may result in less effective school operation than the existing procedures.
For technology-rich organizations, understanding change within the structural frame requires leaders and members to differentiate between the change needed to keep current and the change that is procedural or transformative. Technology evolves. To ensure the information one creates is compatible with that created by others, and to ensure IT systems are compatible, those systems are updated and upgraded. Some of these changes may necessitate that procedures and tools be updated simply to maintain the current level of functionality. While these may lead to more efficient use of IT resources, they generally are not perceived as improvements in the structural frame by leaders or members of organizations.
Human Resource Frame
All actions taken by leaders, and especially those in which they attempt to innovate, have implications for the people who work within the organization. Organizations that are most successful at implementing procedural and transitional changes have employees and members who are fully engaged with the work. They implement existing procedures as designed, and they identify and communicate methods whereby procedures can be improved; they approach transitional change in the same manner. They connect the purpose to the innovation and improve the pathways between the innovations and the new purpose. The human resource frame addresses those aspects of the organization that affect members’ motivation to participate in the changes.
Generations of managers have assumed that individuals would work for pay (or other rewards) or to avoid punishments. While those do work to a limited degree, scholars are beginning to understand the importance of other aspects of work and personality that more accurately predict and explain participation and engagement in change efforts. Efficacious IT managers (and other leaders) now understand the importance of promoting innovations by motivating members and developing human resources in a more complete manner. Bolman and Deal (2008) identified several strategies for fully developing the human resource frame; some of these can be done with the existing human resources while others necessitate changes in staffing.
Management can affect human resources by changing its expectations of members and changing how and why it interacts with members. Examples of these strategies include redesigning structures to align with goals members value and seeking and accepting members’ feedback to refine structures and improve efficiency. Decisions and actions that members perceive as management supporting their development as competent and contributing members of the organization can improve the human resource frame. These strategies do include some of the traditional motivators, such as promoting from within the organization and increasing salaries. In most educational institutions, however, compensation structures are established by negotiated contracts with unions, and many advancement opportunities require additional licenses. Further, teachers who assume leadership roles often find they have less time for their regular duties, so they are less motivated by these strategies than members of other organizations.
Managers can also improve the performance of the human resources frame by articulating a clear vision around supporting employees as valued contributors to the organization. In some cases, the human resources frame can only be improved by changing the individuals who work in the organization. This is especially true when disruptive changes are underway, and the organization cannot continue with those who reject the new purpose of the organization or those who do not have the knowledge, skill, or propensity to adopt and adapt to essential innovations. This is described as adopting a philosophy towards human resources, but in many ways the vision of the human resources frame has symbolic implications. This vision also informs hiring decisions, and managers improve the human resources frame by hiring individuals with the personal qualities that are amenable to adopting and accepting change and innovation. One important aspect of hiring IT professionals is also ensuring there is a match between the technology skills of the individual and the expectations of the job.
Argyris and Schön (1996) suggested leaders who adopt a stance towards communication that combines advocacy and inquiry are perceived as effective in implementing change while respecting important aspects of human resources. Through advocacy, leaders attempt to implement change and they are either assertive or passive. Through inquiry, leaders seek to understand others’ perspectives on situations. In this model, the leaders who are most effective seek to be integrative, both understanding and assertive; they implement change while accommodating others to the extent possible.
When adopting an integrative stance, leaders find a role for both the formal and informal participation of members in decision-making. This requires leaders to provide sufficient structure that the process does not become a “turf war” and that irrelevant factors do not affect decisions. It also requires the leader to provide a sufficiently clear goal. When defining goals and processes, however, leaders can become imposing, which threatens the participation that is necessary to improve the human resource frame of organizational innovation.
Political Frame
All human organizations are political; they comprise individuals and groups who are largely motivated by self-interest as they advocate the organization support a particular set of decisions and actions. Self-interest is grounded in the different values and beliefs held by individuals, as well as different interpretations of information, which are affected by those beliefs and values. Political advocacy is necessary because organizations have limited resources, so there are debates and negotiations that influence decision-making about which problems will be solved and which aspects of the structural frame will be improved. This is the situation from which the political frame of organizational innovation arises.
Implicit in the political frame are power and partisanship. Some individuals and the groups to which they belong have greater influence and authority to make decisions than others, and partisans are those with lesser power who support the recommendations of others. These, of course, are dynamic characteristics within organizations; individuals or groups can gain or lose power depending on changes in how partisans align their support and on other factors, including changes in governance. Differences in political power are also built into many decision-making processes, especially in IT, where those who use the systems (and who must find them efficient and effective) differ from those with the expertise to build them.
Power does arise from various sources including the position one holds; in schools, the superintendent typically has the greatest authority and reports to the publicly elected officials who govern the school. Efficacious IT managers will likely find it necessary to defer to the superintendent as the arbiter of political disputes. These leaders also tend to derive power from the ability to control which decisions are made, how the problems are framed, and what solutions are deemed acceptable. In addition to the superintendent, other school leaders derive political power from their offices, but power derived from position tends to be the most tenuous.
Expertise and the capacity to solve the problems faced by the organization (and that are deemed important and unsolved by leaders) is another source of power. Increasingly, expertise is determined by the nature and extent of one’s professional network, as it is a source of strategies and approaches to problems that one has yet to encounter. Reputation is largely grounded in one’s expertise and the extent to which others are aware of one’s expertise; this awareness is also extended through a wide network.
All leaders and members, including those who hold political power through their office, can extend and expand their political power by negotiating coalitions. An individual who holds expertise that is needed by others gains power and can enter into partisan relationships with others; thus, those who are politically less powerful can gain power by forming these relationships. Astute political leaders will attempt to form partisan alliances with individuals whose sources of power complement those of the leader. Because of the benefits one can gain, the ability to negotiate these partisan relationships is itself a source of political power that can be improved.
Leaders who seek to promote organizational innovation improve capacity within the political frame by encouraging large coalitions of individuals and groups who both support and participate in implementing the changes beyond compliance. Referring to those within organizations who are less powerful due to position, Bolman and Deal (2008) observed, “They accept direction better when they perceive the people in authority to be credible, competent, and sensible” (p. 219). Leaders who have engaged members are more likely to receive accurate and complete feedback from members who are more autonomous.
Political conflict can be a barrier to innovation and even destructive to many aspects of organizations, especially the human resources frame. Efficacious leaders, including IT managers in schools, will recognize the political frame of decision-making, and they will negotiate to leverage collaboration among the stakeholders so that leaders access more complete expertise and those with valuable expertise gain political power. Effective political leaders also develop their own expertise so they are better positioned to recognize the limits of that expertise and to understand the recommendations of others.
Symbolic Frame
Actions, events, and situations can all have meaning for individuals. In organizations, these meanings determine in large part the emotional and intellectual connections members make to the organization and its purposes and goals. These contribute in an important way to the motivation of members to participate in innovative change. Leaders can develop the symbolic frame to affect how members connect to and identify with the organization, the extent to which they value and contribute to improving efficiency and effectiveness, and the coalitions to which they belong.
The symbolic frame is grounded in the themes that people use to organize ambiguous and unclear situations. Culture and its components, such as faith, myths, values, and rituals, all contribute to how the symbolic frame is instantiated. Efficacious leaders who seek to affect the symbolic frame will often craft myths and stories to describe their organizations or their vision for what the organization will become. In many cases these begin as myths, and the organization in fact does not reflect the myth. Over time, as innovations in the structural, human resource, and political frames become aligned with the symbolic vision of the leaders, the vision becomes realized.
A common criticism of leaders who focus on the symbolic frame is that they are “all talk, but no action,” as the symbolic frame is often communicated in grand-sounding, but nebulous, terms. The translation of symbolic language into a clear vision and path is accomplished by defining individuals and the actions of individuals who represent the symbol. This embodiment of the symbols can both demonstrate to members that the vision contained in the symbol is possible and the members can identify with the actions. This allows members to identify a connection to the goals of the innovation which Heath and Heath (2010) observe provides the motivation for change. | textbooks/workforce/Information_Technology/Information_Systems/Efficacious_Technology_Management_-_A_Guide_for_School_Leaders_(Ackerman)/08%3A_Understanding_Change/802%3A_Section_2-.txt |
Everett Rogers’ seminal work the Diffusion of Innovations (2003) first appeared in 1962, and he produced multiple editions in the following decades. Throughout the history of the work, Rogers sought to understand how innovations (new ways of acting or new tools) are communicated (through various channels) over time to the members of a social system. Rogers found similarities in the characteristics of adopters and in the factors describing diffusion across a wide range of organizations, industries, and cultures. Scholars continue to use diffusion of innovation as a model for framing data collection and interpreting results. His observations and theory provide several useful frameworks for efficacious IT managers in schools.
The Nature of Innovations
According to Rogers, the rate at which an innovation is adopted by a group is affected by four factors. First, the users must become aware of the innovation and perceive the ideas, tools, or practices as different from those currently in use. In the world dominated by rapid advances in information and other technology, it is easy to assume innovations must be based on things that did not exist previously. Rogers confirms anything that is unfamiliar can be an innovation. How an innovation is perceived is determined by its relative advantage, compatibility with existing practices, complexity, and demonstrability. In general, innovations that diffuse are those that help one improve performance in a meaningful and efficient way, that are easy to use, and that users can try on a limited basis and the results can be shared with others.
Many educators are familiar with the seemingly cyclic nature of educational reforms and pedagogies that advocates claim are innovative. This can lead cynical, curmudgeonly teachers (a group which occasionally includes the author when faced with leaders whose credibility, competence, and sensibility are dubious) to remind others, “we used to do this years ago.”
Second, diffusion of innovation requires communication, and that communication can occur through various types of channels. Mass media, a channel marked by a single person or group communicating the same message to a large audience, can be an effective method for introducing innovations to a community. Increasingly, social media and professional learning networks that are maintained and cultivated with digital tools are replacing mass media as a method of communicating innovations. This is one reason those with greater networks have greater political power in organizations that seek to be innovative. The diffusion of innovation typically involves interpersonal communication between dyads or small groups within the social system.
Third, innovations occur within a social system or community comprising members who seek to accomplish a particular goal. Some innovations are designed to accomplish essential aspects of the social system; these can be implemented by authoritarian fiat. Others are designed to affect optional aspects of the social system, and these are adopted largely through social influences. Within the social system, there will be leaders whose opinions and perceptions matter to others, and there are various types of decisions that are made. Venkatesh et al. (2003) noted social influences are a factor directly associated with the decision to use technology, thus individuals perceived to be influential are of particular importance when leaders seek to diffuse technological innovations. Social systems, we know, comprise structural, human resource, political, and symbolic frames, and how an innovation affects each frame contributes to the rate at which it diffuses and the extent to which it diffuses.
Fourth, the diffusion of innovations is characterized by time. The rate at which individuals within the social system adopt an innovation defines the five adopter groups that are considered in the next section. The time necessary for an individual to adopt an innovation depends on the delays between learning of the innovation, deciding to adopt it, and actually implementing it. In some situations, individuals may be locked in to other methods because of investments in time, money, or other resources, or for political or symbolic reasons.
Rogers and others have observed that some innovations are discontinued after they enter a social system. Reasons for discontinuation vary, but replacement by another innovation is common; innovation researchers recognize that an innovative tool, practice, or idea will become a traditional practice, which is later replaced by a different innovation. (In education, these innovations often return after a generation of disuse. My grandfather and I used to talk about the innovative new science teaching methods that I was using. We found many similarities between those he adopted during his career in the classroom and those I was adopting. We both were active in our professions and had spent summers attending workshops to learn “the innovative new teaching methods.”)
Users may discontinue using an innovation when they become disenchanted with it, especially when it does not produce the outcomes promised by advocates. Cuban (1986) noted this was a reason teachers discontinued using radio, television, and movies as they emerged in the 20th century. Disenchantment can also arise when innovations prove to be unsafe or when other unforeseen and unintended consequences threaten the effectiveness of the innovation.
Stages of Adoption
Once an innovation enters a community and begins to diffuse, its adoption proceeds in a predictable way as the population accepts it. A small number of individuals are responsible for introducing innovations, and those that prove more efficacious, effective, and efficient tend to diffuse through organizations in five stages. The characteristics of those who adopt an innovation at each stage have also been documented by Rogers and others. Two curves are used to describe and quantify the diffusion of an innovation: a bell curve illustrates the number of individuals in each of the five stages of adoption, and an s-curve illustrates the portion of the population that has adopted the innovation (see Figure 8.3.1).
Figure \(1\): Stages of innovation illustrated
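The relationship between the two curves can be sketched numerically: the s-curve is the running total of the bell curve, so the number of new adopters in any period is the difference between successive points on the s-curve. The following is a minimal Python sketch assuming an idealized logistic adoption model, a common simplification rather than Rogers' own data; the parameter values are illustrative.

```python
import math

def cumulative_adoption(t, k=1.0, t0=0.0):
    """Logistic s-curve: fraction of the population that has adopted by time t.

    k sets how steep the curve is; t0 is the midpoint, when half the
    population has adopted. Both are illustrative values, not empirical ones.
    """
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

times = range(-6, 7)
cumulative = [cumulative_adoption(t) for t in times]

# The bell curve of new adopters per period is the difference between
# successive points on the s-curve.
new_adopters = [b - a for a, b in zip(cumulative, cumulative[1:])]

# Adoption starts near 0%, passes 50% at the midpoint, and approaches
# 100%; new adoption peaks around the midpoint and tapers at both ends.
print([round(c, 3) for c in cumulative])
```

Plotting `new_adopters` produces the bell curve in the figure, and plotting `cumulative` produces the s-curve; the five adopter categories are slices of the area under the bell curve.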
Innovators comprise the first 2.5% of the population of the social system to begin using a new tool or practice or to accept an idea. Individuals in this group tend to be widely connected to others outside the social system or community, and thus have greater exposure to new ideas and tools; in the digital world, innovators may be widely dispersed and use digital tools and social networks to maintain their networks. In addition, these individuals tend to have resources that can be dedicated to experimentation with innovations, and they are open to the risks associated with adopting ineffective or inefficient innovations that do not gain acceptance. This group is illustrated on the far left of Figure 8.3.1.
Early adopters are the next 13.5% of the population to adopt an innovation. Whereas innovators tend to be highly connected outside of an organization or population (thus they are the conduits for an innovation to enter it), early adopters are more highly connected and respected within the organization or population. Innovators seek to identify those who are likely to be early adopters, as those innovations accepted by this group are likely to diffuse more quickly because these individuals exert significant social pressure on others. In addition to vetting the changes introduced by innovators, early adopters become change agents as they become a model for others to follow and they demonstrate the applicability of an innovation.
Members of the early majority are considered followers, as they are the first to follow the example of the early adopters. Rogers quotes Alexander Pope, who wrote in 1711, “Be not the first by whom the new is tried, nor the last to lay the old aside,” to describe this type of user. All adopters proceed from awareness of the innovation through knowledge of the innovation to the decision to adopt it. The early majority tends to take longer than earlier adopters to become aware of an innovation, but once they have knowledge of it from credible early adopters, they tend to make the decision to adopt it.
The second half of the users to adopt an innovation is divided into two groups. For statistical reasons, the late majority comprises 34% of the users and the final 16% of the adopters are the late adopters. Once the majority of the population is using an innovation, the late majority adopters yield to increasing expectations that the innovation be used. They also cite practical reasons, including economic factors and decreasing access to traditional tools, when making the decision to adopt an innovation. In many cases, these users adopt an innovation only after the remaining uncertainties over the effectiveness and acceptance of an innovation are removed.
Rogers and others have used the term laggards to describe the latest adopters. This group tends to retain traditional tools, practices, and ideas until all other options have been removed. While this group tends to be relatively closed, communicating mostly with others who are also later adopters, the reasons for the later adoption of an innovation by this group derive from many factors. Rogers does recognize the tendency in many organizations to blame the individuals who are the last to adopt an innovation, but he criticizes that approach, as important factors related to the organization can be understood by studying the rationale given by later adopters for their delay.
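The familiar percentages for the five categories are not arbitrary; they come from slicing a normal (bell) curve of adoption times at one and two standard deviations from the mean adoption time. A short Python sketch using only the standard library reproduces them:

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution, computed from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Rogers' category boundaries, in standard deviations from the mean
# adoption time: innovators adopt earlier than -2 sd, early adopters
# between -2 and -1 sd, early majority between -1 sd and the mean,
# late majority between the mean and +1 sd, and laggards after +1 sd.
shares = {
    "innovators":     normal_cdf(-2),
    "early adopters": normal_cdf(-1) - normal_cdf(-2),
    "early majority": normal_cdf(0) - normal_cdf(-1),
    "late majority":  normal_cdf(1) - normal_cdf(0),
    "laggards":       1.0 - normal_cdf(1),
}

for name, share in shares.items():
    print(f"{name:15s} {share:.1%}")
# Prints roughly 2.3%, 13.6%, 34.1%, 34.1%, and 15.9%, which Rogers
# rounds to the familiar 2.5%, 13.5%, 34%, 34%, and 16%.
```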
Leaders who seek to sustain innovations within their organizations should analyze later adopters and their rationale for not adopting earlier, as this can indicate system-wide problems with structure, communication, or implementation that should be resolved. Bolman and Deal’s (2008) structural, human resource, political, and symbolic frames can provide a framework for understanding these adoption decisions. In some cases, it is the characteristics of the individuals that lead to laggardly adoption, but in many cases, there are other factors (especially those beyond the control of the later adopters) that affect their knowledge, decisions, or ability to implement an innovation. Understanding these will both help sustain innovations and allow leaders to more quickly diffuse other innovations.
Innovations within Organizations
The diffusion of innovations has been studied in both formal and informal populations. Among the examples Rogers used often in his books were farmers. Innovations in farming practice tend to diffuse through social systems of farmers who grow similar crops in similar environments, and adoption rates are affected by both market factors and production factors, as well as the degree to which one is locked in. A farmer who has recently purchased a machine that is aligned with a traditional practice is unlikely to discard it for an innovative practice until the machine has generated sufficient income. Likewise, a school that has purchased a new student information system, migrated data to it, and trained users in using it is unlikely to abandon it for an innovative new system until very compelling reasons are obvious.
Organizations are characterized by specific purposes, and an organization achieves its purpose through specific roles that are assigned to members, organizational and authority structures, and both formal and informal rules and practices. As described in the previous section, organizations can be deconstructed into four frames which affect how they accomplish their goals and how they respond to change.
Rogers defined organizational innovativeness in terms of the speed at which an innovation diffuses through an organization: the faster the adoption rate, the more innovative the organization. He also found eight factors that affect organizational innovativeness; six are positively associated with it and two are inversely associated.
Given that autonomy is known to be a factor contributing to members’ motivation and willingness to adopt innovations, it is unsurprising that Rogers finds centralized management and high levels of formalized processes to be negatively associated with the adoption of innovations within organizations. In addition to being an obstacle to the entry of innovations into an organization, centralized management and formalized processes slow adoption, as the single entities responsible for approving changes become a bottleneck where the diffusion of innovations slows.
It is perhaps not surprising that leaders’ attitudes towards change are positively associated with organizational innovativeness. Leaders who are more accepting of innovations are more likely to seek out innovations, become active advocates for them, and make decisions and delegate authority in a manner that contributes to the more rapid diffusion of innovations within the organization. Further, members of the organization who have a positive attitude towards change will find fewer reasons to avoid innovations, thus they adopt them quickly, which increases organizational innovativeness. Hiring such individuals (and providing mentors to those who are not) becomes a human resources strategy that can increase the diffusion of innovations.
Interconnectedness and openness are variations on the same characteristic; each determines the availability and use of channels of communication. Organizations with connections to the outside are open and those with deep interpersonal connections within the organization are interconnected. Both of these contribute to the communication that is essential to the diffusion of innovations as they are more likely to enter an open organization and diffuse through an interconnected one.
Slack is a measure of the resources within an organization that are not committed to other purposes. Financial, personnel, and other uncommitted resources can be used to support innovation, thus increasing the capacity for the diffusion of innovations and the development of expertise.
Complexity is an interesting factor. Organizations comprising individuals with greater expertise and knowledge in their areas tend to be more complex, and innovations tend to enter those organizations through those individuals. In general, greater complexity is associated with greater organizational innovativeness. Size is also an interesting factor associated with organizational innovativeness. Larger and more complex organizations tend to have more formalized procedures and centralized structures, which decrease innovativeness. Despite this, Rogers found larger organizations to be more innovative; others (for example, Laforet, 2016) have found smaller organizations to be more innovative.
Even in those organizations in which the characteristics positively associated with the diffusion of innovations are observed, leaders tend to follow a consistent procedure when selecting and implementing innovations, and five processes characterize this work:
• Agenda-setting;
• Matching;
• Restructuring;
• Clarifying;
• Routinizing.
As listed, these represent the chronological order in which innovations diffuse into organizational practice, but in the most innovative organizations, these processes tend to be blurred and the progression is not always linear.
Agenda-setting is the process of identifying a problem within an organization and determining that it is going to receive the attention of leaders and that a solution will be designed and implemented. Because organizations have a purpose which is embodied in their strategic goals, the problems that are solved through innovation are directly related to the degree to which the purpose is met (or not met). Problems typically emerge from the organization’s purpose and may be defined so the organization gains competitive advantage, addresses an unmet need among clients, or otherwise expands its reach or improves its performance. Implicit in agenda-setting is that some aspect of the organization will be changed. In most organizations, agenda-setting occurs within the political frame, and only those situations recognized by the most powerful leaders receive the attention of members or the financial resources of the organization.
Matching is undertaken to ensure the innovation meets the needs of the organization. In some cases, new ideas or new tools are produced by manufacturers or publishers, and incorporating those into the organization becomes a priority, despite the fact that the need did not exist before the innovation was produced by others. Mobile phones are the quintessential example; until they were invented and used by a critical mass of individuals, the devices were not a focus of innovation. Now, mobile devices have led to many innovations in products, processes, and services. Innovators, the first 2.5% of a population to adopt an innovation, play an important role in finding and introducing innovations to organizations and in finding those that may match problems identified in agenda-setting.
Restructuring is the process whereby the innovation is customized to fit the needs and the existing structures (and human resources, politics, and symbolism) of the organization. In some instances, this requires modifying the innovation so the tool, practice, or idea more closely aligns with the existing organization; in other instances, it requires the organization to adapt to reflect the capacity of the innovation. Through either approach, restructuring assures a match between the innovation and the operation of the organization.
No matter how careful and attentive the restructuring process is, there will be gaps in the implementation of the innovation. Anticipated improvements will not be realized and unexpected consequences will emerge, so leaders will support the clarification of the innovation, which also includes both adapting the organization to the innovation and adapting the innovation to the organization.
Once innovations are tuned to the needs and structures of the organization, they become part of the routine. Once completely routinized, innovations are no longer innovations; they become the traditional practices and tools that are replaced by new innovations.
Conclusion
We are working at a moment in history when education is changing. For more than a generation into the 21st century, adults have been trying to figure out how to create schools that reflect the changing society and culture. For those generations, adults have spoken of the need to create “21st century schools.” (I have a former colleague who would recoil every time she heard that phrase. “It’s too late,” she would say, “it’s going to be over before we stop talking about building schools for it.”) These adults have been grounding all of their recommendations in old and outdated assumptions about teaching and learning and technology. Compounding the problems that arise from this is the rate at which everything changes—what we teach, how we teach, and the tools we have for teaching change far more rapidly than they did for previous generations.
What has become clear to me in the time since I began my career in the field, and even since I started drafting this book, is that the schools we need now, and that we will continue to need long after I have retired, will be places where great expertise comes together to create a place that cannot be created by any one individual. Our future schools depend on:
• Information technology that is always functioning and available to all students and teachers.
• Teaching and learning that is diverse and responsive to the needs of teachers and learners and that prepares all for the unpredictable future.
• Decisions that ensure these schools exist and that all families can send their children to one of these schools.
Efficacious educational technology supports, enables, and facilitates students as they become full participants in the computer- and network-rich communication landscape of society. Differences between how IT is provided and managed in other organizations compared to educational organizations can pose challenges for school leaders and the IT professionals they hire from other industries. It is through the collaborative efforts of educators, information technology professionals, and school leaders that educational technology becomes efficacious.
In 1993, Seymour Papert imagined two time-traveling professionals from 100 years earlier; he speculated the physician would be flummoxed by the activity and the technology in the 20th century medical clinic, but the teacher would find the activity and the technology in a 20th century classroom very familiar. Papert based his speculations on the degree to which medical practitioners had adopted and adapted to technological innovations compared to educational practitioners. In the decades since, we who work in educational technology have made some progress in creating schools that would flummox the teacher in Papert’s tale, but the work is far from complete.
The technicians among us have deployed computers that connect to servers, switches, routers, and other network devices so the Internet is available from nearly every corner of nearly every classroom in nearly every one of our schools. We use sophisticated software to manage those networks; our networks store and protect all varieties of data about our students, our curriculum, and our operations. Further, our networks provide robust and reliable access to vast information and global interaction through devices that our schools own and that students, faculty, and staff own and bring to school. That information technology (IT) infrastructure has not, however, transformed teaching and learning in the manner that has been promised by so many advocates. The observation that much teaching and learning remains as it was prior to the arrival of digital tools continues to be made by scholars who study teaching and learning (Luckin, Bligh, Manches, Ainsworth, Crook, & Noss, 2012; OECD, 2015; Tondeur, van Braak, Ertmer, & Ottenbreit-Leftwich, 2017).
The laggardly rate at which technology has changed what happens in classrooms may not be surprising, however. Larry Cuban, a well-known scholar from Stanford University, studied the effects of electronic media (radio, television, and movies) on education earlier in the 20th century and found them to be inconsequential. He noted, “Claims predicting extraordinary changes in teacher practice and students’ learning, mixed with promotional tactics, dominated the literature in the initial wave of enthusiasm for each new technology” (Cuban, 1986, p. 4), but observation proved these tools were no better than teachers using other information technology at conveying information. Something appears different, however, about the computers and digital networks we have today compared to earlier media. For the most part, earlier electronic media did not become as widely used for official purposes in the way that digital technology has become the default for legal and governmental communication. Nor did it become so widely adopted for interaction, nor did it become widely used for people to create information in the way that digital technologies have. Previous generations of American citizens listened to the radio for entertainment as they completed paper copies of their income tax returns which were mailed to the Internal Revenue Service. Now, we listen to streaming media and carry on conversations over text messaging as we complete and file our tax returns via the Internet. In those areas where IT infrastructure has been installed it has come to dominate all aspects of economic, political, social, and cultural life.
The leaders of almost every school face the same challenging situation: They must create schools that reflect the dominant role of digital IT in society and they must prepare students for that world; but the changing landscape of teaching, inadequate technical expertise, and limited resources are genuine barriers to this work. What we know, how we know it, and what we know about learning are advancing at a rate that far outpaces teachers’ capacity to respond. Operating and maintaining the IT systems in schools requires expertise that is far beyond that of the “tech-savvy” teachers who managed the first IT systems installed in schools. IT professionals who are “imported” into education from other businesses and industries often find the practices, assumptions, and expectations that served them well in other settings do not transfer into education. Teachers and students are different from other workers, and the IT (including the hardware, the software configurations, and the personnel) they rely on for their work must accommodate those differences. IT is also a capital-intensive aspect of operating schools. Devices and network upgrades can consume years’ worth of technology budgets in a short time, and the total cost of ownership of devices places ongoing demands on budgets. Further, technology introduces new and rapidly evolving regulatory and policy issues into school management.
The situation regarding IT management in many schools is well-captured by the hypothetical (and sarcastic) Putt’s Law. According to Archibald Putt, “Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand” (Putt, 2006, p. 7). Further, Putt articulated a corollary, “Every technical hierarchy, in time, develops a competence inversion” (p. 7). While these words are intended to be humorously cynical observations, they do describe the current state of IT management in schools:
• Technology professionals configure IT systems for students and teachers, but they are unfamiliar with emerging technology-rich pedagogy. In Putt’s terms, IT professionals are managing devices for purposes they do not understand.
• Educators complain about the IT systems in schools, but they don’t understand the complexity of managing IT systems, the potential conflicts and threats to the operation of enterprise IT, and general chaos that can result when enterprise networks are not tightly controlled. In Putt’s terms, educators seek to manage IT they do not understand.
• School leaders make budget and personnel decisions that impose unrealistic limits on IT professionals, and they advocate for practices that are beyond the capacity of the available IT or contrary to the professional tendencies of the teachers.
The schools in which students participate in the digital world are places in which IT infrastructure is available and functioning; the existence of this infrastructure is absolutely dependent on skilled IT professionals to operate and maintain it. These schools are also absolutely dependent upon skilled educators who plan and facilitate learning experiences in which all students access, manipulate, analyze, create, and disseminate information using the IT. To improve efficacy in these schools, teachers’ critiques of the IT they use, as well as their requests for new features, must be accommodated by IT professionals because educators best understand how the IT affects students. Further, these schools are absolutely dependent upon school administrators who understand the demands of maintaining IT in an operational state as well as the emerging needs of teachers. Together, educators, IT professionals, and school administrators must collaborate for efficacious technology management in schools.
Within any organization, leaders define a small number of strategic goals; these indicate the conditions they seek to make true, and the success of the organization is determined by the degree to which these goals are accomplished. When an organization achieves its strategic goals, we recognize the leaders and members have been efficacious. Throughout this book, I refer to “efficacious IT managers,” a group comprising teachers, IT professionals, and school leaders whose decisions and actions lead to the strategic goals being realized.
Each community defines its own strategic goals, but I fully expect every reader of this book is associated with (or hopes to become associated with) a school in which leaders have articulated a strategic goal such as “Students will fully participate in the communication life of our society which is dominated by digital information technologies.” (My choice of words models John Dewey, who is credited with saying “Education is not preparation for life; education is life itself.” Therefore, the strategic goal is written to participate in the information life of society, not simply to prepare students for it.)
This book was written to support school professionals (educators, technicians, and leaders) as they become efficacious IT managers. It concerns both the decisions they make and the actions they take to ensure the information technology infrastructure installed in schools is useful to teachers as they work with learners who are becoming citizens in the emerging digital world. This book is intended to help IT professionals understand the world of education and to help educators understand the world of IT.
Because strategic goals are generally too broad to guide meaningful action, planners define logistic goals. In situations where the logistic goals are aligned with the strategic goal, there will exist a positive association between achieving the logistic goals and achieving the strategic goal. With regard to information technology in schools, logistic goals must ensure decisions are made and actions taken to create technology that is appropriate, proper, and reasonable (see Figure \(1\)).
• Teachers (those who spend their days working directly with students) steer decision-making processes so that IT systems are appropriately configured to be useful for the curriculum they teach, the pedagogical methods they employ, and the developmental circumstances of their students.
• IT professionals implement decisions so that IT systems are properly configured; this ensures the IT is operational, it functions as expected, and it is secure.
• School administrators govern decision making to ensure IT systems are reasonably configured and supported to meet the needs of learners and to reflect local priorities and limits. Reasonableness is a relative term and it is defined locally; budgets, existing policy and procedure, and similar factors affect what is deemed reasonable.
Figure \(1\): Dimensions of efficacious IT management

A situation I encountered when writing an early draft of this book serves to illustrate how proper, appropriate, and reasonable configurations of IT can influence teaching and learning. I was asked to help resolve some “network problems” in a school. Math teachers had complained that students could not access the online grade book from the computers provided under the recently begun one-to-one initiative. It turned out the network administrator had configured the permissions and switching so that students were unable to access the online grade book while at school. He reasoned, “We need to prevent students from trying to ‘hack’ their grades.” The principal responded, “That seems an insignificant threat, and it prevents students from tracking their grades when they are here at school. It is essential they be able to see their grades while in class with their teachers present,” and he directed the network administrator to reconfigure the network. In this case, the network administrator properly configured the network (he had successfully prevented students from accessing the server), but the configuration was inappropriate (it prevented access to information necessary for teaching and learning), and it was deemed unreasonable (thus the school administrator who had authority insisted the configuration be changed).
If information technology is to facilitate realization of the strategic goal of allowing students to fully participate in the digital world, then it must be appropriately used, properly configured, and reasonably supported. Deficiencies in any of these aspects of technology management threaten the overall efficacy of the IT managers. To ensure those with expertise in all three aspects of IT management are involved in planning, decision-making, and designing and implementing interventions, most schools convene technology planning committees. These groups have made schools into physical places rich with screens and connections to online spaces. Even in those schools served by well-functioning committees, technology management may not be as efficacious as it could be; it can be inefficient, ineffective in some areas, and incomplete for some populations of students. I have come to conclude the root cause of much inefficacy is a lack of shared understanding among the disparate professionals involved in IT management.
Fundamentally, educators and IT professionals understand technology in different ways. Even steps that seem to be necessary for reliable and secure computers can be differently perceived and understood by different groups. Consider complex passwords; IT professionals perceive them to be a simple method for keeping the network secure (which they are), but teachers can find them to be an impediment to quick access, especially for those students with emerging keyboarding skills. Consider, as well, the example of operating systems. Installing operating system updates in a timely manner is an essential step of keeping systems secure, thus reliably available. Teachers, however, who find their lesson delayed as they wait for computers to finish installing updates will see the updates as interfering with the reliability of the machines (of course, updates are becoming less disruptive as schools have adopted Internet-only notebooks). The school administrator who is an enthusiastic user of his or her tablet for personal and professional work may not understand the difficulty of managing those devices in multi-user environments; this leads IT professionals to push back against his or her suggestion that tablets be purchased for students.
Negotiating what is appropriate, proper, and reasonable is difficult when the participants in management decisions approach the problem from different perspectives, have different concepts of the same terms, and interpret the same circumstances differently. Efficacious IT management is made more difficult still by the disparate ways the three groups who must collaborate approach the problems they solve.
Designing IT systems is a typical tame problem (Rittel & Webber, 1973); it is well understood, and systems are designed using known procedures. IT professionals can clearly describe the networks they seek to build and maintain, and the procedures for building and troubleshooting computer networks can be transferred reliably from one design project to another. Further, IT systems can be tested and redesigned before they are deployed to users. Teaching, on the other hand, is a wicked problem; it is not clearly understood, there are multiple and interconnected factors that affect how its effectiveness is judged, and those factors are incompletely known. Further, different individuals will judge the same outcomes differently. Successful teaching depends on learning (which is a physiological and a psychological process as well as a social one), and many educators recognize that the best teaching does not always influence learning in the intended manner. School leadership is largely a political process, so the way it proceeds and the measures of its success are entirely dependent on perceptions, power, and priorities. Because of these fundamental differences in their work, technology professionals, teachers, and school administrators can find their IT management is affected by the silo effect. For most of their work hours, these professionals work in separate locations, and they apply different knowledge and skills to the problems and tasks specific to their areas of expertise. While educators, IT professionals, and school leaders all assume responsibility for effectively and efficiently realizing their logistic goals, the nature of those goals and their connection to the strategic goal must be understood collectively if IT management is to be efficacious.
For teachers, IT professionals, and school leaders to make decisions and take actions that build effective digital learning environments, they must build a common language and understanding of the nature of the problems and how acceptable solutions will be recognized. When school IT managers share understanding of what needs to be done, what everyone can expect to see when it is done, and how they should approach the work, they will be more successful in achieving strategic goals.
Through this book, I seek to support those who are interested in generating common language, understanding, and actions so that communities realize the goal of creating and sustaining schools that are places and spaces for digital learning. I define the context in which educational technology is used, the dimensions of educational technology, and the processes that can facilitate this collaboration. My work is grounded in assumptions about the users of IT in educational institutions and it recognizes the role of theory in IT management.
15: My Assumptions About Users of School IT
The education and experience that prepares IT professionals to properly configure IT infrastructure in schools is unlike the professional preparation of educators. To earn teaching credentials, educators must complete undergraduate and graduate programs at accredited institutions of higher education, pass tests, and meet other requirements specified by the regulatory agencies that grant teaching licenses.
The government oversight that marks educator licensing is not required for those who work with IT systems in schools. IT professionals become qualified to enter the field in two ways. First, they earn degrees from colleges and universities. Second, they pass exams created by professional organizations and by companies that build and sell hardware and software. Interestingly, these two paths do not necessarily coincide. Consider an individual who earns an undergraduate degree in information systems. The graduate will have taken courses in network management, network security, databases, and other aspects of IT systems. Those courses are likely to be vendor-independent, so students learn the theory and practice of IT management common to all information systems. Cisco, the manufacturer of networking devices, certifies that individuals who pass the exams it publishes can properly configure the devices it sells, but it makes no claims about their other skills. An information systems graduate may be unable to pass a Cisco exam, but the degree program was not intended to prepare students for those tests. Because the contents of the tests are very specific, one may be able to pass a Cisco exam without holding a degree. Both individuals may be qualified to properly configure IT systems in schools, but neither the undergraduate degree nor the Cisco exams address the needs of users in educational organizations.
IT professionals who arrive in schools are also likely to have experience working in fields other than education. While the steps needed to properly configure IT networks are the same regardless of the nature of the users, the appropriate configuration does depend on the nature of the users. The differences between users in schools and users in business are relevant to the design of IT systems, and IT professionals may find that configurations that were proper and appropriate in business are proper but inappropriate in schools; thus, they must reevaluate what they believe to be best practices for managing and configuring IT. The differences between users in business and industry and those in educational organizations (especially K-12 schools) are based on both the skill levels of the users and the nature of teaching and learning as information tasks. These differences are summarized in Table 1.5.1.
Some users of school IT do resemble users in other businesses and organizations; for example, in the business office of any school there are professionals who manage finances. Those individuals need access to accounting software so they can process invoices and pay bills, just as finance professionals in all organizations do. They know the tasks they are assigned and have been trained in how to do them. They will do that job daily (with regular and predictable variation, such as completing and distributing tax forms) and indefinitely. The computer room in an elementary school served by that business office, by contrast, will be used by students who are early in the process of learning to read as well as by teachers who are working on graduate courses, so the users of the computer room have much more varied needs.
Table \(1\): Comparing IT users in different organizations
Teachers are likely to vary their curriculum and instruction based on the needs of particular students and groups, and those needs may not be known until they meet the students and work with them for several weeks. Perhaps the most important characteristic of school users is the compulsory nature of being a student. Whereas underperforming users in business can leave (or be removed from) the situation, the professionals responsible for school IT have a legal and moral obligation to provide appropriate IT environments and experiences for all students.
In the vernacular, “theory” is associated with ideas that are incomplete or not necessarily true. Among educators and other pragmatic professionals, such as technologists, theory is often associated with unrealistic or idealistic thinking that has little connection to their work. Those interpretations of theory are unfortunate, however, as theory can inform and focus decisions made by all who participate in school IT management. It is reasoned that making decisions and taking actions without addressing theory leads to inefficient and ineffective decisions and actions.
Grounding decisions and actions in theory allows decision makers to take advantage of three affordances that make it particularly useful for efficacious IT management in schools. First, every theory clearly identifies those factors that are relevant and that deserve managers’ attention as they design interventions. Even when professionals are working within their field of expertise, they often overlook important factors, they dedicate resources to irrelevant factors, or they accept assumptions that have been disproven by research. Theory supports the design of interventions that focus on what matters and only what matters.
Second, theory allows IT managers to predict the changes that will be observed once decisions are implemented. Conveniently, theory also suggests methods for collecting data that will confirm or refute those predictions. Although instruments designed to collect research data may not be appropriate for evaluating interventions in schools, theory has been elucidated with instruments and methods that can be adapted by IT managers as they seek to evaluate management decisions and actions in schools.
Third, theory affords explanations. The reason researchers do their work is to identify and support cause-and-effect relationships. While it is exceedingly difficult to establish cause-and-effect without experimenting (and true randomized double-blind experiments are unusual in education for a range of reasons, including ethical considerations), theory can facilitate our understanding of why IT projects in schools failed or succeeded. If our predictions are accurate, then we explain them in terms of theory. If our predictions fail, then we use theory to understand what happened and why. In both cases, theory results in deeper understanding of our unique situations and the decisions we make and actions we take.
Several theories and frameworks relevant to IT management in schools are presented in this book. Educational technology is a field in which some work can be conducted from an atheoretical stance. The technician repairing computers has little concern for theory, but teachers’ actions are informed by theory (even if it is not articulated). Theory, nevertheless, plays an important role in how managers undertake their work and in providing a structure within which technicians, teachers, and all others who contribute to the technology-rich school function. Without theory, IT managers are likely to abandon interventions before they have matured to the point where expected improvements are widely observed, distracted as they are by emerging fads that promise unreasonable outcomes.
Two theories have been widely applied to problems in educational technology, and these can be used to explain and predict many situations and the results of many interventions in the field. When there appears to be no other theory to inform decisions and evidence, the technology acceptance model and cognitive load theory can provide insight for IT managers.
Technology Acceptance
The technology acceptance model (TAM) was first elucidated to understand the observation “that performance gains are often obstructed by users' unwillingness to accept and use available systems” (Davis, 1989, p. 319), and it has been used to study decisions to use (or avoid) technology in many settings. Variations of the technology acceptance model have been used to develop and refine both IT systems (hardware and software) and organizational practices that rely on IT systems. It is used to predict and explain both how individuals interact with IT and patterns of IT use within groups, and it is used to change perceptions of technology and patterns of technology use.
In 2003, Venkatesh, Morris, Davis and Davis modified the TAM into the Unified Theory of Acceptance and Use of Technology (UTAUT); in this work, the scholars combined eight different theories that predict the decision to use technology into one model. According to UTAUT, four factors are positively associated with the use of technology: performance expectancy, effort expectancy, social influences, and facilitating conditions (see Figure 1.6.1).
• Performance expectancy is a measure of the extent to which an individual believes technology will affect his or her job performance in a positive manner. It is rooted in efficiency, relative advantage, and outcome expectations. Interventions that lead to increased efficiency or improved outcomes will see greater use.
Figure \(1\): Factors directly associated with technology use (adapted from Venkatesh, Morris, Davis and Davis 2003)
• Effort expectancy is a measure of the individual’s perceptions of how easy it is to use the technology; users intend to use systems they perceive to be easy to use.
• Social influences are related to the individual’s perceptions of how others perceive the technology and its use; technology used by others whose opinions are valued will see greater use.
• Facilitating conditions include structures that provide responsive and effective technical support, adequate replacement plans, access to necessary training, and other supports. The more highly functioning the systems that maintain and provide technology in an organization, the greater the use of that technology.
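As a hypothetical illustration, the four factors can be sketched as a simple weighted score. The weights and the 1-5 perception scores below are invented for the example; UTAUT is validated statistically and does not prescribe a fixed additive formula, so this is only a sketch of how the factors combine.

```python
# Hypothetical sketch of UTAUT's four factors combined into a single
# intention score. The weights and the 1-5 perception scores are
# invented for illustration only.

def intention_to_use(performance, effort_ease, social, facilitating,
                     weights=(0.35, 0.25, 0.20, 0.20)):
    """Combine four perceived factors (each scored 1-5) into a 1-5 estimate."""
    scores = (performance, effort_ease, social, facilitating)
    return sum(w * s for w, s in zip(weights, scores))

# A teacher who sees a clear benefit but finds the tool hard to use:
print(round(intention_to_use(performance=5, effort_ease=2,
                             social=3, facilitating=4), 2))  # → 3.65
```

The sketch makes visible the point developed below: the inputs are each individual's perceptions, so the same technology can yield very different intention scores for different members of a school community.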
It is notable that these factors, which are associated with one’s intention to use a technology, are based on each individual’s perceptions. In a school, different populations and even different individuals within a population may perceive the same technology differently, and those differences will affect individuals’ intentions to use the technology. Efficacious IT managers will use UTAUT as a theory to explain observed uses of technology and to predict interventions that will change those patterns. Changes can be made to affect those factors, and failures to observe the expected changes can be evaluated in terms of either the effectiveness of the changes or perceptions of them.
Cognitive Load Theory
While the technology acceptance model can explain and predict the decision to use a technology, cognitive load theory (CLT; Sweller, Ayres, & Kalyuga, 2011) predicts and explains technology use once it has been adopted. CLT is based on the assumption that using information (and the information technologies used to communicate it) requires attention, perception, thought, and memory; thus, it is a cognitive activity. Further, human cognition is a zero-sum quantity; each individual has a limited quantity of cognition available at any moment, and cognition used for one purpose is not available for another purpose.
Theorists identify three types of cognitive load that characterize an information task:
• Intrinsic cognitive load is that which is necessary to understand the task and to use the information necessary to accomplish it. Changing the task changes the intrinsic cognitive load, and steps taken to reduce it result in a different task.
• Germane cognitive load is that which is available for the learner to think about, strategize about, and come to deeper understanding of the ideas and information in the task. Learning occurs only when germane cognitive load is available and the amount available limits what can be learned.
• Extraneous cognitive load is that which is wasted by the learners managing bad design or poor organization of information or information technology tools. Using unfamiliar tools can also increase extraneous cognitive load.
When designing the information tasks and the information technology platforms that are used for teaching and learning, efficacious IT managers seek to minimize extraneous cognitive load and maximize germane cognitive load. It is reasoned that changing the intrinsic cognitive load is accomplished only by changing the task; therefore, reducing the extraneous cognitive load is the only method of increasing the cognitive load available for germane purposes.
Consider the example of graphing calculators. Using this technology, one can minimize the extraneous cognitive load of drawing the graph of a sophisticated function, so more cognition can be dedicated to understanding the mathematics. When first encountering a graphing calculator (or when encountering an unfamiliar model), determining how to use the device increases the extraneous cognitive load of graphing. This explains the practice of introducing such technology with simple and familiar examples. Once the technology, its operation, and the manner in which it displays information become familiar, the extraneous cognitive load of using it decreases, so the advantage of using it for sophisticated mathematics is realized.
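The zero-sum reasoning above can be sketched in a few lines. The capacity of 100 and the load values below are invented for illustration, since CLT does not assign numeric units to cognitive load; the sketch only makes the arithmetic of the zero-sum assumption concrete.

```python
# Hypothetical illustration of CLT's zero-sum assumption.
# Capacity and load values are invented; CLT does not quantify load in units.

def germane_capacity(total_capacity, intrinsic, extraneous):
    """Cognition left over for learning after the task's intrinsic demands
    and any extraneous load are accounted for."""
    remaining = total_capacity - intrinsic - extraneous
    return max(remaining, 0)  # demands beyond capacity leave nothing for learning

# A familiar calculator imposes little extraneous load, leaving room to learn:
print(germane_capacity(100, intrinsic=60, extraneous=10))  # → 30
# An unfamiliar one consumes cognition that could have supported learning:
print(germane_capacity(100, intrinsic=60, extraneous=45))  # → 0
```

Because the task (and so its intrinsic load) is fixed, the only lever in the sketch that increases the germane remainder is reducing the extraneous term, which is exactly the design principle stated above.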
As with all theory, CLT predicts and explains what may be observed in technology-mediated teaching. Devices may go unused and procedures may be avoided because they introduce excessive extraneous cognitive load. CLT also helps IT managers understand changing perceptions and uses of technologies; as technology solutions become more familiar (through training and use), they should become more widely used.
17: The Organization of My Solution
When writing this book, I sought to answer three questions about educational technology. The first is “Why do we need to plan for efficacious IT in schools?” In “Chapter 1: Information Technology, Society, and Schools,” I answer this by describing the active influences of computers and related technologies on humans and our organizations, including schools. My purpose in beginning with this chapter is to establish the context within which strategic goals must be defined and realized and to establish the complex nature of technology in schools and the nature of change within human organizations.
In the next five chapters, I describe the dimensions of educational technology. These are the aspects of educational technology that IT managers must address. The chapters include: “Chapter 2: Technology-Rich Teaching and Learning,” “Chapter 3: Access to Sufficient Computing Devices,” “Chapter 4: IT Networks,” “Chapter 5: Web Services,” and “Chapter 6: Technology Support Systems.” In these chapters, I answer the question “What systems must school and technology leaders create?” The focus of these chapters is largely on information technology infrastructure and the ancillary systems necessary to ensure logistic goals are defined to address relevant purposes. While some IT professionals will find this information insufficient to provide configuration advice, they will find it helpful for understanding the level of expertise they can reasonably expect school leaders to demonstrate. Further, it deconstructs the many potential activities of IT managers so they can focus on the essential tasks.
The final two chapters address the question “How should school and technology leaders approach planning and decision-making?” Progressive discourse, a model that allows for (and necessitates) shared understanding and valid evidence, is described in “Chapter 7: Discourse, Design and Data”; and some trends and generalizations that inform all leadership and management decisions are considered in “Chapter 8: Understanding Change.”
Information technology exerts strong and active influences on the humans who experience it. In this chapter, I explore those effects and describe how they define aspects of economic, political, and cultural life as well as the schools that reflect those realities.
What we think, how we think, and what types of thinking we value depend largely on the nature of the information technology we experience. The effects of information technology on human cognition are so deep that many are unaware of the degree to which it affects us, or even that it affects us at all. Scholars refer to such deeply embedded aspects of civilizations as paradigm mediums. Brad Mehlenbacher (2010), a scholar at North Carolina State University, observed, “Once these developments are in place, it becomes exceedingly difficult to disentangle them from predictions about the future” (p. 7), and, he continues, they “form the very core of our systems for understanding, conceptualizing and promulgating knowledge” (p. 7). For individuals immersed in these paradigm mediums, the mediums determine what is expected of other people and what comprises the environment perceived to be natural.
Humans tend to become aware of the effects of paradigm mediums only during those periods when they change in significant ways or when they are replaced. The current generation of educators is working at the historical moment when digital information consumed on screens is replacing print information consumed on paper, and we are observing changes in cognition and education similar to those observed throughout history when paradigm mediums changed. The strategic goals that focus efficacious IT management in schools will be grounded in emerging paradigm mediums and intended to allow students to participate in a world that is dominated by digital information. In this chapter, I explore the nature of the influences of information technology on society and its schools. Understanding the nature and extent of these influences will prepare school IT managers to articulate strategic and logistic goals that accurately reflect the technology-rich nature of society.

The role of microcomputers in curriculum and instruction has been debated since they first arrived in schools; some educators advocate for quick adoption of every new tool while others advocate for avoiding digital technology altogether. Disparate perceptions of emerging information technologies among educators are not a new phenomenon. In his 2011 book The Information: A History, A Theory, A Flood, James Gleick noted that Plato criticized those who sought to teach writing when he observed, “You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom” (p. 30). The wisdom of Plato did not require writing. Gleick goes on to quote Thomas Hobbes, the 17th-century philosopher, who commented on preliterate cultures (those that lack writing): “There was no method: that is to say, no planting of knowledge by itself, apart from weeds and common plants of error and conjecture” (p. 49). For Hobbes, no writing meant no wisdom.
In the time between Plato and Hobbes, writing expanded throughout society, disrupted patterns of information use, and redefined what it meant to be “educated.” Plato perceived writing as a degradation of human skills, so he rejected the emerging information technology and recommended that others reject it as well. In this, Plato lost. We can predict a similar loss for those who advocate that we avoid the technologies emerging today.
We are in the midst of a disruption similar to that caused by writing, and literacy skills that have been useful for generations are no longer sufficient. My grandfather graduated from the University of Vermont in 1939, and I have some of his textbooks on my bookshelves along with the textbooks I used while an undergraduate student at the same institution 49 years later. The content of the textbooks (we both studied biology) is vastly different, but the literacy skills useful for his books were equally useful for mine (including our shared habit of writing in our textbooks). While alternatives to print media have always played a minority role in curriculum, digital media are increasingly the mode of content, and are coming to dominate in some content areas. A 2014 report on National Public Radio (Kestenbaum, 2014) detailed the growing trend of publishers replacing printed textbooks with digital versions. Publishers are motivated by the single-use nature of digital texts; each student who enrolls in a course must purchase access to digital textbooks, whereas students can recycle printed textbooks until the professor adopts a new one.
The emergence of computers and other digital devices, the information accessed through them, and the capacity to rapidly manipulate information using them is challenging deeply held beliefs about cognition and learning. It is no longer tenable to argue that technology is marginal to the curriculum, nor is it tenable to use computers and associated technologies as an add-on to the curriculum to be used for enrichment purposes. It is only through using digital technologies to access, manipulate, create, and disseminate information that students fully participate in 21st century society. Because this shift from print to digital information is still incomplete and the technologies are still emerging, strategic goals for schools will be actively renegotiated to reflect changing technologies and associated societal expectations into the future.
02: Information Technology and Society
There can be little question that characteristics of our brains differentiate humans from other creatures. Increasingly, cognitive scientists recognize our brains are designed for the social interactions that have allowed humans to cooperate, and this cooperation has enabled our species to avoid extinction. Cognitive and developmental psychologist Michael Tomasello (2014) described the importance of social interaction for human nature when he observed, “Humans biologically inherit their basic capacities for constructing uniquely human cognitive representations, forms of inference, and self-monitoring, out of their collaborative and communicative interactions with other social beings. Absent a social environment, these capacities would wither away from disuse....” (p. 147). As much as we are a social species, humans are a technology-using species. It is through technology that humans have extended their capacity to manipulate and control the environment. These effects have led scientists to define the Anthropocene as the era in which humans are changing the world on a geologic time scale (Waters, Zalasiewicz, Summerhayes, Barnosky, Gałuszka, & Wolfe, 2016). When using information technology in the 21st century, humans are both social and technology-using at once. Through our IT, we interact with people across the globe just as quickly and easily as we interact with individuals in the next room. In the next section, I present information technology as a factor in society that exerts strong and active influences on individual humans, the organizations we create, and the cultures that emerge.
Human bodies are well adapted to communicate with other humans, but that communication is limited. Successful human communication requires that the individuals be close enough to hear or see each other, that they share a language, and that the message be sufficiently noise-free to pass between them. If a spoken message is perceived, then it can enter the recipient’s memory, resulting in two copies: one in the sender’s brain and one in the recipient’s brain. We know through experience and experiment that those memories are faulty and fading, which makes communication incomplete and inconsistent.
Humans have created many technologies to mediate communication. Our capacity to use sophisticated language allows humans to encode complex ideas in words, and we have invented many technologies to encode those words in memory systems more reliable than the human brain. Prior to writing, these technologies included the repetitive patterns in epic poems, communal call-and-response songs and tales, and quipus, the knotted strings used by people living in the Andes Mountains of South America (Wright, 2007). In Western societies, we mark the beginning of print as the dominant information technology with Gutenberg and his press in the middle of the 15th century, but printing presses were in use in Europe and Asia centuries earlier. The electronic digital computers found on students’ desks (and in their pockets) are the latest in a long series of devices invented to encode, store, and transport information in a manner more resilient and far-reaching than the human brain and body.
Walter Ong studied the effects of information technologies on societies and was one of the first to detail the social influences of information technology. Ong (1982) observed, “Writing, print, and computers are all ways of technologizing the word. Once the word is technologized, there is no way to criticize what has been done without the aid of the highest technology possible” (p. 80); once a new technology emerges, it is used to identify the deficiencies of the previous information technology and to judge it adversely in comparison. The conflicts noted when comparing Plato’s and Hobbes’s perceptions of writing, as well as the conflicts we see in classrooms as teachers struggle to adapt to new technologies, are examples of those Ong predicted would be observed when one technology replaces another. He attributed these conflicts to the social and cognitive effects of the new technologies. If the new technology caused no changes, Ong reasoned, there would be no conflict. “Neutral” is the term used to describe things and actions that do not change the state of a system. Because there are changes in human cognition and communication associated with information technology, scholars and practitioners refer to the “non-neutrality of technology” to capture technology’s active effects on human interactions.
Not all scholars have recognized the non-neutrality of information technologies, however. For much of the 20th century, the discoveries necessary to design and develop computer and information technologies were made by researchers who perceived technology as a pipeline for accessing information. For these information theorists, the experience of using information was the same regardless of the technology used to deliver it. In his seminal 1945 article “As We May Think,” Vannevar Bush suggested computers were going to improve the efficiency of communication, and he even predicted the invention of the memex (a device that would operate much as the Internet does), but he did not predict any changes in how humans learn with the arrival of digital computers.
More recently, scholars have continued to develop the concept of the non-neutrality of technology, adding to Ong’s (1982) observations. The phenomenon can be observed at three levels: the structure and function of individuals’ brains are affected by the information technology they experience, especially through their adolescent years; the characteristics of humans’ social organizations are affected by information technology; and a society’s norms are influenced by the nature of the information technology available.
Effects on People
Brains and the sense organs sending signals into the brains are used by humans to perceive the world and to react and respond to it. Neuroscience researchers are elucidating the nature and details of neural changes when we learn, as well as the details of how memories are recalled. Neuroscience is basic research, so the discoveries are not immediately useful or relevant to educators (Antonenko, van Gog, & Paas, 2014), but discoveries are clearly contributing to educators’ understanding of how the environment and its information technology influence developing brains.
Neuroscience has confirmed that the brain is somewhat modular, so different parts of the brain are active when it is processing different types of information (Antonenko, Paas, Grabner, & van Gog, 2010). Since the 1990s, studies have confirmed the dual coding theory (Clark & Paivio, 1991); this theory posits that information presented as text and information presented as images are processed in different parts of the brain. There is further evidence that information presented in video format is processed in a third area of the brain (Gerlič & Jaušovec, 1999). There is also evidence that five hours of exposure to information on screens can change the areas in the brain that are used to process information (Small & Vorgan, 2008).
In addition to affecting brain structure and function, the information technology to which one is exposed affects his or her behavior. Those born since about 1990—those who entered school about when the World Wide Web arrived in schools—have been labeled the iGeneration (Rosen, 2010), and these generations have been widely studied by many research groups (van Dijk, 2012; Montgomery, 2007; Palfrey & Gasser, 2016; Tapscott, 2009). While each research group attributes slightly different characteristics of these generations to the influences of digital information, there are several observations upon which they seem to concur:
• Individuals in these generations have a proclivity to use digital technology and they consume vast amounts of media.
• These individuals tend to create content and share details of their lives online.
• They are heavy users of social media and they use it to establish and control relationships.
While young people have always consumed large amounts of media (especially recorded music and television), the tendency to create digital content and share it over social media, along with the availability of vast amounts of digital content from other providers, is a new aspect of media associated with digital technologies (Rideout, Foehr, & Roberts, 2010). The sharing that older generations perceive as excessive (and disconcerting) but that the iGenerations perceive as natural is a conflict that can reasonably be interpreted as another example of those Ong (1982) attributed to the non-neutrality of technology.
It is also clear that individuals in the iGenerations are actively learning when online, and interests and friendships motivate this learning. Indeed, Ito and her colleagues (2010) suggested that youth are developing greater expertise in learning in the digital landscape than adults. For perhaps the first time in human history, there is an information technology skill inversion, as individuals in the younger generations appear to be more skilled than the older generations in using the dominant information technology. This led Ito (2010) to conclude, “Given the centrality of youth-defined agendas in [interest and friendship-driven learning], the challenge is to build roles for productive adult participation that respect youth expertise, autonomy, and initiative” (pp. 340-1).
Ample research supports the conclusion that brains change depending on the information technology, and research also suggests that humans adapt their behavior to the nature of the information they encounter. Mark Deuze (2006), a media and journalism researcher, concluded digital media demands that we participate in the creation of media as we consume it, that we remediate digital information as we become responsible for navigating and assessing the vast information landscape, and that we discover and invent new and unintended uses of information and technologies through bricolage. In these ways, we live in a media landscape that is much more participatory than the print-dominated landscape of previous generations. We see, as well, that information technology affects both the nature of human brains and the nature of human behaviors.
In her 2017 book, iGen, psychologist Jean Twenge attributed a number of trends observed in younger people born after 1995 to their extreme access to digital devices and social networks. This generation appears to be delaying driving, romantic relationships, and other adult activities compared to previous generations; many report never having attended a party without adults present by the time they graduated from high school. Twenge also attributes greater levels of depression and other concerning mental health trends in this generation to their use of digital devices. She concludes, “The devices they hold in their hands have both extended their childhoods and isolated them from true human interaction” (p. 312).
Effects on Organizations
Humans, we know, are social creatures; the organizations and associations they form are an important part of life in the 21st century. Students leave school to join organizations and businesses after they graduate, and the success of schools is determined by the degree to which graduates are able to function in those organizations. In the same way that individuals in digital landscapes are more active creators and consumers of information than individuals in print landscapes, organizations are becoming more flexible and dynamic in both their internal organizational structures and management practices and in the nature of their interactions with clients and customers.
Olumuyiwa Asaolu (2006), a scholar in industrial and information engineering, applied the label “Fordist (Old)” to organizations that are structured in a manner that reflects industrial technologies. These organizations tend to consume energy to produce standardized products using standardized methods, and they tend to rely on individuals with specialized skills who are managed through hierarchical systems. Asaolu concludes Fordist (Old) organizations are being replaced with those he labels “ICT (New)” which reflect modern information technology. These organizations use information to create customized services through flexible and innovative products. These organizations leverage broad skills held by employees whose work is managed through horizontal structures.
Among the factors contributing to the replacement of Fordist (Old) organizations with ICT (New) organizations is the rapid evolution of IT and the global communication that it supports (Miller, 2011). This creates new problems and new opportunities for organizations, and those that adopt ICT (New) characteristics appear more able than Fordist (Old) organizations to adapt to those opportunities that require innovative solutions. The assets and social norms that support innovation are self-creating and self-supporting, and they develop organically within ICT (New) organizations. Fordist (Old) organizations tend to be highly controlled by management, but innovative thinking can be neither imposed nor mandated, nor can it be standardized; thus, Fordist organizations are at a disadvantage in situations where flexibility, innovation, and other ICT (New) approaches are necessary.
Manuel Castells, a sociologist who has held positions in both North America and Europe, has studied the wide-ranging effects of computer networks on society, especially on economic organizations. Commenting on the role of digital information and computer networks in the rejuvenation of many businesses and industries late in the 20th century, Castells (1996) noted, “Technological innovation and organizational change, focusing on flexibility and adaptability, were absolutely critical in ensuring the speed and efficiency of restructuring” (p. 19). Castells goes on to argue that human cognitive power to process abstract symbols, which is much enhanced by digital electronic computers, is the basis for our capacity as a species to survive. Further, he posits that while technology is largely shaped by social influences, “the availability of new communication networks and information systems prepared the ground for” (p. 53) new organizations and social structures.
Effects on Society
The effects of information technology on human life extend to society-wide characteristics as well. In preliterate cultures, communication is communal and loud (at least audible). For Plato to teach his students, they needed to be together and to speak and listen. Writing and print allowed communication to be solitary (writers and readers need not be together in time and space) and silent (with practice, reading can be done inside one’s head, and writing is largely silent except for our writing tools and our attempts to break writers’ block by talking to ourselves). The manner in which social norms and values are remembered and interpreted in preliterate cultures is dynamic (updated through communal storytelling), and decisions in preliterate cultures are likely to depend on the specific circumstances of a situation rather than on reference to an abstract concept.
Once writing arrives in a society, ideas can be stored in a more permanent manner than they can be in preliterate cultures; abstract ideas enter the culture, which allows money, law and evidence, sacred books, and monotheistic religions to emerge. Further, those who have greater skill in reading and writing tend to have greater political and economic power than those with lesser skills, and children are excluded from much of the information life in a literate society until they become readers and writers. The marginalization of individuals and groups based on lack of communication skills is largely absent from preliterate societies.
In the 20th century, electronic media entered the popular culture in the form of radio, movies, and television. These media again changed the nature of information creation and consumption in society where they were available. Compared to print media, radio and television tended to be consumed in isolation but at the same time (we watched alone, but everyone consumed the broadcast at the same time); this pattern is changing as digital video becomes more popular. While many legal documents are printed, electronic media has come to dominate almost every aspect of economic, political, and social life.
Perhaps the greatest change in information use in societies with digital electronic media compared to earlier electronic media is the degree to which individuals can participate in the global media. With the arrival of the World Wide Web, then web 2.0 technologies that afford users the ability to easily publish content (including multimedia content) to world-wide audiences, the nature of users’ interaction with information became more participatory. This access to publishing has contributed to the evolution of many traditional institutions, including journalism. The British Broadcasting Corporation (BBC), for example, has been active both in encouraging responsible reporting by amateur journalists and in developing formal processes for including information from amateur journalists in its reports (Belair-Gagnon, 2016).
Palfrey and Gasser (2016) used the term semiotic democracy to describe the effects of participatory content creation on society. They observed, “any citizen with the skills, time, and access to digital technologies to do so may reinterpret and reshape the stories of the day” (p. 233). Of course, this can threaten social and governmental institutions, and they observed that in times of social and political instability, governments can and do take the step of restricting or preventing citizens from accessing and participating in the technologies that make the semiotic democracy possible. Yochai Benkler (2016), a professor at Yale Law School, observed access to computers and information technology extends throughout society and he claimed, “the change brought about by the networked information environment is deep. It is structural. It goes to the very foundations of how liberal markets and liberal democracies have coevolved for almost two centuries” (p. 1).
School as we understand it is not a new invention; for generations, adults have created and sustained schools for a wide range of purposes. High schools both provide access to sophisticated and specialized curriculum and keep large numbers of able-bodied individuals out of the work force. Organizational structures and management practices are also articulated to meet a wide range of purposes. Summer breaks allow children to return to agrarian work when it is most needed (even in the 21st century, when family farms have largely disappeared from the landscape). In these ways, and many others, we see how schools reflect the societies and cultures that support them. In some instances, the structures and practices remain after the need for them has disappeared. The slow rate of change in schooling is yet another example of the conflicts that characterize the times when one technology replaces another.
For convenience, we can mark the 21st century as the historical moment when digital information and information technologies replaced print and ICT (New) organizations replaced Fordist (Old) organizations. Certainly, this is an artificial and blurred boundary, but for our purposes it is illustrative. We can reasonably expect this boundary is also marked by a transition in schools; presumably 21st century schools replaced 20th century schools, as different skills and knowledge are needed to engage with new information technologies and to participate and succeed in new organizations.
The change to education for ICT (New) organizations and society is not complete. Indeed, as the third decade of the 21st century approaches, we observe two clearly different and competing approaches to education. One seeks to preserve and continue education for print and Fordist (Old) organizations, and the other seeks to more clearly reflect ICT (New) organizations and to educate for new literacies (Limbu, 2017). Just as writing replaced orality over Plato’s protests, we can predict that 21st century skills and schools will replace 20th century schools; the nature of schools as the transition continues, and after it is complete, is the focus of this section.
Nominal Change in Schools
In the 21st century, several major political efforts have sought to influence educational policy at a broad level; No Child Left Behind (NCLB) (2002) and the Common Core State Standards (CCSS) (National Governors Association Center for Best Practices, Council of Chief State School Officers, 2010) are two that attained national influence in the United States. Advocates for each indicated the effort would revise curriculum, instruction, and assessment for the 21st century. The publishers of tests used as part of CCSS claim they are valid (McCall, 2016), but those claims appear unverified by scholarly research, as do advocates’ claims that the constructs and measures used in these efforts are valid and reliable. Both NCLB and CCSS appear to be grounded in three assumptions about teaching and learning that dominated in the 20th century but that appear to be unsupported by, and even contradicted by, the discoveries of the learning sciences in recent decades:
• The curriculum (what students should learn) is well-known and accurately reflects the skills and knowledge students need. The reality is that what represents knowledge changes rapidly, and it is impossible to predict exactly which skills or knowledge will actually be necessary for students.
• Educators know with certainty and clarity how to transfer the curriculum into students’ minds. Cognitive science is elucidating the details of how humans learn and the environmental factors relevant to learning in ways unavailable to earlier generations of teachers.
• Tests are an accurate and reliable measure of what students have learned. Useful assessments and evaluations of learning will be predictive; performance on those tasks will indicate the student’s ability to use the information and skills in other settings. Most tests lack this predictive ability.
Sawyer (2008) referred to education grounded in known curriculum and tests as the Standard Model of teaching and observed it had been widely adopted by societies with industrialized economies in the 20th century. Ronald Gallimore and Roland Tharp (1992), educational psychologists who studied conditions in classrooms that influence learning, referred to this type of teaching as a recitation script and observed it is “the predominant experience of American school children. Sitting silently, students read assigned texts, complete 'ditto' sheets, and take tests. On those rare occasions when they are encouraged to speak, teachers control the topics and participation” (p. 175).
The Standard Model has been increasingly challenged by the observation that innovation economies were replacing industrial economies and that the Standard Model no longer gave students the opportunities to develop necessary skills. Helen Abadzi (2016), a scholar from the University of Texas at Arlington, observed, “many documents state that the traditional education has failed, and it is time for a new paradigm [that] teach[es] a combination of basic, new and ‘soft’ skills [to] emphasize critical thinking, communication, and leadership” (p. 256). The Standard Model was also challenged by the observation that other models were more closely aligned with discoveries regarding learning emerging from the cognitive sciences. Deeper learning (Bransford, Brown, and Cocking, 2000) has emerged as a model that recognizes the social and emotional aspects of learning as well as the importance of activity and engagement, including reflection, in learning. Despite the finding that the Standard Model does not result in students developing the skills and knowledge they need for post-industrial economies, Sawyer (2008) noted, “Many of today’s schools are not teaching the deep knowledge that underlies innovative activity. But it is not just a matter of asking teachers to teach different curriculum, because the structural configurations of the Standard Model make it very hard to create learning environments that result in deeper learning” (pp. 48-49).
It is reasonable to conclude the Standard Model of education is based on assumptions about human learning that have been overturned and that it is a less effective pedagogy for developing necessary skills than the alternatives. Despite this, it seems the Standard Model has been reinforced by the policies established under NCLB and CCSS.
For many observers (including the taxpayers who fund public schools, the politicians who seek to control schools, the parents who send their children to schools, and even many who work in schools), what constitutes “school” is grounded in their experience with the Standard Model. Their concepts are clear and unquestioned and perceived to be objective and shared by all, so proposals that would produce different experiences for students are often shunned. This factor contributes to “institutional inertia,” with schooling continuing as it has despite evidence that it must change.
Also slowing the replacement of the Standard Model is the fact that schools have become highly politicized institutions. In 2006, futurists Alvin Toffler and Heidi Toffler captured the relative speed of change throughout society with this scale: businesses appear to be adopting new information technologies and adapting to them at 100 miles per hour, with other organizations (such as professional organizations and non-governmental organizations) moving almost as quickly; families in the United States are moving at 60 miles per hour. Schools and other bureaucracies are moving at a mere 25 miles per hour. Political parties and legislative processes are moving even slower, at three miles per hour in the Tofflers’ estimate. If we accept this scale, then it is reasonable to assume that schools would be adopting and adapting to new technologies faster than they are if it were not for the slowing caused by political actions (such as No Child Left Behind legislation and the Common Core State Standards initiative) undertaken to “fix schools.”
The inconsistencies between the schools we need for innovation economies and the instruction provided under the Standard Model are yet another example of the conflict that Walter Ong (1982) described when technologies are replaced and that has become familiar in this chapter. Publicly funded and compulsory education for all is widely perceived to be the foundation of economic growth and effective governance in democracies. In the United States, education has become a government service influenced by increasingly centralized authorities as the population grew and became more urbanized and mobile (McCluskey, 2007). Especially since 1983 and the publication of A Nation at Risk (The National Commission on Excellence in Education, 1983), education has become an issue in national elections at a level that was not observed previously. This has placed education firmly among the institutions that innovate at the slowest rate.
Alternatives to the Standard Model
Many educational scholars and practitioners have recognized the inadequacy of the Standard Model in recent decades, and they have proposed alternative models of education. The (incomplete) list of alternatives includes authentic learning (Herrington, Reeves, & Oliver, 2015), natural learning (Caine & Caine, 2011), project-based learning (Krajcik & Shin, 2014), problem-based learning (Lu, Bridges, & Hmelo-Silver, 2014), complex learning (Kirschner & van Merriënboer, 2008), learner-centered instruction (Stefaniak, 2015), situated learning (Lave & Wenger, 1991), and cognitive apprenticeships (Dennen & Burner, 2008). While advocates for these different methods vary in the specifics of how they would implement schooling, there are several assumptions about teaching, learning, and testing that they share and that differentiate these approaches from the Standard Model of schooling:
• The curriculum is assumed to be more dynamic and vastly greater than can be articulated in standards and “covered” by lectures and similar instruction, so these methods tend to include an increased role for activities in which students learn how to learn. It is reasoned that students who gain experience learning with independence are better prepared for the rapidly changing and unpredictable knowledge and situations that characterize ICT (New) organizations and digital cultures.
• Because the curriculum will vary and because their curriculum will, in part, be self-defined, teachers cannot accurately predict students’ paths through the curriculum. Further, new discoveries are likely to invalidate some of the extant curriculum before it can be completed. For these reasons, what students learn may vary.
• In these models, learning is understood to be a social activity as much as it is a cognitive activity. Meaningful social engagement between students and teachers and other experts and among students are purposefully designed into the learning activities.
• How learning is demonstrated varies. In these models, learning is best demonstrated through performance on authentic projects and performances, while schools based in the Standard Model tend to rely on test scores as the primary measure of learning.
• Finally, metacognition—knowing how and what one knows—is a goal of learning in the alternatives to the Standard Model.
The boundaries between schooling when the Standard Model dominated and 21st century schools are not as clear as I have presented. Activities, lessons, courses, and curriculum frameworks that promote 21st century skills have been available for decades (Dede, 2010). Student-centered learning, constructivist methods, and other alternatives to the Standard Model of teaching have been described and promoted by scholars and practitioners, but those methods have largely been marginalized and have not been the focus of the wide-scale efforts to define educational policy. It is anticipated that 21st century pedagogies will replace the Standard Model, and the Standard Model will become the marginalized pedagogy. Daniel Pink (2006) can be credited with popularizing the term “necessary, but not sufficient” to describe the linear skills that are well-developed through the Standard Model. Scholars have continued to elucidate many trends, especially economic trends, that necessitate the curriculum be revised both to provide linear skills and to prepare students to be flexible and innovative. The nature of the workers needed in institutions that reflect the ICT (New) organization illustrates these changes. Johannessen (2008) concluded, “the workforce will shift away from employees who have traditional, practical training backgrounds and towards an ever-increasing number of employees who have had a higher education and are theoretically well equipped. Such workers will be capable of working in a problem definition and problem-oriented manner and possess skills for both analysis and synthesis” (p. 407).
Richard Susskind and Daniel Susskind (2015), scholars and policy analysts from the United Kingdom, observed workers “will need to learn to communicate differently, to gain mastery of the data in their disciplines, to establish working relationships with their machines, and to diversify” (p. 114). The factors contributing to the changing nature of educational outcomes include globalism and technology-driven automation, as well as the availability of increasingly sophisticated information technology. Levy and Murnane (2004) cited evidence of four trends that are changing the nature of the tasks that will be necessary for workers (see Figure \(1\)):
• Complex communication, which requires one to interpret sophisticated information and articulate clear explanations, is becoming one of the most important skills for workers.
• Expert thinking, which requires one to create solutions to unique and unfamiliar problems, is becoming increasingly important (but less so than complex communication).
• Routine manual labor is decreasing in importance as robots and other tools automate the easy-to-repeat physical tasks common in the industrial economy.
• Routine cognitive work is decreasing even more in importance as algorithms perform simple analyses and restatement of information, and draw conclusions based on quantitative data.
While most see clear connections between the skills Levy and Murnane identify and the curriculum common in the Standard Model, there is also increasing need to diversify the skills that students develop during their school careers; these emerging skills are motivated by factors other than economic ones as well. In his 2010 book Wisdom, Stephen Hall, an award-winning writer about science and society, posed the question, “How do we make complex, complicated decisions and life choices, and what makes some of these choices so clearly wise that we all intuitively recognize them as a moment, however brief, of human wisdom?” (p. 6). Hall recounted the story of a scholar who has become a leader in the field of wisdom studies, and who concluded,
that wisdom represented a state of mind beyond standard metrics of intelligence, and this revelation forced him to see inherent failures in the educational system, and the philosophy of educational testing, and the degree to which too narrow measures like IQ tests fail miserably to predict lifetime satisfaction (p. 245).
Hall concluded wisdom is grounded in eight characteristics which are generally ignored in the Standard Model, but that are more important than traditional measures of knowledge when solving complex problems: emotional regulation, knowing what’s important, moral reasoning, compassion, humility, altruism, patience, and dealing with uncertainty.
While advocates for the Common Core State Standards and other standards argue that curriculum is known and measurable via a test, the scholars whose work is summarized in this section do not appear to agree. They appear to concur, rather, with Douglas Thomas and John Seely Brown who concluded full participation in the digital society necessitates individuals have the capacity for lifelong learning as workers and citizens will be adapting to new technologies and new information in perpetuity. They propose schooling be focused by, “a new culture of learning the point [of which] is to embrace what we don’t know, come up with better questions about it, and continue asking those questions in order to learn more about it” (Thomas and Brown, 2011, p. 38).
Clearly, schools must adapt curriculum and instruction to reflect the needs of citizens in the digital society, and there have been localized efforts to make these changes. The rate at which schools are changing appears to be far behind the speed at which other organizations are changing, but schools are adapting faster than the political organizations that govern schools. The design of the learning environments necessitated by a world in which traditional skills and knowledge are still important but no longer sufficient has important implications for IT managers, as both of these models of teaching are dependent on IT that is appropriately and properly configured.
It was concluded in the previous section that the Standard Model is being replaced; however, it is anticipated that both the Standard Model and its alternatives will organize teaching and learning into the future. It follows that IT managers will be responsible for ensuring that the systems they create and sustain have the capacity and functionality to support both types of teaching and learning. The Standard Model of education is dominated by instructionism, in which an expert (the teacher) defines the content to be studied and the manner and order in which it is going to be experienced, and finally determines the extent to which each student has learned it. While some associate instructionism with the learners being passive recipients of information, Burton, Moore, and Magliaro (2004) suggested instruction can provide a structure for approaching a complex body of knowledge and also for maintaining knowledge. Reif (2008) identified several factors that make instruction effective, including articulating very clear goals; the inclusion of explicit and implicit guidance, support, and feedback that can be individualized; and providing timely and appropriate feedback. Instruction is amenable to deconstruction into several components: goals, a predictable path through known content, and clear determination of outcomes, along with appropriate feedback. These are all clearly definable and knowable before the instruction begins; thus, instruction is amenable to technology-based delivery. Reif (2008) concluded, “Computers are well suited for instructional purposes because they provide a dynamic medium that can not only convey information in visual and auditory forms, but can also flexibly interact with users so as to respond to their actions” (p. 428). Instructionism has been used to create a variety of digital educational materials.
This list includes arcade-style games designed to teach mathematics skills, spelling words, typing skills, and similar lessons; intelligent tutoring systems for individualized lessons (e.g. test preparation systems); and simulations, which are designed to make the instructional activity more context-rich than arcade-style games typically are. Bowers (1988) criticized these designs because “students encounter a one-dimensional world of objective data” (p. 34), and he concluded the prejudices and biases of the programmers exert strong and perhaps unintentional effects on the lessons learned. When they are aware of these limitations and take steps to minimize their influence on the materials, instructional designers can create very effective instructional materials (for appropriate purposes) by the judicious application of technologies.
Efficacious IT managers will build systems that can be used to deliver instruction by ensuring:
• Students can access appropriate instructional materials including both locally installed programs and web-based media;
• Teachers have resources for creating instructional materials;
• Instructional materials are accessible to those students who have disabilities;
• Teachers have access to easy-to-use systems for managing instructional resources they create and that they find. This can include both local copies of files and online repositories.
One of the reports that emerged from the comprehensive John D. and Catherine T. MacArthur Foundation Digital Media project was The Future of Learning Institutions in a Digital Age (Davidson & Goldberg, 2009). In that book, 10 characteristics of learning in the digital age are proposed (see Table 1). The authors observed, “Digital technologies increasingly enable and encourage social networking and interactive, collaborative engagements, including those implicating and impacting learning” (Davidson & Goldberg, 2009, p. 24). They further confirmed a commitment to developing alternatives to the Standard Model of education, noting learners will become more participatory in virtual environments “where they share ideas, comment on one another’s projects, and plan, design, implement, advance, or simply discuss their practices, goals or ideas together” (p. 12). As the pillars are more completely implemented in a community, the implications for teaching and learning as well as professional learning become more pressing.
In the milieu of researchers’ and practitioners’ perceptions of the trends emerging in digital education, there is evidence to support Gros’ (2016) observation, “The ubiquity of technology calls for a shift away from low-level use of technology, such as drilling, practice and looking up information. Rather, smart education encourages ‘high-level’ uses of technology, utilising it as a ‘mind tool’ or ‘intellectual partner’ for creativity, collaboration, and multimedia productivity” (p. 6). Such systems are built for interoperability and seamless connection of devices (to facilitate use of multiple devices), allow for adaptable configuration for users’ preferences, and engage teachers and learners in natural engagement (Zhu, Hu, & Riezebos, 2016).
To address the pillars of digital learning, efficacious IT managers in schools will revise how the full range of IT infrastructure, practices, and policies is instantiated. The tools must accommodate interaction and the creation of information as much as they accommodate access to and consumption of information. The teaching and learning that students experience will likewise be flexible and interactive in a manner it was not when the Standard Model dominated. The design of these learning environments necessitates insightful and attentive school leaders as well.
06: Conclusion
Given the conflict that accompanies the arrival of new information technologies, it is reasonable to expect scholars and practitioners to have struggled to define the appropriate role for the devices in teaching and learning in today’s schools. Some have adopted a stance similar to the one Plato adopted towards writing; they have avoided it entirely. Others are quick to adopt every new innovation. Between those extremes we find the more reasoned observers and practitioners who advocate for purposeful and thoughtful approaches to using information technology in classrooms. Todd Oppenheimer (2003), who generally argues for avoiding technology in his book The Flickering Mind, observed that computers “can be effective when they are used only as needed, when students are at the right age for them, and when they are kept in their place” (p. 394). David Jonassen, a scholar who studied educational technology for decades and was recognized as a leader in the field, differentiated active learning, in which technology is used to “engage learners, in representing, manipulating, and reflecting on what they know,” from passive learning, in which students use technology for “reproducing what someone tells them” (2000, p. 10).
We know schools are designed for the purpose of enabling and encouraging young people to fully engage with information technology so they can participate in the economic, political, and cultural life of society. The curriculum comprises those skills and that knowledge that is necessary for this goal. It is expected the complexity of society’s IT will be reflected in the strategic goals articulated by school leaders and the curriculum and instruction designed to achieve those goals. For the digital generations, the process of revising curriculum and instruction is further complicated by the changing nature of information technology in the society. Plato, we saw previously, argued against the incorporation of reading and writing into schools.
One’s perception of changing information technology depends on the direction from which one perceives the change. Older generations, who grew up using the information technology that is being replaced, tend to perceive the arrival of new IT and the associated transition in schooling in a negative manner. For them, using new information technology degrades human cognition, and students are not being taught the skills and knowledge that they value and that were necessary for their generation. Younger generations perceive the emerging information technology as natural to their future, and they tend to adopt the technologies and become comfortable with them. The challenge for efficacious IT managers is to negotiate the many factors that affect the transition.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Define what an information system is by identifying its major components;
• Describe the basic history of information systems;
• Discuss the role and purpose of information systems; and
• Explain why IT matters.
01: What Is an Information System
Introduction
In the course of a given day, think of the activities you do to entertain yourself, deliver a work product, purchase something, or interact with your family, friends, or co-workers. How many times do you snap a picture, post a text, or email your friends? Can you even remember the number of times you used a search engine in a day? Consider what you are using to do these activities. Most likely, many, if not all, of these activities involve technologies such as a smartphone, a laptop, a website, or an app. These activities are also enabled by the Wi-Fi networks that surround us everywhere, be it on the school’s campus, in the workplace, at the airport, or even in cars. You are already a user of one or more information systems, using one or more electronic devices and different software or apps, and connecting globally through different networks. Welcome to the world of information systems!
Information systems affect our personal lives, careers, society, and the global economy, evolving to change businesses and the way we live. To prepare yourself to participate in developing or using information systems, building a business, or advancing your career, you must be familiar with an information system’s fundamental concepts.
Defining Information Systems
Students from diverse disciplines, including business, are often required to take a course to learn about information systems. Let’s start with the term Information System (IS). What comes to your mind? Computers? Devices? Apps? Here are a few definitions from a few sources:
• “Information Systems is an academic study of systems with a specific reference to information and the complementary networks of hardware and software that people and organizations use to collect, filter, process, create and also distribute data.” (Wikipedia Information Systems, 2020)
• “Information systems are combinations of hardware, software, and telecommunications networks that people build and use to collect, create, and distribute useful data, typically in organizational settings.” (Valacich et al., 2010)
• “Information systems are interrelated components working together to collect, process, store, and disseminate information to support decision making, coordination, control, analysis, and visualization in an organization.” (Laudon et al., 2012)
They sound similar, yet there is something different in each as well. In fact, these authors define the terms from these perspectives:
• What are the components that make up an information system? How do they work together?
• What is the role of IS in providing value to businesses and to individuals in solving their needs?
Let’s examine each perspective.
1.02: Identifying the Components of Information Systems
Let’s use your experience as users to understand the above definitions. For example, let’s say you work for a small business, and your manager asks you to track the expenses of the business and send her the list so that she can see where the money has gone. You decide to use a spreadsheet on your laptop to enter the list of expenses you have collected and then email the spreadsheet to her once you are done. You will need a laptop, a running spreadsheet program, a connection to email, and an internet connection, and all these components must work together perfectly! In essence, you are using the interrelated components of an IS to collect, process, store, and disseminate information. The role of this IS is to enable you to create new value (i.e., the expense tracker) and for your manager to use the information you disseminate “to support decision making, coordination, control, analysis, and visualization in an organization” (Laudon et al., 2011). You and your manager have met your goals through the processes you created to capture the data, calculate it, and check it, and through how and when your manager receives the new information you created to make her decisions in managing her company.
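The expense-tracker scenario above can be sketched in a few lines of Python. The expense records and category names here are hypothetical; the point is that raw data (individual expenses) is processed into new information (totals per category) that the manager can act on.

```python
# A minimal sketch of the expense-tracker scenario: hypothetical raw
# expense records (data) are aggregated into per-category totals
# (information) that a manager can use to see where the money has gone.
expenses = [
    ("office supplies", 120.50),
    ("travel", 340.00),
    ("office supplies", 35.25),
    ("software", 99.00),
]

totals = {}
for category, amount in expenses:
    totals[category] = totals.get(category, 0.0) + amount

for category, total in sorted(totals.items()):
    print(f"{category}: ${total:.2f}")
```

A spreadsheet performs the same aggregation with formulas instead of code; the underlying process of turning raw data into information is identical.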
Hence, information systems can be viewed as having six major components: hardware, software, network communications, data, people, and processes.
Each has a specific role, and all roles must work together to have a working information system. In this book, we group the first four components as Technology. People and Processes are the two components that deliver value to organizations in how they use the collection of technologies to meet specific organizations’ goals.
Technology
Technology can be thought of as the application of scientific knowledge for practical purposes. From the invention of the wheel to the harnessing of electricity for artificial lighting, technology is a part of our lives in so many ways that we tend to take it for granted. As discussed before, the first four components of information systems – hardware, software, network communication, and data – are all technologies that must integrate well together. Each of these will get its own chapter and a much lengthier discussion, but we will take a moment here to introduce them and give you a big picture of what each component is and how they work together.
Hardware
Hardware represents the physical components of an information system. Some can be seen or touched easily, while others reside inside a device that can only be seen by opening up the device's case. Keyboards, mice, pens, disk drives, iPads, printers, and flash drives are all visible examples. Computer chips, motherboards, and internal memory chips are the hardware that resides inside a computer case and not usually visible from the outside. Chapter 2 will go into more details to discuss how they function and work together. For example, users use a keyboard to enter data or use a pen to draw a picture.
Software
Software is a set of instructions that tell the hardware what to do. Software is not tangible – it cannot be touched. Programmers create software programs by following a specific process to enter a list of instructions that tell the hardware what to do. There are several categories of software, with the two main categories being operating-system and application software.
Operating system software provides an interface between the hardware and the applications, shielding programmers from the specifics of the underlying hardware. Chapter 3 will discuss software more thoroughly. Here are a few examples:
Examples of Operating Systems and Applications by Devices

Devices | Operating Systems | Applications
------- | ----------------- | ------------
Desktop | Apple macOS, Microsoft Windows | Adobe Photoshop, Microsoft Excel, Google Maps
Mobile | Google Android, Apple iOS | Texting, Google Maps
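One way to see the operating system’s role as an interface is that an application can read and write files without knowing anything about the disk hardware underneath. The sketch below makes this concrete in Python; the file name is an arbitrary choice for the example.

```python
import os
import tempfile

# The application asks the OS to create, write, and read a file through
# high-level calls; the OS handles the disk hardware and file system.
path = os.path.join(tempfile.gettempdir(), "is_demo.txt")

with open(path, "w") as f:      # OS opens the file for writing
    f.write("hello, operating system")

with open(path) as f:           # OS opens the same file for reading
    contents = f.read()

os.remove(path)                 # OS deletes the file
print(contents)
```

The same program runs unchanged on Windows, macOS, or Linux because each operating system exposes the same abstraction, even though the underlying hardware differs.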
Data
The third component is data. You can think of data as a collection of non-disputable raw facts. For example, your first name, driver’s license number, the city you live in, a picture of your pet, a clip of your voice, and your phone number are all pieces of raw data. You can see or hear your data, but by themselves they don’t give you any meaning beyond the data itself. For example, you can read a person’s driver’s license number and recognize it as a driver’s license number, but you know nothing else about that person. Raw facts like these are typically what an IS collects from you or other sources. However, once these raw data are aggregated, indexed, and organized in a logical fashion using software such as a spreadsheet or a database, the collection presents new information and insights that a single raw fact can’t convey. The earlier example of collecting all expenses (raw data) to create an expense tracker (derived information) is a good illustration. In fact, all of the definitions presented at the beginning of this chapter focused on how information systems manage data. Organizations collect all kinds of data, process and organize them in some fashion, and use them to make decisions. These decisions can then be analyzed for their effectiveness, and the organization can be improved. Chapter 4 will focus on data and databases and their uses in organizations.
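As a small illustration of raw data becoming information, consider a list of (name, city) records; the names and cities below are made up for the example. No single record says anything about where most people live, but organizing the records reveals it.

```python
# Hypothetical raw facts: each record is just a name and a city.
records = [
    ("Ana", "Austin"),
    ("Ben", "Boston"),
    ("Cho", "Austin"),
    ("Dev", "Austin"),
]

# Organize the raw data: group names by city.
by_city = {}
for name, city in records:
    by_city.setdefault(city, []).append(name)

# New information derived from the aggregation: the most common city.
largest = max(by_city, key=lambda city: len(by_city[city]))
print(largest, len(by_city[largest]))
```

A database performs this grouping at much larger scale (e.g., with a GROUP BY query), but the principle is the same: organized data yields insights that isolated facts cannot.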
Networking Communication
The components of hardware, software, and data have long been considered the core technology of information systems. However, networking communication is another component of an IS that some believe should be in its own category. An information system can exist without the ability to communicate; for instance, the first personal computers were stand-alone machines that did not have access to the Internet. Information systems, however, have evolved since then. For example, operating systems and hardware were once confined to desktops; today, operating systems include mobile OSes, and hardware includes many devices besides desktops. It is now extremely rare to find a computing device that does not connect to another device or a network. Chapter 5 will go into this topic in greater detail.
People
People built computers for people to use. This means that there are many different categories of people involved in the development and management of information systems that help organizations create value and improve productivity, such as:
• Users: these are the people who actually use an IS to perform a job function or task. Examples include a student using a spreadsheet or a word processing program.
• Technical Developers: these are the people who actually create the technologies used to build an information system. Examples include a computer chip engineer, a software programmer, and an application programmer.
• Business Professionals: these are the CEOs, owners, managers, entrepreneurs, and employees who use IS to start or expand their businesses and to perform job functions such as accounting, marketing, sales, human resources, and customer support. Examples include famous CEOs such as Jeff Bezos of Amazon, Steve Jobs of Apple, Bill Gates of Microsoft, and Marc Benioff of Salesforce.
These are just some of the key people; more details will be covered in Chapters 9 and 10.
Process
The last component of information systems is Process. A business process is a series of steps undertaken to achieve a desired outcome or goal. Businesses have to continually innovate to either create more revenues through new products and services that fulfill customers’ needs or to find cost-saving opportunities in the ways they run their companies. Simply automating activities using technology is not enough. Information systems are becoming more and more integrated with organizational processes to deliver value in revenue-generating and cost-saving activities that can give companies competitive advantages over their competitors. Specialized standards or processes such as “business process reengineering,” “business process management,” “enterprise resource planning,” and “customer relationship management” all have to do with the continued improvement of these business procedures and the integration of technology with them to improve internal efficiencies and to gain a deeper understanding of customers’ needs. Businesses hoping to gain an advantage over their competitors are highly focused on this component of information systems. We will discuss processes in Chapter 8.
Reference
Laudon, K. C., & Laudon, J. P. (2011). Management information systems. Upper Saddle River, NJ: Prentice-Hall.
Now that we have explored the different components of information systems (IS), we need to turn our attention to the role IS plays in an organization. From the definitions above, we see that these components collect, store, organize, and distribute data throughout the organization, which is the first half of the definition. We can now ask what these components actually do for an organization, to address the second part of the definition of an IS: “to support decision making, coordination, control, analysis, and visualization in an organization.” Earlier, we discussed how IS collects raw data and organizes them to create new information to aid in the running of a business. To help management make informed critical decisions, IS has to take the information further by transforming it into organizational knowledge. In fact, we could say that one of the roles of IS is to take data, turn it into information, and then transform that into organizational knowledge. As technology has developed and the business world has become more data-driven, IS’s role has grown from a tool for running an organization efficiently to a strategic tool for competitive advantage. To get a full appreciation of IS’s role, we will review how IS has changed over the years to create new opportunities for businesses and address evolving human needs.
The Early Years (1930s-1950s)
We may say that computer history came into public view in the 1930s, when George Stibitz developed the “Model K” Adder on his kitchen table using telephone company relays, proving the viability of the concept of Boolean logic, a fundamental concept in the design of computers. From 1939 on, we saw the evolution from special-purpose equipment to general-purpose computers by companies that are now iconic in the computing industry, such as Hewlett-Packard, whose first product, the HP200A Audio Oscillator, was used in Disney’s Fantasia. The 1940s gave us the first program run on a stored-program computer, through the work of John von Neumann, Frederic Williams, Tom Kilburn, and Geoff Tootill. The 1950s gave us the first commercial computer, the UNIVAC 1, made by Remington Rand and delivered to the US Census Bureau; it weighed 29,000 pounds and cost more than $1,000,000 each. (Computer History Museum, n.d.)
Software evolved along with the hardware evolution. Grace Hopper completed A-0, the program that allowed programmers to enter instructions to hardware with English-like words on the UNIVAC 1. With the arrival of general and commercial computers, we entered what is now referred to as the mainframe era. (Computer History Museum, n.d.)
The Mainframe Era
From the late 1950s through the 1960s, computers were seen as a way to perform calculations more efficiently. These first business computers were room-sized monsters, with several refrigerator-sized machines linked together. Their primary work was to organize and store large volumes of information that were tedious to manage by hand. More companies were founded to expand the computer hardware and software industry, such as Digital Equipment Corporation (DEC), RCA, and IBM. Only large businesses, universities, and government agencies could afford them, and they required a crew of specialized personnel and specialized facilities to install and run.
IBM introduced the System/360 with five models. It was hailed as a major milestone in computing history because it targeted business customers in addition to existing scientific ones and, equally important, because all models could run the same software (Computer History Museum, n.d.). These models could serve up to hundreds of users at a time through a technique called time-sharing. Typical functions included scientific calculations and accounting, under the broader umbrella of “data processing.”
In the late 1960s, the Manufacturing Resources Planning (MRP) systems were introduced. This software, running on a mainframe computer, gave companies the ability to manage the manufacturing process, making it more efficient. From tracking inventory to creating bills of materials to scheduling production, the MRP systems (and later the MRP II systems) gave more businesses a reason to integrate computing into their processes. IBM became the dominant mainframe company. Nicknamed “Big Blue,” the company became synonymous with business computing. Continued software improvement and the availability of cheaper hardware eventually brought mainframe computers (and their little sibling, the minicomputer) into most large businesses.
The PC Revolution
The 1970s ushered in an era of growth in both making computers smaller (microcomputers) and making big machines faster (supercomputers). In 1975, the first microcomputer was announced on the cover of Popular Electronics: the Altair 8800, invented by Ed Roberts, who coined the term “personal computer.” The Altair sold for $297 to $395, came with 256 bytes of memory, and licensed Bill Gates and Paul Allen’s BASIC programming language. Its immediate popularity sparked the imagination of entrepreneurs everywhere, and there were quickly dozens of companies making these “personal computers.” Though at first just a niche product for computer hobbyists, improvements in usability and the availability of practical software led to growing sales. The most prominent of these early personal computer makers was a little company known as Apple Computer, headed by Steve Jobs and Steve Wozniak, with the hugely successful “Apple II.” (Computer History Museum, n.d.)
Hardware companies such as Intel and Motorola continued to introduce faster and faster microprocessors (i.e., computer chips). Not wanting to be left out of the revolution, in 1981 IBM (teaming with a little company called Microsoft for their operating system software) released their own version of the personal computer, called the “PC.” Businesses, which had used IBM mainframes for years to run their operations, finally had the permission they needed to bring personal computers into their companies, and the IBM PC took off. The personal computer was named Time magazine’s “Machine of the Year” for 1982.
Because of the IBM PC’s open architecture, it was easy for other companies to copy or “clone” it. During the 1980s, many new computer companies sprang up, offering less expensive versions of the PC. This drove prices down and spurred innovation. Microsoft developed its Windows operating system and made the PC even easier to use. Common uses for the PC during this period included word processing, spreadsheets, and databases. These early PCs were not connected to any network; for the most part, they stood alone as islands of innovation within the larger organization. The price of PCs became more and more affordable with new companies such as Dell.
Today, we continue to see PCs' miniaturization into a new range of hardware devices such as laptops, Apple iPhone, Amazon Kindle, Google Nest, and the Apple Watch. Not only did the computers become smaller, but they also became faster and more powerful; the big computers, in turn, evolved into supercomputers, with IBM Inc. and Cray Inc. among the leading vendors.
Client-Server
By the mid-1980s, businesses began to see the need to connect their computers to collaborate and share resources. This networking architecture was referred to as “client-server” because users would log in to the local area network (LAN) from their PC (the “client”) by connecting to a powerful computer called a “server,” which would then grant them rights to different resources on the network (such as shared file areas and a printer). Software companies began developing applications that allowed multiple users to access the same data at the same time. This evolved into software applications for communicating, with the first prevalent use of electronic mail appearing at this time.
This networking and data sharing all stayed within the confines of each business, for the most part. While there was sharing of electronic data between companies, this was a very specialized function. Computers were now seen as tools to collaborate internally within an organization. In fact, these computers' networks were becoming so powerful that they were replacing many of the functions previously performed by the larger mainframe computers at a fraction of the cost.
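The client-server pattern described above can be sketched with a local TCP socket in Python: a server process waits for a connection, and a client connects and requests a resource (here just a greeting). The host, port, and message format are arbitrary choices for this sketch, not part of any real protocol.

```python
import socket
import threading

def serve_once(server_sock):
    """Accept one client, read its request, and send a reply (the 'server')."""
    conn, _addr = server_sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(("hello, " + request).encode())

# The server binds to a local address; port 0 lets the OS pick a free port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server_sock,))
t.start()

# The "client" connects to the server and asks for a resource.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"client-1")
    reply = client.recv(1024).decode()

t.join()
server_sock.close()
print(reply)
```

In a real LAN of the era, the server would grant access to shared files or a printer rather than echo a greeting, but the request/response structure is the same.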
During this era, the first Enterprise Resource Planning (ERP) systems were developed and run on the client-server architecture. An ERP system is a software application with a centralized database that can be used to run a company’s entire business. With separate modules for accounting, finance, inventory, human resources, and many more, ERP systems, with Germany’s SAP leading the way, represented the state of the art in information systems integration. We will discuss ERP systems as part of the chapter on Process (Chapter 9).
The Internet, World Wide Web, and Web 1.0
Networking communication, along with software technologies, evolved through all of these periods: the modem in the 1940s, the clickable link in the 1950s, email as the “killer app” with its now-iconic “@” and mobile networks in the 1970s, and the early rise of online communities through companies such as AOL in the early 1980s. First developed in 1969 as part of a US government-funded project known as ARPANET, the Internet was confined to use by universities, government agencies, and researchers for many years. The complicated way of using the Internet made it unsuitable for mainstream use in business.
One exception to this was the ability to expand electronic mail outside the confines of a single organization. While the first email messages on the Internet were sent in the early 1970s, companies who wanted to expand their LAN-based email started hooking up to the Internet in the 1980s. Companies began connecting their internal networks to the Internet to communicate between their employees and employees at other companies. With these early Internet connections, the computer truly began to evolve from a computational device to a communications device.
In 1989, Tim Berners-Lee of the CERN laboratory developed an application, a browser, that gave a simpler and more intuitive graphical user interface to existing technologies such as the clickable link, making the ability to share and locate vast amounts of information easily available to the masses in addition to researchers (CERN, n.d.). This is what we call the World Wide Web. This invention became the launching point of the growth of the Internet as a way for businesses to share information about themselves and for consumers to find them easily.
As web browsers and Internet connections became the norm, companies worldwide rushed to grab domain names and create websites. Even individuals would create personal websites to post pictures to share with friends and family. For the first time, users could create content on their own and join the global economy.
In 1991, the National Science Foundation, which governed how the Internet was used, lifted restrictions on its commercial use. These policy changes ushered in new companies establishing new e-commerce industries such as eBay and Amazon.com. The fast expansion of the digital marketplace led to the dot-com boom through the late 1990s and then the dot-com bust in 2000. An important outcome of the Internet boom period was that thousands of miles of Internet connections were laid around the world during that time. The world became truly “wired” heading into the new millennium, ushering in the era of globalization, which we will discuss in Chapter 11.
The digital world also became a more dangerous place as more companies and users were connected globally. Once slowly propagated through the sharing of computer disks, computer viruses and worms could now grow with tremendous speed via the Internet and the proliferation of new hardware devices for personal or home use. Operating and application software had to evolve to defend against this threat, and a whole new industry of computer and Internet security arose as the threats kept increasing and became more sophisticated. We will study information security in Chapter 6.
Web 2.0 and e-Commerce
Perhaps you noticed that in the Web 1.0 period, users and companies could create content but could not interact with each other directly on a website. Despite the dot-com bust, technologies continued to evolve, driven by customers’ increasing desire to personalize their experience and engage directly with businesses.
Websites became interactive; instead of just visiting a site to find out about a business and purchase its products, customers could now interact with companies directly. Most profoundly, customers could also interact with each other, sharing their experiences without undue influence from companies, or even buying things directly from each other. This new type of interactive website, where users did not have to know how to create a web page or do any programming to put information online, became known as Web 2.0.
Web 2.0 is exemplified by blogging, social networking, bartering, purchasing, and posting interactive comments on many websites. This new Web 2.0 world, in which online interaction became expected, had a big impact on many businesses and even whole industries. Some industries, such as bookstores, found themselves relegated to niche status. Others, such as video rental chains and travel agencies, began going out of business as online technologies replaced them. This process of technology replacing an intermediary in a transaction is called disintermediation. One such successful company is Amazon, which has disintermediated intermediaries in many industries and is one of the leading e-commerce websites.
As the world became more connected, new questions arose. Should access to the Internet be considered a right? What is legal to copy or share on the Internet? How can companies keep the data users give them private? Are there laws that need to be updated or created to protect people’s data, including children’s data? Policymakers are still catching up with technology advances, even though many laws have been updated or created. Ethical issues surrounding information systems will be covered in Chapter 12.
The Post PC and Web 2.0 World
After thirty years as the primary computing device used in most businesses, sales of the PC are now beginning to decline as tablets and smartphones are taking off. Just as the mainframe before it, the PC will continue to play a key role in business but will no longer be the primary way people interact or do business. The limited storage and processing power of these mobile devices is being offset by a move to “cloud” computing, which allows for storage, sharing, and backup of the information on a massive scale.
Users continue to push for faster and smaller computing devices. Historically, microcomputers displaced mainframes, and laptops (almost) displaced desktops. We now see smartphones and tablets displacing laptops in many situations. Will hardware vendors hit physical limits due to the small size of devices? Is this the beginning of a new era of computing paradigms such as quantum computing, a trendy topic that we will cover in more detail in Chapter 13?
Enormous amounts of content have been generated by users in the Web 2.0 world, and businesses have been monetizing this user-generated content without sharing the profits. How will the role of users change in this new world? Will users want a share of those profits? Will users finally have ownership of their own data? What new knowledge can be created from the massive user-generated and business-generated content?
Below is a chart showing the evolution of some of the advances in information systems to date.
The Eras of Business Computing

| Era | Hardware | Operating System | Applications |
|---|---|---|---|
| Early years (1930s) | Model K, HP’s test equipment, Calculator, UNIVAC 1 | | The first computer program was written to run and store on a computer. |
| Mainframe (1970s) | Terminals connected to a mainframe computer, IBM System 360 | Time-sharing (TSO) on MVS | Custom-written MRP software |
| PC (mid-1980s) | IBM PC or compatible, sometimes connected to the mainframe computer via an expansion card; Intel microprocessor | MS-DOS | WordPerfect, Lotus 1-2-3 |
| Client-Server (late 1980s to early 1990s) | IBM PC “clone” on a Novell network; Apple’s Apple-1 | Windows for Workgroups, MacOS | Microsoft Word, Microsoft Excel, email |
| World Wide Web (mid-1990s to early 2000s) | IBM PC “clone” connected to the company intranet | Windows XP, macOS | Microsoft Office, Internet Explorer |
| Web 2.0 (mid-2000s to present) | Laptop connected to company Wi-Fi; smartphones | Windows 7, Linux, macOS | Microsoft Office, Firefox, social media platforms, blogging, search, texting |
| Post-Web 2.0 (today and beyond) | Apple iPad, robots, Fitbit, watch, Kindle, Nest, cars, drones | iOS, Android, Windows 10 | Mobile-friendly websites, more mobile apps |
eCommerce
We seem to be at a tipping point of many technological advances that have come of age. The miniaturization of cameras and sensors, faster and smaller processors, and software advances in fields such as artificial intelligence, combined with the availability of massive data, have begun to bring in new types of computing devices, small and big, that can do things that were unheard of in the last four decades. A robot the size of a fly is already in limited use, and driverless cars are in the test-drive phase in a few cities, among other advances that meet customers’ needs today and anticipate new ones for the future. “Where do we go from here?” is a conversation you are now part of as you go through the rest of the chapters. We may not know exactly what the future will look like, but we can reasonably assume that information systems will touch almost every aspect of our personal lives, our work lives, and local and global social norms. Are you prepared to be an even more sophisticated user? Are you preparing yourself to be competitive in your chosen field? Are there new norms to be embraced?
It has long been assumed that implementing information systems will, in and of itself, bring a business competitive advantage, especially through cost savings or improved efficiency. The more a company invests in information systems, the more efficiency management expects.
In 2003, Nicholas Carr wrote an article, “IT Doesn’t Matter,” in the Harvard Business Review (Carr, 2003) and raised the idea that information technology has become just a commodity. Instead of viewing technology as an investment that will make a company stand out, it should be seen as something like electricity: It should be managed to reduce costs, ensure that it is always running, and be as risk-free as possible.
This article was both hailed and scorned at the time. While it is true that IT should be managed to reduce costs and improve efficiencies, history has shown that many companies have leveraged information systems to build wildly successful businesses, such as Amazon, Apple, and Walmart. Chapter 7 will discuss competitive advantage in greater detail.
Sidebar: Walmart Uses Information Systems to Become the World’s Leading Retailer
Walmart is the world’s largest retailer, with gross revenue of \$534.6 billion and a market capitalization of \$366.7 billion in the fiscal year that ended on January 31, 2020 (source: Yahoo Finance, 7/13/2020). Walmart currently has approximately 11,500 stores and e-commerce websites in 27 countries, serving nearly 265 million customers every week worldwide (Wal-Mart, 2020). Walmart’s rise to prominence is due in no small part to its use of information systems.
One of the keys to this success was the implementation of Retail Link, a supply-chain management system. This system, unique when initially implemented in the mid-1980s, allowed Walmart’s suppliers to directly access the inventory levels and sales information of their products at any of Walmart’s more than ten thousand stores. Using Retail Link, suppliers can analyze how well their products are selling at one or more Walmart stores, with a range of reporting options. Further, Walmart requires the suppliers to use Retail Link to manage their own inventory levels. If a supplier feels that their products are selling out too quickly, they can use Retail Link to petition Walmart to raise their inventory levels. This has essentially allowed Walmart to “hire” thousands of product managers, all of whom have a vested interest in managing products. This revolutionary approach to managing inventory has allowed Walmart to continue driving prices down and responding to market forces quickly.
However, Amazon’s fast rise as the leader in e-commerce has given Walmart a formidable new competitor. Walmart continues to combine innovative information technology with its physical stores to compete with Amazon, locking the two in a fierce battle for the title of largest retailer. Given Walmart’s tremendous market presence, any technology it requires its suppliers to implement immediately becomes a business standard.
1.05: Summary
In this chapter, you have been introduced to the concept of information systems. We have reviewed several definitions, focusing on information systems components: technology (hardware, software, data, networking communication), people, and process. We have reviewed the evolution of the technology and how the business use of information systems has evolved over the years, from the use of large mainframe computers for number crunching, through the introduction of the PC and networks for business applications, all the way to the era of mobile computing for both business and personal applications. During each of these phases, innovations in technology allowed businesses and individuals to integrate technology more deeply.
It is a foregone conclusion that almost all, if not all, companies use information systems. Yet history has also shown that some companies are very successful and some are failures. By the time you complete this book, you should understand the important role of IS in improving efficiencies and know how to leverage IS to develop a sustained competitive advantage for any company, or for your own career.
1.06: Study Questions
1. What are the components that make up an information system?
2. List three examples of information system hardware.
3. Which component of information systems includes Microsoft Windows?
4. What is application software?
5. Describe the different roles people play in information systems.
6. Describe what a process is and its purpose.
7. What was invented first, the personal computer or the Internet?
8. Which came first, the Internet or the World Wide Web?
9. What helped make the Internet usable for the masses, not just researchers?
10. What does it mean to say we are in a “post-PC and Web 2.0 world”?
11. What is Carr’s main argument about information technology? Is it true then, and is it true now?
Exercises
1. Suppose you had to explain to a member of your family or one of your closest friends the concept of an information system. How would you define it? Write a one-paragraph description in your own words that you feel would best describe an information system to your friends or family.
2. Of the six components of an information system (hardware, software, data, network communications, people, process), which do you think is the most important to a business organization's success? Write a one-paragraph answer to this question that includes an example from your personal experience to support your answer.
3. We all interact with various information systems every day: at the grocery store, at work, at school, even in our cars (at least some of us). Make a list of the different information systems you interact with every day. See if you can identify the technologies, people, and processes involved in making these systems work.
4. Do you agree that we are in a post-Web 2.0 stage in the evolution of information systems? Some people argue that we will always need the personal computer, but it will not be the primary device used to manipulate information. Others think that a whole new era of mobile, biological, or even neurological computing is coming. Do some original research and make your prediction about what business computing will look like in the next three to five years.
5. The Walmart case study introduced you to how that company used information systems to become the world’s leading retailer. Walmart has continued to innovate and is still looked to as a leader in the use of technology. Do some original research and write a one-page report detailing a new technology that Walmart has recently implemented or is pioneering to stay competitive.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
In this chapter, we discuss hardware and how it works. We will look at different types of computing devices and computer parts, learn how they interact, and consider the effect of the commoditization of these devices.
• 2.1: Introduction
Discuss hardware, the first of the six components: hardware, software, data, communication, people, and process
• 2.2: Tour of a Digital Device
Examining the personal computer and its hardware components.
• 2.3: Sidebar- Moore’s Law
Experts provide insight into whether Moore's Law is still viable today.
• 2.4: Removable Media
Advancement of technology in removable media.
• 2.5: Other Computing Devices
As personal computer technologies have become more commonplace, many of the components have been integrated into other devices that previously were purely mechanical. We have also seen an evolution in what defines a computer. Ever since the invention of the personal computer, users have clamored for a way to carry them around. Here we will examine several types of devices that represent the latest trends in personal computing.
• 2.6: Summary
Gaining an understanding of information systems focusing on consumer devices such as the personal computer, tablet, and Bluetooth.
• 2.7: Study Questions
Test your knowledge of information systems hardware.
02: Hardware
Information systems are made up of six components: hardware, software, data, communication, people, and process. In this chapter, we will review hardware. Hardware consists of the tangible, physical parts a computing device needs to function. We will review these components, learn how they work, and discuss some current trends.
As stated above, computer hardware encompasses digital devices that you can physically touch. This includes devices such as the following:
• desktop computers
• laptop computers
• mobile phones
• smartphones
• smartwatches
• tablet computers
• e-readers
• storage devices, such as flash drives
• input devices, such as keyboards, mice, and scanners
• output devices such as 3d printers and speakers
Besides these more traditional computer hardware devices, many items that were once not considered digital devices are now becoming computerized. Digital technologies are now being integrated into many everyday objects, so the days of a device being labeled categorically as computer hardware may be ending. Examples of these types of digital devices include automobiles and even soft-drink dispensers. In this chapter, we will also explore digital devices, beginning with defining the term.
Digital Devices
A digital device is any equipment containing a computer or microcontroller; such devices include smartphones, watches, and tablets. A digital device processes electronic signals that represent either a one (“on”) or a zero (“off”). The presence of an electronic signal represents the “on” state; its absence represents the “off” state. Each one or zero is referred to as a bit (a contraction of binary digit); a group of eight bits is a byte. The first personal computers could process 8 bits of data at once; modern PCs process 64 bits at a time. The more bits a device can process at once, the faster it can work with information.
Sidebar: Understanding Binary
As you know, the system of numbering we are most familiar with is base-ten numbering. In base-ten numbering, each column in the number represents a power of ten, with the far-right column representing 10^0 (ones), the next column from the right representing 10^1 (tens), then 10^2 (hundreds), then 10^3 (thousands), etc. For example, the number 1010 in decimal represents: (1 x 1000) + (0 x 100) + (1 x 10) + (0 x 1).
Computers use the base-two numbering system, also known as binary. In this system, each column in the number represents a power of two, with the far-right column representing 2^0 (ones), the next column from the right representing 2^1 (twos), then 2^2 (fours), then 2^3 (eights), etc. For example, the number 1010 in binary represents (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1). In base ten, this evaluates to 10.
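The conversion described above can be checked with a short program; this sketch sums the powers of two by hand and compares the result with Python’s built-in base-2 conversion.

```python
# Convert the binary string "1010" to decimal by summing powers of two,
# exactly as described above: (1 x 8) + (0 x 4) + (1 x 2) + (0 x 1).
bits = "1010"
value = sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))
print(value)         # 10
print(int(bits, 2))  # Python's built-in base-2 conversion agrees: 10
```

Changing `bits` to any other string of ones and zeros converts that binary number the same way.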
As digital devices' capacities grew, new terms were developed to identify the capacities of processors, memory, and disk storage space. Prefixes were applied to the word byte to represent different orders of magnitude. Since these are digital specifications, the prefixes were originally meant to represent multiples of 1024 (which is 2^10) but have more recently been rounded to mean multiples of 1000.
The following table contains a listing of Binary prefixes:
Binary Prefixes and Examples

| Prefix | Represents | Example |
|---|---|---|
| kilo | one thousand | kilobyte = one thousand bytes |
| mega | one million | megabyte = one million bytes |
| giga | one billion | gigabyte = one billion bytes |
| tera | one trillion | terabyte = one trillion bytes |
| peta | one quadrillion | petabyte = one quadrillion bytes |
| exa | one quintillion | exabyte = one quintillion bytes |
| zetta | one sextillion | zettabyte = one sextillion bytes |
| yotta | one septillion | yottabyte = one septillion bytes |
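The gap between the original power-of-two meaning of these prefixes and the rounded decimal meaning can be computed directly; this short sketch compares the first few.

```python
# A "kilobyte" was originally 1024 bytes (the binary multiple 2**10) but is
# now commonly rounded to 1000 bytes; the gap between the two meanings
# grows at each order of magnitude.
sizes = {}
for n, prefix in enumerate(["kilo", "mega", "giga", "tera"], start=1):
    decimal = 1000 ** n  # the rounded, decimal meaning
    binary = 1024 ** n   # the original, power-of-two meaning
    sizes[prefix] = (decimal, binary)
    print(f"{prefix}byte: {decimal:>16,} vs {binary:>16,} bytes "
          f"({(binary - decimal) / decimal:.1%} larger)")
```

This is why a drive sold as “1 TB” (10^12 bytes) shows up as roughly 931 “gigabytes” in operating systems that count in multiples of 1024.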
We will begin with the personal computers, which consist of the same basic components:
• Motherboard (circuit board)
• Central Processing Unit ( CPU)
• Random Access Memory (RAM)
• Video Card
• Power Supply
• Hard Drive (HDD)
• Solid-State Drive (SSD)
• Optical Drive (DVD/CD drive)
• Card Reader (SD/SDHC, CF, etc.)
It also turns out that almost every digital device uses the same set of components, so examining the personal computer will give us insight into the structure of various digital devices. So let’s take a “tour” of a personal computer and see what makes them function.
Processing Data: The CPU
As stated in the previous section, most computing devices share a similar architecture. The core of this architecture is the central processing unit, or CPU. The CPU can be thought of as the “brain,” or main processor, of the device. It carries out the commands sent to it by software and returns results to be acted upon. The earliest CPUs were large circuit boards made up of hundreds of wires, with limited functionality. Today, a CPU generally fits on a single chip and can perform a large variety of functions. There are two primary manufacturers of CPUs for personal computers: Intel and Advanced Micro Devices (AMD).
The clock speed of a CPU regulates the rate at which instructions are executed and synchronizes the various computer components. The faster the clock, the more instructions the CPU can execute per second. Clock speed is measured in hertz; a hertz is defined as one cycle per second. Using the binary prefixes mentioned above, a kilohertz (abbreviated kHz) is one thousand cycles per second, a megahertz (MHz) is one million cycles per second, and a gigahertz (GHz) is one billion cycles per second. The CPU’s processing power has increased at an amazing rate (see the sidebar about Moore’s Law). Besides faster clock speeds, many CPU chips now contain multiple processors per chip.
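To make these prefixes concrete, the following sketch works out how many cycles a 3 GHz clock completes per second and how long a single cycle lasts. (Real CPUs can execute more than one instruction per cycle, so this is an illustration, not a benchmark.)

```python
# One hertz is one cycle per second, so a 3 GHz clock completes
# three billion cycles each second, and a single cycle lasts about
# a third of a nanosecond.
ghz = 3
cycles_per_second = ghz * 10 ** 9  # 3 GHz = 3,000,000,000 Hz
nanoseconds_per_cycle = 1 / cycles_per_second * 10 ** 9
print(cycles_per_second)                # 3000000000
print(round(nanoseconds_per_cycle, 3))  # 0.333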
A multi-core processor is a single integrated circuit that contains multiple processing units, commonly known as cores. The cores run and read instructions at the same time, increasing speed. A CPU with two cores is known as dual-core, and one with four cores as quad-core; multiple cores increase a computer’s processing power by providing the capability of multiple CPUs.
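The idea of splitting work across cores can be sketched with Python’s standard library. This is a minimal illustration, not a benchmark: `os.cpu_count()` reports how many cores the OS exposes, and the work (a summation) is divided into chunks that run concurrently. Note that for CPU-bound pure-Python work, a `ProcessPoolExecutor` is needed for true parallelism, because CPython’s GIL lets only one thread execute Python code at a time.

```python
# Sketch: dividing one task into per-core chunks and running them
# concurrently, then combining the partial results.
import os
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    """Sum the integers in [lo, hi)."""
    return sum(range(lo, hi))

cores = os.cpu_count() or 1
n = 100_000
step = -(-n // cores)  # ceiling division: chunk size per core
chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
with ThreadPoolExecutor(max_workers=cores) as pool:
    total = sum(pool.map(lambda c: partial_sum(*c), chunks))
print(total == sum(range(n)))  # True: chunked result matches the serial sum
```

The same chunking pattern works with `ProcessPoolExecutor` when each chunk is genuinely compute-heavy.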
When computers run with multiple cores, additional heat is generated, which is why manufacturers mount fans on top of the CPU. Macs have a built-in fail-safe: the computer shuts itself down to avoid damage when the temperature climbs too rapidly. Smartphones are also susceptible to overheating. As our devices get smaller, many parts are placed in a compact area, and in turn, devices generate more heat. Running many apps on your phone simultaneously also increases the phone's heat, which is why it is important to close applications after use.
A graphics processing unit (GPU) is an electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer for output. Devices that use GPUs include personal computers, smartphones, and game consoles. Nvidia is one of the powerhouse companies that manufacture graphics cards and has been a leader in GPU chips; one of its most popular product lines is the GeForce, which is integrated into laptops, PCs, and virtual reality processors. Nvidia has also expanded its GPU chip market with product lines such as Tesla, Quadro, and GRID.
Technology is advancing, and computers are getting faster every year. Consumers often are unsure of buying today’s smartphone, tablet, or PC model because a more advanced model will be out shortly, leaving them with regret that it won’t be the most advanced anymore. Gordon Moore, the co-founder of Fairchild and one of Intel's founders, recognized this phenomenon in 1965, noting that microprocessor transistor counts had been doubling every year. His insight eventually evolved into Moore’s Law, which states that the number of transistors on a chip will double every two years. (Moore, 1965). This has been generalized into the concept that computing power will double every two years for the same price point. Another way of looking at this is to think that the same computing power price will be cut in half every two years. Though many have predicted its demise, Moore’s Law has held for over fifty-five years. Technology is changing with innovation in design and AI support. Experts now believe,
“The name of the game now is the technology may not be traditional silicon transistors; now it may be quantum computing, which is a different structure and nano-biotechnology, which consists of proteins and enzymes that are organic."
Therefore it is likely in the next five years, the emphasis of Moore’s Law will change. Experts believe that Moore’s law will not be able to go on indefinitely because of physical limits on shrinking the size of components on a chip continually. Currently, the billions of transistors on chips are not visible to the naked eye. It is thought that if Moore’s law were to continue through 2050, engineers would have to design transistors from components that are smaller than a single atom of hydrogen.
Figure: Moore’s Law, the empirical observation that the number of transistors in a dense integrated circuit doubles about every two years.
There will come a point, someday, when we hit the apex of processing technology, as shrinking circuits further becomes exponentially more expensive. Moore’s Law will then be outdated by new technological innovation. Engineers will continue to strive for new ways to increase performance (Moore, 1965).
Motherboard
The motherboard is the main circuit board and hub of the computer. It connects the computer’s inputs and components and controls the power received by the hard drive and video card. The motherboard is a crucial component, housing the central processing unit (CPU), memory, and input and output connectors. The CPU, memory, and storage components, among other things, all connect to the motherboard. Motherboards come in different shapes and sizes, and their prices vary with complexity, which depends on how compact or expandable the computer is designed to be. Most modern motherboards have many integrated components, such as video and sound processing, that once required separate components.
Random-Access Memory
When a computer starts up, it begins to load information from the hard disk into its working memory. Your computer's short-term memory is called random-access memory (RAM), which transfers data much faster than the hard disk. Any program you run on the computer is loaded into RAM for processing. RAM is a high-speed component that stores all the information the computer needs for current and near-future use, and accessing it is much quicker than retrieving data from the hard drive. For a computer to work effectively, a minimal amount of RAM must be installed. In most cases, adding more RAM allows the computer to run faster, because a larger RAM reduces the number of times data must be fetched from the disk. Another characteristic of RAM is that it is volatile, or temporary, memory: it can store data only as long as it receives power, and when the computer is turned off, any data stored in RAM is lost. This is why we need hard drives and SSDs, which hold information when the system is shut off.
RAM is generally installed in a personal computer by using a dual-inline memory module (DIMM). The type of DIMM accepted into a computer is dependent upon the motherboard. As described by Moore’s Law, the amount of memory and speeds of DIMMs have increased dramatically over the years.
Hard Disk and Hard Drive
While RAM is used as working memory, the computer also needs a place to store data for the longer term. Most of today’s personal computers use a hard disk for long-term data storage. A hard disk is a disk coated with magnetic material; a hard disk drive, or HDD, is the device that stores data on and retrieves data from that disk. The disk is where data is stored when the computer is turned off and retrieved from when the computer is turned on. The HDD provides lots of storage at an inexpensive cost compared to an SSD.
Solid-State Drives
The solid-state drive (SSD) is a newer generation of storage device that is replacing hard disks. SSDs are much faster because they use flash-based memory: semiconductor chips, not magnetic media, store the data. An embedded processor called a controller reads and writes the data and is an important factor in determining read and write speeds. SSDs are decreasing in price but remain more expensive than HDDs. SSDs have no moving parts, unlike the HDD, which suffers wear and tear from its spinning platters and can break down.
Comparison of SSD vs. HDD
Comparison of Solid State Drives and Hard Disk Drives

| Attribute | SSD (Solid State Drive) | HDD (Hard Disk Drive) |
|---|---|---|
| Power draw / battery life | Less power draw, averages 2–3 watts, resulting in a 30+ minute battery boost. | More power draw, averages 6–7 watts, and therefore uses more battery. |
| Cost | Expensive, roughly \$0.20 per gigabyte (based on buying a 1TB drive). | Only around \$0.03 per gigabyte, very cheap (buying a 4TB model). |
| Capacity | Typically not larger than 1TB for notebook-size drives; 4TB max for desktops. | Typically around 500GB and 2TB maximum for notebook-size drives; 10TB max for desktops. |
| Operating system boot time | Around 10–13 seconds average bootup time. | Around 30–40 seconds average bootup time. |
| Noise | There are no moving parts and, as such, no sound. | Audible clicks and spinning can be heard. |
| Vibration | No vibration, as there are no moving parts. | The spinning of the platters can sometimes result in vibration. |
| Heat produced | Lower power draw and no moving parts, so little heat is produced. | An HDD doesn’t produce much heat, but it will produce measurably more than an SSD due to moving parts and higher power draw. |
| Failure rate | Mean time between failures of 2.0 million hours. | Mean time between failures of 1.5 million hours. |
| File copy / write speed | Generally above 200 MB/s and up to 550 MB/s for cutting-edge drives. | Anywhere from 50–120 MB/s. |
| Encryption | Full Disk Encryption (FDE) supported on some models. | Full Disk Encryption (FDE) supported on some models. |
| File opening speed | Up to 30% faster than HDD. | Slower than SSD. |
| Affected by magnetism? | An SSD is safe from any effects of magnetism. | Magnets can erase data. |
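The copy-speed rows above translate directly into waiting time. This back-of-the-envelope sketch uses illustrative speeds from the table (an SSD at the 500 MB/s high end, an HDD at 100 MB/s) to estimate how long copying a large file takes on each.

```python
# Rough copy-time estimate: file size divided by sustained transfer rate,
# using illustrative speeds from the comparison table above.
file_mb = 5_000    # a 5 GB file (5,000 MB, decimal units)
ssd_mb_per_s = 500
hdd_mb_per_s = 100
ssd_seconds = file_mb / ssd_mb_per_s
hdd_seconds = file_mb / hdd_mb_per_s
print(ssd_seconds)  # 10.0 seconds
print(hdd_seconds)  # 50.0 seconds
```

Real-world times vary with file sizes, fragmentation, and controller overhead, but the relative gap matches the table’s 5x speed difference.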
Reference
Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved 2012-10-18.
Removable Media
Removable storage has changed greatly over the four decades of the PC. CD-ROM drives replaced floppy disks and were themselves replaced by USB (Universal Serial Bus) flash drives, which are now standard on all PCs, with capacities approaching 512 gigabytes. Speeds have also increased, from 480 megabits per second in USB 2.0 to 10 gigabits per second in USB 3.1. USB drives use flash (EEPROM-based) memory. Because USB is a cross-platform technology, it is supported by most operating systems and is used to connect devices such as printers, TVs, and external hard drives; the list goes on. “There are now by one count six billion USB devices in the world.” (Johnson, 2019)
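Note that USB speeds are quoted in bits per second, while file sizes are measured in bytes (8 bits each). Using the figures quoted above, this sketch estimates the theoretical best-case time to move a 1 GB file over USB 2.0 versus USB 3.1, ignoring protocol overhead.

```python
# Bits vs. bytes: divide the file size in bits by the link speed in
# bits per second. These are theoretical maxima; real transfers are
# slower due to protocol overhead and drive limits.
usb2_bits_per_s = 480 * 10 ** 6   # USB 2.0: 480 Mbit/s
usb31_bits_per_s = 10 * 10 ** 9   # USB 3.1: 10 Gbit/s
file_bytes = 1000 * 10 ** 6       # a 1 GB file (decimal units)
usb2_seconds = file_bytes * 8 / usb2_bits_per_s
usb31_seconds = file_bytes * 8 / usb31_bits_per_s
print(round(usb2_seconds, 1))   # 16.7 seconds on USB 2.0
print(usb31_seconds)            # 0.8 seconds on USB 3.1
```

The factor-of-8 conversion is a common source of confusion when comparing advertised link speeds with observed file-copy speeds.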
Network Connection
When personal computers were first developed, they were stand-alone units, which meant that data was brought into or removed from the computer via removable media, such as the floppy disk. Engineers as early as 1965 saw merit in connecting computers to share information with one another; the practice was called networking, and as connections grew to span multiple networks, it became known as inter-networking, now abbreviated to the Internet. In the mid-1980s, organizations began to see the value in connecting computers together via a digital network. Because of this, personal computers needed the ability to connect to these networks. Initially, this was done by adding an expansion card to the computer that enabled the network connection. By the mid-1990s, network ports were standard on most personal computers.
For a personal computer to be useful, it must have channels for receiving input from the user and channels for delivering output to the user. These input and output devices connect to the computer via various connection ports, which generally are part of the motherboard and are accessible outside the computer case. In early personal computers, specific ports were designed for each type of output device. The configuration of these ports has evolved over the years, becoming more and more standardized over time. Today, almost all devices plug into a computer through the use of a USB port. This port type, first introduced in 1996, has increased in its capabilities, both in its data transfer rate and power supplied.
Bluetooth
Besides USB, some input and output devices connect to the computer via a wireless-technology standard called Bluetooth. Bluetooth was first invented in the 1990s and exchanges data over short distances using radio waves.
Bluetooth generally has a range of 100 to 150 feet. It was not until 1999 that Bluetooth reached its first general public users. Two devices communicating with Bluetooth must both have a Bluetooth communication chip installed. Bluetooth uses include pairing your phone to your car and connecting computer keyboards, speakers, headsets, and home security devices, to name just a few.
Input Devices
All personal computers need components that allow the user to input data. Early computers used only a keyboard to let the user enter data or select an item from a menu to run a program. With the advent of the graphical user interface, the mouse became a standard component of a computer. These two components are still the primary input devices for a personal computer, though variations of each have been introduced with varying levels of success over the years. For example, many new devices now use a touch screen as the primary way of entering data. Besides the keyboard and mouse, additional input devices are becoming more common. Scanners allow users to input documents into a computer, either as images or as text. Microphones can be used to record audio or give voice commands. Webcams and other video cameras can be used to record video or participate in a video chat session. The list continues to grow with joysticks used for gaming, digital cameras, and touch screens. Smartwatches are compact computers worn on the wrist; their functionality is similar to a smartphone's, offering mobile apps and Wi-Fi/Bluetooth connectivity. Specialized watches for health and sports enthusiasts have also emerged, offering step counts, heart rate, and blood pressure monitoring; a popular brand is Fitbit.
Output Devices
Output devices are essential as well. The most obvious output device is a display, which visually represents the state of the computer. In some cases, a personal computer can support multiple displays or be connected to larger-format displays such as a projector or large-screen television. Besides displays, other output devices include speakers for audio output and printers for printed output. 3D printers have changed the way we build toys, tools, homes, and even body parts. The process that differentiates 3D printing from regular printing is called additive manufacturing.
Additive manufacturing breaks down an object and builds it layer by layer, making three-dimensional objects.
The most popular material used is plastic, but other materials can be used, such as gold and bio-material, to make human parts such as a nose or ear. The 3D printers have proven themselves in many different industries and have offered an inexpensive route for prototyping.
Sidebar: What Hardware Components Contribute to the Speed of My Computer?
A computer's speed is determined by many elements, some related to hardware and some related to software. In hardware, speed is improved by giving the electrons shorter distances to traverse to complete a circuit. Since the first CPU was created in the early 1970s, engineers have constantly worked to figure out how to shrink these circuits and put more and more circuits onto the same chip. And this work has paid off – the speed of computing devices has been continuously improving ever since.
The hardware components that contribute to a personal computer's speed are the CPU, the motherboard, RAM, and the hard disk. In most cases, these items can be replaced with newer, faster components. In the case of RAM, simply adding more RAM can also speed up the computer.
The table below shows how each of these components contributes to the speed of a computer. Besides upgrading hardware, many changes can be made to the software to enhance the computer's speed.
How Components Impact the Speed of a Computer
• CPU: measured by clock speed, in GHz; the time it takes to complete a circuit. (Memory also affects speed, since the CPU moves information to and from memory while running applications.)
• Motherboard: measured by bus speed, in MHz; how much data can move across the bus simultaneously.
• RAM: measured by data transfer rate, in MB/s; how quickly data can be transferred from memory to the system.
• Hard disk: measured by access time, in ms; the time it takes before the disk can begin transferring data.
• Hard disk: also measured by data transfer rate, in MBit/s; how quickly data can be transferred from the disk to the system.
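To make the units above concrete, here is a small sketch in Python using illustrative values that are not from the text: it converts a CPU clock speed into a cycle time and estimates how long a file transfer takes at a given sustained data-transfer rate.

```python
def cycle_time_ns(clock_ghz):
    """Cycle time in nanoseconds for a given clock speed in GHz.

    A 1 GHz clock completes one billion cycles per second,
    so each cycle takes 1 nanosecond.
    """
    return 1.0 / clock_ghz


def transfer_seconds(file_mb, rate_mb_per_s):
    """Seconds needed to move a file at a sustained transfer rate."""
    return file_mb / rate_mb_per_s


# Illustrative values only: a 3.2 GHz CPU and a 500 MB file on a
# drive sustaining 550 MB/s.
print(cycle_time_ns(3.2))          # ~0.3125 ns per cycle
print(transfer_seconds(500, 550))  # ~0.91 seconds
```

Note that GHz and MHz describe how often something happens per second, while MB/s describes how much data moves per second; the sketch above simply inverts or divides by these rates to get times.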
A personal computer is designed to be a general-purpose device. That is, it can be used to solve many different types of problems. As the technologies of the personal computer have become more commonplace, many of the components have been integrated into other devices that previously were purely mechanical. We have also seen an evolution in what defines a computer. Ever since the invention of the personal computer, users have clamored for a way to carry them around. Here we will examine several types of devices that represent the latest trends in personal computing.
Portable Computers
In 1983, Compaq Computer Corporation developed the first commercially successful portable personal computer. By today’s standards, the Compaq PC was not very portable: weighing in at 28 pounds, this computer was portable only in the most literal sense – it could be carried around. But this was no laptop; the computer was designed like a suitcase, to be lugged around and laid on its side to be used. Besides portability, the Compaq was successful because it was fully compatible with the software being run by the IBM PC, which was the standard for business.
In the years that followed, portable computing continued to improve, giving us laptop and notebook computers. The “luggable” computer has given way to a much lighter clamshell computer that weighs from 4 to 6 pounds and runs on batteries. In fact, the most recent advances in technology give us a new class of laptops that is quickly becoming the standard: these laptops are extremely light and portable and use less power than their larger counterparts. The screens are larger, and the weight of some can be less than three pounds.
The ACER SWIFT 7 is a good example of this. Its specification is:
• CPU: Intel Core i7-7Y75
• Graphics: Intel HD Graphics 615
• RAM: 8GB
• Screen: 14-inch Full HD
• Storage: 256GB SSD
• Weight: 1.179 kg (2.6 pounds)
This is simply amazing!
Finally, as more and more organizations and individuals are moving much of their computing to the Internet or cloud, laptops are being developed that use “the cloud” for all of their data and application storage. These laptops are also extremely light because they have no need for a hard disk at all! A good example of this type of laptop (sometimes called a netbook) is Samsung’s Chromebook.
Smartphones
The first modern-day mobile phone was invented in 1973. Resembling a brick and weighing in at two pounds, it was priced out of reach for most consumers at nearly four thousand dollars. Since then, mobile phones have become smaller and less expensive; today, mobile phones are a modern convenience available to all levels of society. As mobile phones evolved, they became more like small walking computers. These smartphones have many of the same characteristics as a personal computer, such as an operating system and memory. The first smartphone was the IBM Simon, introduced in 1994.
In January of 2007, Apple introduced the iPhone. Its ease of use and intuitive interface made it an immediate success and solidified the future of smartphones. Running on an operating system called iOS, the iPhone was really a small computer with a touch-screen interface. In 2008, the first Android phone was released, with similar functionality.
Consider the following data regarding mobile computing:
• There are 4.57 billion global mobile Internet users as of April 2020. (Statista, 2020)
• It is expected that by 2024, approximately 187.5 million U.S. users will have made at least one purchase via a web browser or mobile app on their mobile device. (Clement, 2020)
• In 2020, U.S. mobile retail revenues were expected to amount to 339.03 billion U.S. dollars. (Clement, 2019)
• The average order value for online orders placed on smartphones in the second quarter of 2019 was $86.47, while the average order value for orders placed on tablets was $96.88. (Clement, 2020)
• As of 2020, there are 4.5 billion active social media users in the world; as of July 2019, an estimated 3.46 billion of them were actively using their mobile devices for social media-related activities. (Clement, 2020)
• 90 percent of the time spent on mobile devices is spent in apps. (Saccomani, 2019)
• Mobile traffic was responsible for 51.9 percent of Internet traffic in the first quarter of 2020, compared to 50.3 percent in 2017. (Clement, 2020)
• While mobile accounts for more total traffic than desktop, desktop engagement was 46.51 percent in 2020. (Petrov, 2020)
• In 2020, mobile traffic was at 51.3 percent and desktop engagement at 48.7 percent; over the years, users have been moving away from the desktop. (Broadband Search, 2020)
Tablet Computers
The tablet is larger than a smartphone and smaller than a notebook. A tablet uses a touch screen as its primary input and is small enough and light enough to be easily transported. Tablets generally have no keyboard and are self-contained inside a rectangular case. Apple set the standard for tablet computing with the introduction of the iPad in 2010 using iOS, the operating system of the iPhone. After the success of the iPad, computer manufacturers began to develop new tablets that utilized operating systems designed for mobile devices, such as Android.
Global market share for tablets has changed since the early days of Apple’s dominance. As of June 2020, the iPad holds about 58.66% of the market, Samsung 21.73%, and Amazon 5.55% (Statista: E-commerce, 2020). The tablet's market popularity has been steadily declining in recent years.
Integrated Computing and Internet of Things (IoT)
Along with advances in computers themselves, computing technology is being integrated into many everyday products such as security systems, thermostats, refrigerators, airplanes, cars, electronic appliances, lights in the household, alarm clocks, speaker systems, vending machines, and commercial environments, just to name a few. Integrated computing technology has enhanced the capabilities of these devices and adds capabilities into our everyday lives, thanks in part to IoT.
These three short videos highlight some of the latest ways computing technologies are being integrated into everyday products through the Internet of Things (IoT):
• A video about the Internet of Things [video file: 3:21 minutes] Closed Captioned
• A video about how to update your home to a smart home [video file: 2:01 minutes] Closed Captioned
• A video that takes you for a drive in Tesla’s autopilot mode [video file: 10:04 minutes] Closed Captioned
The Commoditization of the Personal Computer
Since the late 1970s, the personal computer has gone from a technical marvel to part of our everyday lives; it has also become a commodity. The PC has become a commodity in the sense that there is very little differentiation between computers, and the primary factor that controls their sale is their price. Hundreds of manufacturers all over the world now create parts for personal computers. Dozens of companies buy these parts and assemble the computers. As commodities, there are essentially no differences between computers made by these different companies. Profit margins for personal computers are razor-thin, leading hardware developers to find the lowest-cost manufacturing.
Apple has differentiated itself from the pack and achieved a competitive advantage in a challenging market. The cost of their product is significantly higher, but you are buying a high-quality product and design. Apple designs both the hardware as well as their software in-house. The hardware and software design of the Mac works seamlessly with its other products such as the iPhone and iPad. The engineers at Apple are constantly updating software apps and updating hardware in order to remain a leader in the PC world.
This is an interesting article on the newest innovation for smartphones (Stuff, 2020).
Smartphone shipments are forecast to grow from 304.7 million units in 2010 to an estimated 1.484 billion units in 2023 (Statista, 2019).
The Problem of Electronic Waste
Personal computers have become a common fixture in households since the early eighties. The average life span of many of these devices is between three and five years. Recycling has become a hot subject for companies that want to be viewed by consumers as green companies. Consumers are demanding that companies make a commitment to the environment. Worldwide, almost 45 million tons of electronics were tossed out in 2016. Out of that staggering amount of electronic waste, only 20% was recycled in some shape or form. The remaining 80% made its way to a more environmentally damaging end at the landfill. Mobile phones are now available in even the remotest parts of the world and, after a few years of use, they are discarded. Where does this electronic debris end up?
Many developing nations accept this e-waste. Abroad, these recyclers re-purpose parts and extract minerals, gold, and cobalt from these devices. These dumps have become health hazards for those living near them.
Proper safe practices are ignored, and whatever waste is not usable is dumped improperly. Consumers are trying to change this common practice by demanding that companies be transparent about how they address e-waste. Though many manufacturers have made strides in using materials that can be recycled, electronic waste is a problem with which we must all deal.
In 2006 the Green Electronics Council launched the Electronic Product Environmental Assessment Tool (EPEAT). This tool helps purchasers of electronics evaluate the environmental impact of products, ranking them at gold, silver, and bronze levels. When the program first began, three manufacturers of PCs and electronic equipment participated with 60 products. In 2007, the U.S. Federal Acquisition Regulations (FAR) were updated to require federal agencies to make purchases based on EPEAT status. In 2015, EPEAT added Imaging Equipment and Television categories. Today many large companies, such as Amazon and Apple, use EPEAT standards. EPEAT is widely accepted, with over 43 countries participating, and the number continues to grow.
Summary
Information systems hardware consists of the components of digital technology that you can touch. In this chapter, we focused on the personal computer and its components. We reviewed the personal computer configuration because it has many of the same attributes as other digital computing devices. A personal computer comprises many components, most importantly the CPU, motherboard, RAM, hard disk, removable media, and input/output devices. We also reviewed some personal computer variations, such as the tablet computer and the smartphone, along with technologies such as Bluetooth. Consistent with Moore’s Law, these technologies have improved quickly over the years, making today’s computing devices much more powerful than devices from just a few years ago. Finally, we discussed two of the consequences of this evolution: the commoditization of the personal computer and the problem of electronic waste.
2.07: Study Questions
Study Questions
1. Write your own description of what the term information systems hardware means.
2. Explain why Moore’s Law may not be a valid theory in the next five years.
3. Write a summary of one of the items linked to in the “Integrated Computing” section.
4. Explain why the personal computer is now considered a commodity.
5. What is the difference between USB and a USB port, and what was the reason USB was needed?
6. List the following in increasing order (slowest to fastest): megahertz, kilohertz, gigahertz.
7. What are the differences between HDD and SSD?
8. Why are desktops declining in popularity?
9. What is IoT?
10. Why is Apple a leader in the computer industry?
Exercises
1. Review the sidebar on the binary number system. How would you represent the number 16 in binary? How about the number 100? Besides decimal and binary, other number bases are used in computing and programming. One of the most used bases is hexadecimal, which is base-16. In base-16, the numerals 0 through 9 are supplemented with the letters A (10) through F (15). How would you represent the decimal number 100 in hexadecimal?
2. Go to Old-Computer.com - Pick one computer from the listing and write a brief summary. Include the specifications for CPU, memory, and screen size. Now find the specifications of a computer being offered for sale today and compare. Did Moore’s Law hold?
3. Under the category of IoT, pick two products and explain how IoT has changed the product. Review the price before and after the technology was introduced. Has this new technology increased popularity for the item?
4. Go on the web and compare and contrast two smartphones on the market. Is one better than the other, and if so, why? Be sure to include the price.
5. Review the e-waste policies in your area. Do you feel they are helping or ignoring this growing crisis?
6. Now find at least two more scholarly articles on this topic. Prepare a PowerPoint of at least 10 slides that summarize the issue and recommend a possible solution based on your research.
7. As with any technology text, there have been advances in technologies since publication. What technology that has been developed recently would you add to this chapter?
8. What is the current state of solid-state drives vs. hard disks? Describe the ideal user for each. Do original research online where you can compare prices on solid-state drives and hard disks. Be sure you note the differences in price, capacity, and speed. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business/01%3A_What_Is_an_Information_System/02%3A_Hardware/2.06%3A_Summary.txt |
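As a quick way to check base-conversion work like that in Exercise 1, Python's built-in `bin`, `hex`, and `int` functions convert between decimal, binary, and hexadecimal. This is only a checking aid, not a substitute for doing the conversions by hand.

```python
# Decimal to binary: 16 is a power of two, so its binary form is a 1
# followed by four 0s.
print(bin(16))        # '0b10000'

# Decimal 100 in binary: 64 + 32 + 4 = 100.
print(bin(100))       # '0b1100100'

# Decimal 100 in hexadecimal: 6*16 + 4 = 100.
print(hex(100))       # '0x64'

# Converting back: interpret the string "64" in base 16.
print(int("64", 16))  # 100
```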
Learning Objectives
Upon successful completion of this chapter, you will be able to:
Software and hardware cannot function without each other. Without software, hardware is useless; without hardware, software has nothing to run on. This chapter discusses the types of software, their purpose, and how they support different hardware devices, individuals, groups, and organizations.
03: Software
The second component of an information system is software. Software is the means to take a user’s data and process it to perform its intended action. Software translates what users want to do into a set of instructions that tell the hardware what to do. A set of instructions is also called a computer program. For example, when a user presses the letter ‘A’ key on the keyboard while using a word processing app, it is the word processing software that tells the hardware that the user pressed the ‘A’ key and fetches the image of the letter A to display on the screen as feedback that the user’s input was received correctly.
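To illustrate the idea that a program is just an ordered set of instructions, here is a minimal Python sketch (not from the text) that loosely mirrors the word-processor example: it takes the key the user pressed and produces the feedback to display on the screen.

```python
# A program is an ordered set of instructions the hardware carries out.
# This toy function mirrors the word-processor example: given a pressed
# key, return the feedback the software would display to the user.
def echo_key(key):
    return f"Displayed on screen: {key}"


print(echo_key("A"))  # Displayed on screen: A
```

Real word-processing software is, of course, vastly larger, but it is built from the same ingredient: instructions that map user input to hardware actions.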
Software is created through the process of programming. We will cover the creation of software in this chapter and more detail in chapter 10. In essence, hardware is the machine, and software is the intelligence that tells the hardware what to do. Without software, the hardware would not be functional.
3.02: Types of Software
The software component can be broadly divided into two categories: system software and application software.
System software is a collection of computer programs that provide a software platform for other software programs. It also insulates the specifics of the hardware from the applications and users as much as possible by managing the hardware and the networks. It consists of operating systems and utility programs.
Application software is a computer program that delivers a specific activity for the user (for example, creating a document or drawing a picture). It can be built for either individual users or entire organizations.
System Software
Operating Systems
The operating system provides several essential functions, including:
1. Managing the hardware resources of the computer
2. Providing the user-interface components
3. Providing a platform for software developers to write applications.
An operating system (OS) is a key component of the system software. Examples of popular operating systems are Google AndroidTM, Microsoft WindowsTM, and Apple iOSTM.
An OS is a set of programs that coordinates hardware components and other programs and acts as an interface with application software and networks. Some examples of its work include getting input from a keyboard, displaying output to a screen, and storing or retrieving data from a disk drive.
The picture above shows the operating system at the center: it accepts input from various input devices such as a mouse, a keyboard, a digital pen, or speech recognition; sends output to devices such as a screen monitor or a printer; acts as an intermediary for applications and apps; and accesses the Internet via network devices such as a router or a web server.
In 1984, Apple introduced the Macintosh computer, featuring an operating system with a graphical user interface, now known as macOS. Apple has different names for its OS running on different devices such as iOS, iPadOS, watchOS, and tvOS.
In 1986, in response to Apple, Microsoft introduced the Microsoft Windows operating system, commonly known as Windows, as a new graphical user interface for its then command-based operating system, MS-DOS, which had been developed for IBM’s personal computer (where it shipped as IBM PC DOS). By the 1990s, Windows dominated the desktop personal computer market as the top OS, having overtaken Apple’s OS.
A third personal-computer operating system family that is gaining in popularity is Linux. Linux is a version of the Unix operating system that runs on a personal computer. Unix is an operating system used primarily by scientists and engineers on larger minicomputers. These computers, however, are costly, and software developer Linus Torvalds wanted to find a way to make Unix run on less expensive personal computers: Linux was the result. Linux has many variations and now powers a large percentage of web servers in the world. It is also an example of open-source software, a topic we will cover later in this chapter.
In 2007, Google introduced Android specifically to support mobile devices such as smartphones and tablets. It is based on the Linux kernel and other open-source software and was developed by a consortium of developers. Android quickly became the top OS for mobile devices, overtaking Microsoft.
Operating systems have continuously improved, adding features that increase speed and performance, process more data at once, and access more memory. Features such as multitasking, virtual memory, and voice input have become standard in modern operating systems.
All computing devices run an operating system, as shown in the below table. The most popular operating systems are Microsoft’s Windows, Apple’s operating system, and different Linux versions for personal computers. Smartphones and tablets run operating systems as well, such as Apple’s iOS and Google’s Android.
Computing Devices and Operating Systems
• Microsoft Windows: desktop, Windows 10; mobile, Windows 10
• Apple OS: desktop, macOS; mobile, iOS
• Various versions of Linux: desktop, Ubuntu; mobile, Android (Google)
According to netmarketshare.com (2020), from August 2019 to August 2020, Windows retained its dominant position on the desktop with over 87% market share. In mobile, however, it trails Android, which leads with over 70% market share, followed by Apple’s iOS with over 28%.
Sidebar: Why Is Microsoft Software So Dominant in the Business World?
As we learned in chapter 1, almost all businesses used IBM mainframe computers back in the 1960s and 1970s. These same businesses shied away from personal computers until IBM released the PC in 1981. Initially, choosing IBM was a low-risk decision since IBM was dominant: a safe choice. Another reason might be that once a business selects an operating system as its standard solution, it will invest in additional software, hardware, and services built for this OS. The cost of switching to another OS becomes a hurdle, both financially and in retraining the workforce.
Utility
Utility software includes software that is specific-purposed and focused on keeping the infrastructure healthy. Examples include antivirus software to scan and stop computer viruses and disk defragmentation software to optimize files' storage. Over time, some of the popular utilities were absorbed as features of an operating system.
Application or App Software
The second major category of software is application software. While system software focuses on running the computers, application software allows the end-user to accomplish some goal or purpose. Examples include a word processor, photo editor, spreadsheet, or browser. Application software is grouped into many categories, including:
• Killer app
• Productivity
• Enterprise
• Mobile
The “Killer” App
When a new type of digital device is invented, there are generally a small group of technology enthusiasts who will purchase it just for the joy of figuring out how it works. A “killer” application runs only on one OS platform and becomes so essential that many people will buy a device on that OS platform just to run that application. For the personal computer, the killer application was the spreadsheet. In 1979, VisiCalc, the first personal-computer spreadsheet package, was introduced. It was an immediate hit and drove sales of the Apple II. It also solidified the value of the personal computer beyond the relatively small circle of technology geeks. When the IBM PC was released, another spreadsheet program, Lotus 1-2-3, was the killer app for business users. Today, Microsoft Excel dominates as the spreadsheet program, running on all the popular operating systems.
Productivity Software
Along with the spreadsheet, several other software applications have become standard tools for the workplace. These applications, called productivity software, allow office employees to complete their daily work. Many times, these applications come packaged together, such as in Microsoft’s Office suite. Here is a list of these applications and their basic functions:
• Word processing: This class of software provides for the creation of written documents. Functions include the ability to type and edit text, format fonts and paragraphs, and add, move, and delete text throughout the document. Most modern word-processing programs also have the ability to add tables, images, voice, videos, and various layout and formatting features to the document. Word processors save their documents as electronic files in a variety of formats. The most popular word-processing package is Microsoft Word, which saves its files in the Docx format. This format can be read/written by many other word-processor packages or converted to other formats such as Adobe’s PDF.
• Spreadsheet: This class of software provides a way to do numeric calculations and analysis. The working area is divided into rows and columns, where users can enter numbers, text, or formulas. The formulas make a spreadsheet powerful, allowing the user to develop complex calculations that can change based on the numbers entered. Most spreadsheets also include the ability to create charts based on the data entered. The most popular spreadsheet package is Microsoft Excel, which saves its files in the XLSX format. Just as with word processors, many other spreadsheet packages can read and write to this file format.
• Presentation: This software class provides for the creation of slideshow presentations that can be shared, printed, or projected on a screen. Users can add text, images, audio, video, and other media elements to the slides. Microsoft’s PowerPoint remains the most popular software, saving its files in PPTX format.
• Office Suite: Microsoft popularized the idea of the office-software productivity bundle with their release of Microsoft Office. Some office suites include other types of software. For example, Microsoft Office includes Outlook, its e-mail package, and OneNote, an information-gathering collaboration tool. The professional version of Office also includes Microsoft Access, a database package. (Databases are covered more in chapter 4.) This package continues to dominate the market, and most businesses expect employees to know how to use this software. However, many competitors to Microsoft Office exist and are compatible with Microsoft's file formats (see table below). Microsoft now has a cloud-based version called Microsoft Office 365. Similar to Google Drive, this suite allows users to edit and share documents online utilizing cloud-computing technology. Cloud computing will be discussed later in this chapter.
Sidebar: “PowerPointed” to Death
As presentation software, specifically Microsoft PowerPoint, has gained acceptance as the primary method to formally present information in a business setting, the art of giving an engaging presentation is becoming rare. Many presenters now just read the bullet points in the presentation and immediately bore those in attendance who can already read it for themselves.
The real problem is not with PowerPoint as much as it is with the person creating and presenting. The book Presentation Zen by Garr Reynolds is highly recommended to anyone who wants to improve their presentation skills.
New opportunities have been presented to make presentation software more effective. One such example is Prezi. Prezi is a presentation tool that uses a single canvas for the presentation, allowing presenters to place text, images, and other media on the canvas and then navigate between these objects as they present.
Enterprise Software
As the personal computer proliferated inside organizations, control over the information generated by the organization began splintering. For example, the customer service department creates a customer database to track calls and problem reports. The sales department also creates a database to keep track of customer information. Which one should be used as the master list of customers? As another example, someone in sales might create a spreadsheet to calculate sales revenue, while someone in finance creates a different one that meets their department's needs. However, the two spreadsheets will likely come up with different totals for revenue. Which one is correct? And who is managing all this information? This type of example presents challenges to management to make effective decisions.
Enterprise Resource Planning
In the 1990s, the need to bring the organization’s information back under centralized control became more apparent. The enterprise resource planning (ERP) system (sometimes just called enterprise software) was developed to bring together an entire organization in one software application. Key characteristics of an ERP include:
• An integrated set of modules: Each module serves different functions in an organization, such as Marketing, Sales, Manufacturing.
• A consistent user interface: An ERP is a software application that provides a common interface across all modules of the ERP and is used by an organization’s employees to access information
• A common database: All users of the ERP edit and save their information in the same data source. This means there is only one customer database, only one calculation for revenue, etc.
• Integrated business processes: All users must follow the same business rules and processes throughout the entire organization. ERP systems include functionality that covers all of the essential components of a business, such as how organizations track cash, invoices, purchases, payroll, product development, and supply chain.
ERP systems were originally marketed to large corporations, given that they are costly. However, as more and more large companies began installing them, ERP vendors began targeting mid-sized and even smaller businesses. Some of the more well-known ERP systems include those from SAP, Oracle, and Microsoft.
To effectively implement an ERP system in an organization, the organization must be ready to make a full commitment, including the cost to train employees as part of the implementation.
All aspects of the organization are affected as old systems are replaced by the ERP system. In general, implementing an ERP system can take two to three years and several million dollars.
So why implement an ERP system? If done properly, an ERP system can bring an organization a good return on its investment. By consolidating information systems across the enterprise and using the software to enforce best practices, most organizations see an overall improvement after implementing an ERP. Business processes as a form of competitive advantage will be covered in chapter 9.
Customer Relationship Management
A customer relationship management (CRM) system is a software application designed to manage customer interactions, including customer service, marketing, and sales. It collects all data about the customers. The objectives of a CRM are:
• Personalize customer relationship to increase customer loyalty
• Improve communication
• Anticipate needs to retain existing or acquire new customers
Some ERP software systems include CRM modules. An example of a well-known CRM package is Salesforce.
Supply Chain Management
Many organizations must deal with the complex task of managing their supply chains. At its simplest, a supply chain is a linkage between an organization’s suppliers, its manufacturing facilities, and its products' distributors. Each link in the chain has a multiplying effect on the complexity of the process. For example, if there are two suppliers, one manufacturing facility, and two distributors, then there are 2 x 1 x 2 = 4 links to handle. However, if you add two more suppliers, another manufacturing facility, and two more distributors, then you have 4 x 2 x 4 = 32 links to manage.
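The multiplication above can be captured in a one-line function; this sketch simply reproduces the text's arithmetic to show how quickly the number of links grows as stages are added to the chain.

```python
# The number of supplier-to-distributor paths grows multiplicatively
# with the count of entities at each stage of the supply chain.
def supply_chain_links(suppliers, facilities, distributors):
    return suppliers * facilities * distributors


print(supply_chain_links(2, 1, 2))  # 4 links, as in the text
print(supply_chain_links(4, 2, 4))  # 32 links
```

Doubling each stage multiplied the management burden eightfold, which is why SCM systems exist to track these interconnections automatically.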
A supply chain management (SCM) system manages the interconnection between these links and the inventory of products in their various stages of development. The Association for Operations Management provides a full definition of supply chain management: “The design, planning, execution, control, and monitoring of supply chain activities to create net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand, and measuring performance globally.” [2] Most ERP systems include a supply chain management module.
Mobile Software
A mobile application, commonly called a mobile app, is a software application programmed to run specifically on a mobile device such as smartphones and tablets.
As we saw in chapter 2, smartphones and tablets are becoming a dominant form of computing, with many more smartphones being sold than personal computers. This means that organizations will have to get smart about developing software for mobile devices to stay relevant. With the rise in adoption of mobile devices, the number of apps has exploded into the millions (Forbes.com, 2020), and there is an app for just about anything a user is looking to do. Examples include apps such as a flashlight, a step counter, a plant identifier, and games.
We will discuss the question of building a mobile app in Chapter 10.
Historically, for software to run on a computer, an individual copy of the software had to be installed on the computer, either from a disk or, more recently, after being downloaded from the Internet. The concept of “cloud” computing changes this model.
“The cloud” refers to applications, services, and data stored in data centers, server farms, and storage servers and accessed by users via the Internet. In most cases, the users don’t know where their data is actually stored. Individuals and organizations use cloud computing.
You probably already use cloud computing in some form. For example, if you access your email via your web browser, you are using a form of cloud computing. If you use Google Drive’s applications, you are using cloud computing. While these are free versions of cloud computing, there is big business in providing applications and data storage over the web. Commercial and large-scale applications can also run in the cloud; for example, the entire Salesforce CRM suite is offered via the cloud. Cloud computing is not limited to web applications: it can also be used for services such as phone or video streaming.
Advantages of Cloud Computing
• No software to install or upgrades to maintain.
• Available from any computer that has access to the Internet.
• Can scale to a large number of users easily.
• New applications can be up and running very quickly.
• Services can be leased for a limited time on an as-needed basis.
• Your information is not lost if your hard disk crashes or your laptop is stolen.
• You are not limited by the available memory or disk space on your computer.
Disadvantages of Cloud Computing
• Your information is stored on someone else’s computer.
• You must have Internet access to use it. If you do not have access, you’re out of luck.
• You are relying on a third party to provide these services.
• You don’t always know how your data is protected from theft, or whether it is sold by your cloud service provider.
Cloud computing can greatly impact how organizations manage technology. For example, why is an IT department needed to purchase, configure, and manage personal computers and software when all that is really needed is an Internet connection?
Using a Private Cloud
Many organizations are understandably nervous about giving up control of their data and applications using cloud computing. But they also see the value in reducing the need for installing software and adding disk storage to local computers. A solution to this problem lies in the concept of a private cloud. While there are various private cloud models, the basic idea is for the cloud service provider to rent a specific portion of their server space exclusive to a specific organization. The organization has full control over that server space while still gaining some of the benefits of cloud computing.
Virtualization
One technology that is utilized extensively as part of cloud computing is “virtualization.” Virtualization is the use of software to create a virtual machine that simulates a computer with an operating system. For example, using virtualization, a single computer that runs Microsoft Windows can host a virtual machine that looks like a computer with a specific Linux-based OS. This ability maximizes the use of available resources on a single machine. Companies such as EMC provide virtualization software that allows cloud service providers to provision web servers to their clients quickly and efficiently. Organizations are also implementing virtualization to reduce the number of servers needed to provide the necessary services.
We just discussed different types of software and can now ask: How is software created? If software is the set of instructions that tells the hardware what to do, how are these instructions written? If a computer reads everything as ones and zeroes, do we have to learn how to write software that way? Thankfully, there is a type of software written especially for software developers to create system software and applications: programming languages. The people who can program are called computer programmers or software developers.
Analogous to a human language, a programming language consists of keywords, comments, symbols, and grammatical rules used to construct statements as valid instructions understandable by the computer to perform certain tasks. Using this language, a programmer writes a program (called the source code). Other software then processes the source code to convert the programming statements into a machine-readable form: the ones and zeroes necessary for the CPU to execute. This conversion process is known as compiling, and the software that performs it is called a compiler. Most of the time, programming is done inside a programming environment. For example, when you purchase a copy of Visual Studio from Microsoft, it provides developers with an editor to write the source code, a compiler, and help for many of Microsoft’s programming languages. Examples of well-known programming languages today include Java, PHP, and the various flavors of C (Visual C, C++, C#).
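The source-to-machine-readable translation described above can be glimpsed with Python's built-in `compile()` and `exec()` functions, which turn source text into an executable code object (Python compiles to bytecode for its virtual machine rather than directly to CPU instructions, so this is an analogy, not a full compiler):

```python
# A program starts as human-readable source code...
source = 'print("Hello, world")'

# ...which is translated into an executable, machine-oriented form.
code_obj = compile(source, "<example>", "exec")

# Running the compiled form carries out the instructions.
exec(code_obj)  # prints: Hello, world
```

A full compiler such as the one in Visual Studio performs the same conceptual step, but produces native machine code for the CPU.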
Thousands of programming languages have been created since Ada Lovelace wrote the first program in 1843. One of the earlier English-like languages, COBOL, has been in use since the 1950s and still powers services we use today, such as payroll and reservation systems. The C programming language was introduced in the 1970s and remains a popular choice. Newer languages such as C# and Swift are gaining momentum as well. Programmers select the language best matched to the problem to be solved on a particular OS platform. For example, languages such as HTML and JavaScript are used to develop web pages.
It is hard to determine which language is the most popular since it varies. However, according to TIOBE Index, one of the companies that rank the popularity of the programming languages monthly, the top five in August 2020 are C, Java, Python, C++, and C# (2020). For more information on this methodology, please visit the TIOBE definition page. For those who wish to learn more about programming, Python is a good first language to learn because not only is it a modern language for web development, it is simple to learn and covers many fundamental concepts of programming that apply to other languages.
One person can write some programs. However, most software programs are written by many developers. For example, it takes hundreds of software engineers to write Microsoft Windows or Excel. To ensure teams can deliver timely and quality software with the least amount of errors, also known as bugs, formal project management methodologies are used, a topic that we will discuss in chapter 10.
Open-Source vs. Closed-Source Software
When the personal computer was first released, computer enthusiasts immediately banded together to build applications and solve problems. These computer enthusiasts were happy to share any programs they built and solutions to problems they found; this collaboration enabled them to innovate more quickly and fix problems.
As software began to become a business, however, this idea of sharing everything fell out of favor for some. When a software program takes hundreds of hours to develop, it is understandable that the programmers do not want to give it away. This led to a new business model of restrictive software licensing, which required payment for software to the owner, a model that is still dominant today. This model is sometimes referred to as closed source, as the source code remains private property and is not made available to others. Microsoft Windows, Excel, and Apple iOS are examples of closed-source software.
There are many, however, who feel that software should not be restricted. Like those early hobbyists in the 1970s, they feel that innovation and progress can be made much more rapidly if we share what we learn. In the 1990s, with Internet access connecting more and more people, the open-source movement gained steam.
Open-source software is software that has the source code available for anyone to copy and use. For non-programmers, it won’t be of much use unless the compiled format is also made available for users to use. However, for programmers, the open-source movement has led to developing some of the world's most-used software, including the Firefox browser, the Linux operating system, and the Apache webserver.
Some people and many businesses are wary of open-source software precisely because the source code is available for anyone to see, and they feel this increases the risk of an attack. Others counter that this openness actually decreases the risk: because the code is exposed to thousands of programmers who contribute to open-source projects, bugs are found and fixed and features are added much faster than in closed-source software, and vulnerabilities can be patched quickly.
In summary, some benefits of the open-source model are:
• The software is available for free.
• The software source code is available; it can be examined and reviewed before it is installed.
• The large community of programmers who work on open-source projects leads to quick bug-fixing and feature additions.
Some benefits of the closed-source model are:
• Providing a financial incentive for software developers or companies
• Technical support from the company that developed the software.
Today there are thousands of open-source software applications available for download. An example of open-source productivity software is Open Office Suite. One good place to search for open-source software is , where thousands of software applications are available for free download.
Software Licenses
The companies or developers own the software they create. The software is protected by law either through patents, copyright, or licenses. It is up to the software owners to grant their users the right to use the software through the terms of the licenses.
For closed-source vendors, the terms vary depending on the price the users are willing to pay. Examples include single user, single installation, multi-users, multi-installations, per network, or machine.
Open-source vendors grant specific permission levels for using the source code and set conditions for modified versions. Examples include being free to distribute, remix, and adapt for non-commercial use, with the condition that the newly revised source code must also be licensed under identical terms. While open-source vendors don’t make money by charging for their software, they generate revenue through donations or by selling technical support or related services. For example, Wikipedia is a widely popular, free online encyclopedia used by millions of users, yet it relies mainly on donations to sustain its staff and infrastructure.
Reference
TIOBE Index for August 2020. Retrieved September 4, 2020, from https://www.tiobe.com
3.05: Summary
Software gives the instructions that tell the hardware what to do. There are two basic categories of software: operating systems and applications. Operating systems provide access to the computer hardware and make system resources available. Application software is designed to meet a specific goal. Productivity software is a subset of application software that provides basic business functionality to a personal computer: word processing, spreadsheets, and presentations. An ERP system is a software application with a centralized database that is implemented across the entire organization. Cloud computing is a software delivery method in which applications run on remote servers and are accessed from any computer with a web browser and an Internet connection. Software is developed through a process called programming, in which a programmer uses a programming language to put together the logic needed to create the program. Software can be distributed under an open-source or closed-source model, with users or developers granted different licensing terms.
3.06: Study Questions
Study Questions
1. Give your own definition of software. Explain the key terms in your definition.
2. Identify the key functions of the operating system.
3. Match which of the following are operating systems and which are applications: Microsoft Excel, Google Chrome, iTunes, Windows, Android, Angry Birds.
4. List your favorite software application and explain what tasks it helps you accomplish.
5. Explain what a “killer” app is and identify the killer app for the PC.
6. List at least three basic categories of mobile apps and give an example of each.
7. Explain what an ERP system does.
8. Explain the difference between open-source software and closed-source software. Give an example of each.
9. Describe what a software license is.
10. Explain the process of creating a software program.
Exercises
1. Go online and find a case study about the implementation of an ERP system. Was it successful? How long did it take? Does the case study tell you how much money the organization spent?
2. What ERP system does your university or place of employment use? Find out which one they use and see how it compares to other ERP systems.
3. If you were running a small business with limited funds for information technology, would you consider using cloud computing? Find some web-based resources that support your decision.
4. Download and install . Use it to create a document or spreadsheet. How does it compare to Microsoft Office? Does the fact that you got it for free make it feel less valuable?
5. Go to and review their most downloaded software applications. Report back on the variety of applications you find. Then pick one that interests you and report back on what it does, the kind of technical support offered, and the user reviews.
6. Go online to research the security risks of open-source software. Write a short analysis giving your opinion on the different risks discussed.
7. What are three examples of programming languages? What makes each of these languages useful to programmers?
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Explain the differences between data, information, and knowledge;
• Define the term database and identify the steps to creating one;
• Describe the role of a database management system;
• Describe the characteristics of a data warehouse; and
• Define data mining and describe its role in an organization.
This chapter explores how organizations use information systems to turn data into information and knowledge to be used for competitive advantage. We will discuss how different types of data are captured and managed, different types of databases, and how individuals and organizations use them.
04: Data and Databases
Introduction
You have already been introduced to the first two components of information systems: hardware and software. However, those two components by themselves do not make a computer useful. Imagine if you turned on a computer, started typing a document, but could not save a document. Imagine if you opened your music app, but there was no music to play. Imagine opening a web browser, but there were no web pages. Without data, hardware and software are not very useful! Data is the third component of an information system.
Data, Information, Knowledge, and Wisdom
Data is raw bits and pieces of information with no context, for example, your driver's license number or your first name. An information system organizes this data in a designed, systematic manner so that it is useful to the user, whether an individual or a business. An organized collection of interrelated data is called a database. At the highest level, data is either quantitative or qualitative; which kind to gather depends on the question to be answered and the available resources. Quantitative data is numeric, the result of a measurement, count, or some other mathematical calculation. A quantitative example would be how many 5th graders attended music camp this summer. Qualitative data consists of words, descriptions, and narratives. A qualitative example would be a camper wearing a red tee-shirt. A number can be qualitative as well: if I tell you my favorite number is 5, that is qualitative data because it is descriptive, not the result of a measurement or mathematical calculation.
When using qualitative and quantitative data, we need to understand the context of their use. Each has advantages and disadvantages when gathering data.
By itself, data is a collection of components waiting to be analyzed. To be useful, it needs to be given context. Users and designers create meaning as they collect, reference, and organize the data. Information typically involves manipulating raw data to obtain an indication of magnitude, trends, and patterns for a purpose. For example, if I told you that 15, 23, 14, and 85 are the numbers of students that had registered for an upcoming camp, that would be information: by adding the context that the numbers represent counts of students registering for specific classes, the data becomes information. Information is data that has been analyzed, processed, and structured so that it is useful.
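The step from data to information can be sketched in a few lines: raw numbers gain meaning when paired with context, and aggregation produces further information (the class names here are illustrative, not from the text):

```python
# Data: raw numbers with no context
data = [15, 23, 14, 85]

# Adding context (which class each count belongs to) turns data into information
registrations = dict(zip(["Art", "Music", "Drama", "Coding"], data))

# Aggregation and analysis produce further information
total = sum(registrations.values())
largest = max(registrations, key=registrations.get)

print(total)    # total students registered across all classes
print(largest)  # the class with the most registrations
```

The list `data` alone tells us nothing; the dictionary and the aggregates answer questions a decision-maker might actually ask.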
Once we collect and understand the data, we put it into context, aggregate it, and analyze it. We then have information, which we can use to make decisions for ourselves and our organizations. We can say that this consumption of information produces knowledge. Knowledge can be viewed as information that facilitates action: it can be used to make decisions, set policies, and even spark innovation.
The final step up the information ladder is the step from knowledge (knowing a lot about a topic) to wisdom.
Wisdom is experience coupled with understanding and insight. We can say that someone has wisdom when they combine their knowledge and experience to produce a deeper understanding of a topic. Developing wisdom on a particular topic often takes many years and requires patience and expertise.
Data can be anything. Some examples of data are weights, prices, costs, numbers of items sold, names, and places. Almost all software programs require data to do something useful. Data can be straightforward, such as the name of a place, a person, or a number. For example, when you edit a document in a word processor such as Microsoft Word, the document you are working on is the data, and the word-processing software can manipulate that data: create a new document, duplicate a document, or modify a document. Today we also have a new type of data called biometrics: physical or behavioral human characteristics that can digitally identify a person. Examples include facial recognition used for passports, fingerprint authentication used to unlock smartphones, and iris recognition, which uses high-resolution images of the iris. This data is stored for future identification. Many governments and high-security companies use iris recognition because it is considered highly accurate when identifying individuals.
Databases
Many information systems aim to transform data into information to generate knowledge that can be used for decision-making. To do this, the system must take or read the data, then put the data into context, and provide tools for aggregation and analysis. A database is designed for just such a purpose.
A database is an organized, meaningful collection of related information. It is organized because, in a database, all data is interrelated and associated with other data. All information in a database should be related; separate databases should be created to manage unrelated information. For example, a database that contains information about employees' payroll should not also hold information about the company’s stock prices. Digital databases range from simple tables created in a program such as MS Excel to the complex databases people use every day, from checking your balance at the bank to accessing medical records and online shopping. Databases help us eliminate redundant information and enable more effective searches. Before computers, a database might have been a filing cabinet. For this text, we will only consider digital databases.
Relational Databases
Databases can be organized in many different ways and thus take many forms. A database management system (DBMS) is software that facilitates the organization and manipulation of data. A DBMS functions as an interface between the database and the end-user; it is designed to store, define, retrieve, and manage the data in the database.

The most common form of database today is the relational database. Examples of relational database systems are Oracle, MySQL, Microsoft SQL Server, and PostgreSQL. A relational database stores data in an organized fashion of rows and columns, forming one or more tables of related information. Each table has a set of fields, which define the nature of the data stored in the table. A record is one instance of a set of fields in a table. To visualize this, think of a spreadsheet: the records are the rows of the table and the fields are the columns. In the example below, we have a table of student information, with each row representing a student and each column representing one piece of information about the student.

The relational database model does not scale well. The term scale here refers to a larger and larger database being distributed across a larger number of computers connected via a network. Some companies provide large-scale database solutions by moving away from the relational model to other, more flexible models. For example, Google offers the App Engine Datastore, which is based on NoSQL; developers can use it to build applications that access data from anywhere in the world. Amazon.com offers several database services for enterprise use, including Amazon RDS, a relational database service, and Amazon DynamoDB, a NoSQL enterprise solution.
Relational Database Example
Figure \(2\): Relational database table adapted from David Bourgeois, Ph.D. is licensed under CC BY 4.0
Fields are the columns; records are the rows.

| First Name | Last Name | Major | Birthdate |
|---|---|---|---|
| Ann Marie | Strong | Pre-Law | 2/27/1997 |
| Evan | Right | Business | 12/4/1996 |
| Michelle | Smith | Math | 6/27/1995 |
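The Students table above can be reproduced with Python's built-in `sqlite3` module; the rows and columns map directly to records and fields (the table and column names are adapted from the example, and the database is a throwaway in-memory one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # an in-memory database for experimenting
conn.execute("""CREATE TABLE Students (
    FirstName TEXT, LastName TEXT, Major TEXT, Birthdate TEXT)""")

# Each tuple is one record (row); each position is one field (column)
conn.executemany("INSERT INTO Students VALUES (?, ?, ?, ?)", [
    ("Ann Marie", "Strong", "Pre-Law", "1997-02-27"),
    ("Evan", "Right", "Business", "1996-12-04"),
    ("Michelle", "Smith", "Math", "1995-06-27"),
])

rows = conn.execute(
    "SELECT FirstName, LastName FROM Students WHERE Major = 'Math'").fetchall()
print(rows)  # [('Michelle', 'Smith')]
```

Because the data is organized into fields, the database can answer questions like "which students major in Math?" with a single query.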
4.03: Structured Query Language
Once you have a database designed and loaded with data, how will you do something useful with it? The primary way to work with a relational database is to use Structured Query Language, SQL (pronounced “sequel,” or stated as S-Q-L). Almost all applications that work with databases (such as database management systems, discussed below) use SQL to analyze and manipulate relational data. As its name implies, SQL is a language that can be used to work with a relational database or for streaming processing in a relational data stream management system. From a simple request for data to a complex update operation, SQL is a mainstay of programmers and database administrators. To give you a taste of what SQL might look like, here are a couple of examples using our Student Clubs database.
• The following query will retrieve a list of the first and last names of the club presidents:
SELECT "First Name", "Last Name" FROM "Students" WHERE "Students.ID" = "Clubs.President"
• The following query will create a list of the number of students in each club, listing the club name and then the number of members:
SELECT "Clubs.Club Name", COUNT("Memberships.Student ID") FROM "Clubs" LEFT JOIN "Memberships" ON "Clubs.Club ID" = "Memberships.Club ID" GROUP BY "Clubs.Club Name"
An in-depth description of how SQL works is beyond this introductory text's scope. Still, these examples should give you an idea of the power of using SQL to manipulate relational data. Many database packages, such as Microsoft Access, allow you to visually create the query you want to construct and then generate the SQL query for you.
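SQL queries like the membership count above can be tried out with Python's built-in `sqlite3` module. This sketch uses simplified table and column names and made-up sample data, but the `LEFT JOIN` / `GROUP BY` / `COUNT` pattern is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Clubs (ClubID INTEGER PRIMARY KEY, ClubName TEXT);
    CREATE TABLE Memberships (StudentID INTEGER, ClubID INTEGER);
    INSERT INTO Clubs VALUES (1, 'Chess'), (2, 'Robotics');
    INSERT INTO Memberships VALUES (101, 1), (102, 1), (103, 2);
""")

# Count the number of members in each club, listing the club name first
counts = conn.execute("""
    SELECT Clubs.ClubName, COUNT(Memberships.StudentID)
    FROM Clubs LEFT JOIN Memberships ON Clubs.ClubID = Memberships.ClubID
    GROUP BY Clubs.ClubName
""").fetchall()
print(sorted(counts))  # [('Chess', 2), ('Robotics', 1)]
```

The `LEFT JOIN` ensures that a club with zero members would still appear in the result, with a count of 0.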
Rows and Columns in a Table
In a relational database, all the tables are related by one or more fields so that it is possible to connect all the tables in the database through the field(s) they have in common. For each table, one of the fields is identified as a primary key. This key is the unique identifier for each record in the table. To help you understand these terms further, let’s walk through the process of designing the following database. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business/01%3A_What_Is_an_Information_System/04%3A_Data_and_Databases/4.02%3A_Examples_of_Data.txt |
Designing a Database
Suppose a university wants to create a database to track participation in student clubs. After interviewing several people, the design team learns that implementing the system is to give better insight into how the university funds clubs. This will be accomplished by tracking how many members each club has and how active the clubs are. The team decides that the system must keep track of the clubs, their members, and their events. Using this information, the design team determines that the following tables need to be created:
• Clubs: this will track the club name, the club president, and a short description of the club.
• Students: student name, e-mail, and year of birth.
• Memberships: this table will correlate students with clubs, allowing us to have any given student join multiple clubs.
• Events: this table will track when the clubs meet and how many students showed up.
Now that the design team has determined which tables to create, they need to define the specific information that each table will hold. This requires identifying the fields that will be in each table. For example, Club Name would be one of the fields in the Clubs table. First Name and Last Name would be fields in the Students table. Finally, since this will be a relational database, every table should have a field in common with at least one other table (in other words: they should have a relationship with each other).
To properly create this relationship, a primary key must be selected for each table. This key is a unique identifier for each record in the table. For example, in the Students table, it might be possible to use students’ first names to identify them uniquely. However, it is more than likely that some students will share the same first name (like Mike, Stefanie, or Chris), so a different field should be selected. A student’s email address might be a good choice for a primary key since email addresses are unique. However, a primary key cannot change, so this would mean that if students changed their email addresses, we would have to remove them from the database and then re-insert them, not an attractive proposition. Our solution is to create a value for each student, a user ID, that will act as a primary key. We will also do this for each of the student clubs. This solution is quite common and is the reason you have so many user IDs!
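The "generated user ID as primary key" idea looks like this in SQLite, where an `INTEGER PRIMARY KEY AUTOINCREMENT` column makes the database assign the IDs itself (the table layout and sample data here are simplified illustrations):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# StudentID is a surrogate primary key: generated by the database,
# guaranteed unique, and it never needs to change even if the email does.
conn.execute("""CREATE TABLE Students (
    StudentID INTEGER PRIMARY KEY AUTOINCREMENT,
    FirstName TEXT, Email TEXT)""")

conn.execute("INSERT INTO Students (FirstName, Email) VALUES ('Mike', 'mike@example.edu')")
conn.execute("INSERT INTO Students (FirstName, Email) VALUES ('Mike', 'mlee@example.edu')")

ids = [row[0] for row in conn.execute("SELECT StudentID FROM Students")]
print(ids)  # [1, 2]: two Mikes, but every record is still uniquely identified
```

Even though both students share a first name, each record remains uniquely addressable through its generated key.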
You can see the final database design in the figure below:
With this design, not only do we have a way to organize all of the information we need to meet the requirements, but we have also successfully related all the tables together. Here’s what the database tables might look like with some sample data. Note that the Memberships table has the sole purpose of allowing us to relate multiple students to multiple clubs.
Normalization
When designing a database, one important concept to understand is normalization. In simple terms, to normalize a database means to design it in a way that:
• Reduces redundancy of data between tables.
• Takes out inconsistent data.
• Information is stored in one place only.
• Gives the table as much flexibility as possible.
In the Student Clubs database design, the design team worked to achieve these objectives. For example, to track memberships, a simple solution might have been to create a Members field in the Clubs table and then list all of the members' names. However, this design would mean that if a student joined two clubs, then his or her information would have to be entered a second time. Instead, the designers solved this problem by using two tables: Students and Memberships.
In this design, when a student joins their first club, we must add the student to the Students table, where their first name, last name, e-mail address, and birth year are entered. This addition to the Students table will generate a student ID. Now we will add a new entry to denote that the student is a specific club member. This is accomplished by adding a record with the student ID and the club ID in the Memberships table. If this student joins a second club, we do not have to duplicate the student’s name, e-mail, and birth year; instead, we only need to make another entry in the Memberships table of the second club’s ID and the student’s ID.
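The Memberships design described above can be sketched with `sqlite3`: the student's details live in exactly one row, and each club joined adds only a small (StudentID, ClubID) pair (the sample names and IDs are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Students (StudentID INTEGER PRIMARY KEY, FirstName TEXT);
    CREATE TABLE Clubs (ClubID INTEGER PRIMARY KEY, ClubName TEXT);
    -- Memberships relates students to clubs without duplicating either
    CREATE TABLE Memberships (StudentID INTEGER, ClubID INTEGER);
    INSERT INTO Students VALUES (1, 'Dana');
    INSERT INTO Clubs VALUES (10, 'Chess'), (20, 'Robotics');
    -- Dana joins two clubs: two Memberships rows, but only one Students row
    INSERT INTO Memberships VALUES (1, 10), (1, 20);
""")

clubs = conn.execute("""
    SELECT Clubs.ClubName
    FROM Memberships JOIN Clubs ON Memberships.ClubID = Clubs.ClubID
    WHERE Memberships.StudentID = 1
""").fetchall()
print(sorted(name for (name,) in clubs))  # ['Chess', 'Robotics']
```

If Dana's email or name ever changes, only the single Students row is updated; the Memberships rows are untouched, which is exactly the redundancy reduction normalization aims for.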
The Student Clubs database design also makes it simple to change the design without major modifications to the existing structure. For example, if the design team was asked to add functionality to the system to track faculty advisors to the clubs, we could easily accomplish this by adding a Faculty Advisors table (similar to the Students table) and then adding a new field to the Clubs table to hold the Faculty Advisor ID.
Data Types
When defining the fields in a database table, we must give each field a data type. For example, the field Birth Year is a year, so it will be a number, while First Name will be text. Most modern databases allow for several different data types to be stored. Some of the more common data types are listed here:
• Text: for storing non-numeric data that is brief, generally under 256 characters. The database designer can identify the maximum length of the text.
• Number: for storing numbers. There are usually a few different number types that can be selected, depending on how large the largest number will be.
• Yes/No: a special form of the number data type that is (usually) one byte long, with a 0 for “No” or “False” and a 1 for “Yes” or “True.”
• Date/Time: a special form of the number data type that can be interpreted as a date or a time.
• Currency: a special form of the number data type that formats all values with a currency indicator and two decimal places.
• Paragraph Text: this data type allows for text longer than 256 characters.
• Object: this data type allows for data storage that cannot be entered via keyboards, such as an image or a music file.
Properly defining a field's data type improves the integrity of the data and ensures it is stored in the appropriate form. A data type also tells the database what functions can be performed with the data. For example, if we wish to perform mathematical functions with one of the fields, we must tell the database that the field is a number data type. So if we have a field storing birth year, we can subtract the number stored in that field from the current year to get age.
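For instance, a minimal sketch of that birth-year arithmetic (the value 1992 is purely illustrative):

```python
from datetime import date

# Because Birth Year is stored as a number, arithmetic is possible.
birth_year = 1992                      # illustrative value
age = date.today().year - birth_year

# Had the year been stored as text, it would need converting first:
age_from_text = date.today().year - int("1992")

print(age == age_from_text)  # True
```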
Allocation of storage space for the defined data must also be identified. For example, if the First Name field is defined as a text(50) data type, fifty characters are allocated for each first name we want to store. However, even if the first name is only five characters long, fifty characters (bytes) will be allocated. While this may not seem like a big deal, if our table ends up holding 50,000 names, we allocate 50 * 50,000 = 2,500,000 bytes for storage of these values. It may be prudent to reduce the field's size, so we do not waste storage space.
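The arithmetic above can be checked in a couple of lines (assuming, as the example does, one byte per character in a fixed-length field):

```python
# Storage allocated by a fixed-length text(50) field.
field_length = 50            # text(50)
names_stored = 50_000
bytes_allocated = field_length * names_stored
print(bytes_allocated)       # 2500000 bytes, regardless of name length
```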
When students are introduced to the concept of databases, they often decide that a database is similar to a spreadsheet. There are some similarities, but there are also some big differences that we will review. In some ways, a spreadsheet is simply a database waiting to grow up.
Let's start with the spreadsheet. It is easy to create, edit, and format, and it is simple for beginners to use. It is made up of columns and rows and stores data in an organized fashion similar to a database table. The two leading spreadsheet applications are Google Sheets and Microsoft Excel. One very convenient thing about modern spreadsheets is how easily a file can be shared and edited by multiple users; a database, by contrast, typically requires a database management system to coordinate that kind of access.
For simple uses, a spreadsheet can substitute for a database quite well. If a simple listing of rows and columns (a single table) is all that is needed, then creating a database is probably overkill. In our Student Clubs example, if we only needed to track a listing of clubs, the number of members, and the president's contact information, we could get away with a single spreadsheet. However, the need to include a listing of events and members' names would be problematic if tracked with a spreadsheet.
When several types of data must be mixed, or when the relationships between these types of data are complex, then a spreadsheet is not the best solution. A database allows data from several entities (such as students, clubs, memberships, and events) to be related together into one whole. While a spreadsheet does allow you to define what kinds of values can be entered into its cells, a database provides more intuitive and powerful ways to define the types of data that go into each field, reducing possible errors and allowing for easier analysis. Though not good for replacing databases, spreadsheets can be ideal tools for analyzing the data stored in a database. A spreadsheet package can be connected to a specific table or query in a database and used to create charts or perform analysis on that data.
A database looks much like a spreadsheet, with tables made up of columns and rows, but there are important differences. A database is a structured collection of data stored on a computer, and its fields (columns) are preconfigured with defined data types, whereas a spreadsheet's rows and columns can be freely edited. A database is also relational: it can create relationships between records and tables. Spreadsheets and databases can both be edited by multiple authors, but a database keeps a log as changes are made, which a spreadsheet does not. A spreadsheet is terrific for small projects, but a database becomes more useful as a project grows.
Streaming
Streaming is a convenient way to view on-demand audio or video from a remote server. Companies offer audio and video files from their servers that can be accessed remotely by the user; the data is transmitted from the server directly and continuously to your device. Streaming can be accessed by any device that connects to the internet, with no need for large amounts of memory or waiting for a large file to download. Streaming technology has become very popular because of its convenience and accessibility. Examples of streaming services include Netflix, iTunes, and YouTube.
Other Types of Databases
The relational database model is the most used today. However, many other database models exist that provide different strengths than the relational model. In the 1960s and 1970s, the hierarchical database model connected data in a hierarchy, allowing for a parent/child relationship between data. The document-centric model allowed for more unstructured data storage by placing data into “documents” that could then be manipulated.
NoSQL (from the phrase “not only SQL”) arose from the need to manage large-scale databases spread over several servers or even across the world. For a relational database to work properly, only one person can manipulate a piece of data at a time, a concept known as record-locking. But with today’s large-scale databases (think Google and Amazon), this is not always possible. A NoSQL database can work with data more loosely, allowing for a more unstructured environment, communicating changes to the data over time to all the servers that are part of the database. Many companies collect data for all sorts of reasons, from how many times you visit a site to what you are viewing at the site.
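As an illustration of the looser structure a document-oriented NoSQL store allows, the student and membership data from earlier might be kept in a single record; the field names below are illustrative assumptions, not any particular product's schema:

```python
# One "document" holding a student and all their memberships together,
# instead of splitting them across three relational tables.
student_doc = {
    "student_id": 1,
    "first_name": "Ada",
    "last_name": "Lee",
    "memberships": [
        {"club": "Chess Club", "joined": "2013-01-10"},
        {"club": "Robotics Club", "joined": "2013-02-02"},
    ],
}

# Documents in the same collection need not share a schema --
# this one has a field the first document lacks.
another_doc = {"student_id": 2, "first_name": "Ben", "advisor": "Dr. Kim"}

print(len(student_doc["memberships"]))  # 2
```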
Big Data refers to large, complex data sets that conventional database tools do not have the processing power to capture and analyze. Storing and analyzing that much data is beyond the power of traditional database management tools. Understanding the best tools and techniques to manage and analyze these large data sets is a problem that governments and businesses alike are trying to solve. Big data comes from many different areas, such as text, images, audio, and video. Businesses use this data for what is referred to as predictive analytics or user behavior analytics. Companies such as Walmart and Amazon now collect big data to see what their customers are searching for. Think of the number of customers and products these two powerhouses have and the amount of data generated.
4.07: Data Warehouse
As organizations have begun to utilize databases as the centerpiece of their operations, the need to fully understand and leverage the data they are collecting has become more and more apparent. However, directly analyzing the data needed for day-to-day operations is not a good idea; we do not want to tax the company's operations more than we need to. Further, organizations also want to analyze data in a historical sense: How does the data we have today compare with the same data set this time last month or last year? From these needs arose the concept of the data warehouse.
The data warehouse concept is simple: extract data from one or more of the organization’s databases and load it into the data warehouse (which is itself another database) for storage and analysis. However, the execution of this concept is not that simple. A data warehouse should be designed so that it meets the following criteria:
• It uses non-operational data. This means that the data warehouse uses a copy of data from the active databases that the company uses in its day-to-day operations, so the data warehouse must pull data from the existing databases on a regular, scheduled basis.
• The data is time-variant. This means that whenever data is loaded into the data warehouse, it receives a timestamp, which allows for comparisons between different time periods.
• The data is standardized. Because the data in a data warehouse usually comes from several different sources, it is possible that the data does not use the same definitions or units. For example, our Events table in our Student Clubs database lists the event dates using the mm/dd/yyyy format (e.g., 01/10/2013). A table in another database might use the format yy/mm/dd (e.g., 13/01/10) for dates. For the data warehouse to match up the dates, a standard date format would have to be agreed upon, and all data loaded into the data warehouse would have to be converted to use this standard format. This process is called extraction-transformation-load (ETL).
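The date-standardization transform described above might be sketched as follows (the function name and the choice of mm/dd/yyyy as the agreed standard are illustrative):

```python
from datetime import datetime

# The "transform" step of ETL: dates arriving in different source
# formats are converted to one agreed standard (here mm/dd/yyyy,
# as in the Student Clubs Events table).
def standardize(date_string, source_format):
    return datetime.strptime(date_string, source_format).strftime("%m/%d/%Y")

print(standardize("01/10/2013", "%m/%d/%Y"))  # already in the standard format
print(standardize("13/01/10", "%y/%m/%d"))    # a yy/mm/dd source
```

Both calls produce the same standardized value, so the warehouse can match the dates up.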
There are two primary schools of thought when designing a data warehouse: bottom-up and top-down. The bottom-up approach starts by creating small data warehouses, called data marts, to solve specific business problems. As these data marts are created, they can be combined into a larger data warehouse. The top-down approach suggests that we should start by creating an enterprise-wide data warehouse and then, as specific business needs are identified, create smaller data marts from the data warehouse.
Benefits of Data Warehouses
Organizations find data warehouses quite beneficial for many reasons:
• Ability to integrate data from multiple systems formatted with different software and compile it to gain deeper insight.
• The process of developing a data warehouse forces an organization to better understand the data it is currently collecting and, equally important, what data is not being collected.
• A data warehouse provides a centralized view of all data being collected across the enterprise and provides a means for determining inconsistent data.
• Once all data is identified as consistent, an organization can generate one version of the truth. This is important when the company wants to report consistent statistics about itself, such as revenue or numbers of employees.
• By having a data warehouse, snapshots of data can be taken over time. This creates a historical record of data, which allows for an analysis of trends.
• A data warehouse provides tools to combine data, which can provide new information and analysis. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business/01%3A_What_Is_an_Information_System/04%3A_Data_and_Databases/4.06%3A_Big_Data.txt |
Data mining is the process of sorting through large data sets (measured in terabytes) to find useful patterns. In the past, the problem was a lack of data to analyze; today, the challenge is an overabundance of data that must be reviewed, sometimes called data overload. This becomes an issue because the user needs to evaluate which information is useful and which is not. Many businesses mine data to get detailed insight into their customers and products and to optimize business decisions. The analysis is executed with sophisticated programs that can combine multiple databases. The data sets involved are so large that companies must find a way to store them, which is where data warehouses come in: the data warehouse is where the information used in data mining is stored and processed. The price for even a simple warehouse can start at $10 million.
Companies like Google, Netflix, Amazon, and Facebook are big users of data mining. They seek to find out who their consumers are, how best to keep them, and how to sell them more products; they also use mining to review their products. The means used are reviewing data and finding trends, patterns, and associations in order to make decisions. Generally, data mining is accomplished through automated means against extensive data sets, such as a data warehouse.
Examples of data mining include:
• An analysis of sales from a large grocery chain might determine that milk is purchased more frequently the day after it rains in cities with a population of less than 50,000.
• A bank may find that loan applicants whose bank accounts show particular deposit and withdrawal patterns are not good credit risks.
• A baseball team may find that collegiate baseball players with specific statistics in hitting, pitching, and fielding make for more successful major league players.
In some cases, a data-mining project is begun with a hypothetical result in mind. For example, a grocery chain may already have some idea that the buying patterns change after it rains and want to get a deeper understanding of exactly what is happening. In other cases, there are no presuppositions, and a data-mining program is run against large data sets to find patterns and associations.
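A toy version of the grocery-chain example shows the basic idea of looking for an association in the data; the records below are fabricated purely for illustration:

```python
from collections import defaultdict

# Fabricated sales records: did yesterday's rain change milk sales?
sales = [
    {"day_after_rain": True,  "milk_units": 120},
    {"day_after_rain": True,  "milk_units": 135},
    {"day_after_rain": False, "milk_units": 80},
    {"day_after_rain": False, "milk_units": 95},
]

# Group the sales by the weather condition, then compare averages.
groups = defaultdict(list)
for record in sales:
    groups[record["day_after_rain"]].append(record["milk_units"])

averages = {condition: sum(units) / len(units) for condition, units in groups.items()}
print(averages)  # {True: 127.5, False: 87.5}
```

Real data mining works the same way in spirit, but against millions of records and many candidate associations at once.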
4.09: Database Management Systems
A database looks like one or more files. For the data in the database to be read, changed, added, or removed, a software program must access it. The software creates a database by building tables, forms, reports, and other objects. Many software applications have this ability: iTunes can read its database to give you a listing of its songs (and play the songs); your mobile-phone software can interact with your list of contacts. Companies of all sizes use this software to streamline the data they have collected so it is useful for multiple purposes, such as marketing, customer service, and sales. Database management systems help businesses collect complex data and customize it for their own use. When selecting a database management system (DBMS), a company needs to know what it wants to accomplish and establish goals. Questions that need to be answered include: What software can you use to create a database, change a database’s structure, or analyze the data? For example, Apache OpenOffice.org Base can create, modify, and analyze databases in open-database (ODB) format. Microsoft’s Access DBMS is used to work with databases in its own Microsoft Access Database format. Both Access and Base have the ability to read and write to other database formats as well.
Microsoft Access and Open Office Base are examples of personal database-management systems. These systems are primarily used to develop and analyze single-user databases. These databases are not meant to be shared across a network or the Internet but are instead installed on a particular device and work with a single user at a time.
Organizations both small and large utilize enterprise databases to manage the large, complex data they collect. An enterprise database is robust enough to handle multiple users' queries simultaneously and can handle a range of 100 to 10,000 users at a time (Technopedia, 2020). Computers have become networked and are now joined worldwide via the Internet, and a class of databases has emerged that can be accessed by two, ten, or even a million people. These databases are sometimes installed on a single computer to be accessed by a group of people at a single location or a small company. They can also be installed over several servers worldwide, meant to be accessed by millions in large companies. These relational enterprise database packages are built and supported by companies such as Oracle, Microsoft, and IBM. The open-source MySQL is also an enterprise database. Open-source databases are free and can be shared, storing vital information in software that the organization can control. An open-source database allows users to create a system based on their unique requirements and business needs; the source code can be customized to match any user preference. Open-source databases address the need to analyze data from a growing number of new applications at a lower cost. The deluge of social media and the Internet of Things (IoT) has ushered in an age of massive data that needs to be collected and analyzed. The data only has value if an enterprise can analyze it to find useful patterns or real-time insights, and it contains vast amounts of information that can overload a traditional database. The flexibility and cost-effectiveness of open-source database software have revolutionized database management systems (Omnisci, 2020).
Sidebar: What Is Metadata?
The term metadata can be understood as “data about data.” For example, when looking at one of the values of Year of Birth in the Students table, the data itself may be “1992”. The metadata about that value would be the field name Year of Birth, the last updated time, and the data type (integer). Another example of metadata could be for an MP3 music file, like the one shown in the image below; information such as the song's length, the artist, the album, the file size, and even the album cover art is classified as metadata. When a database is being designed, a “data dictionary” is created to hold the metadata, defining its fields and structure.
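A data dictionary entry can be pictured as metadata keyed by field name; the specific metadata attributes shown here are illustrative assumptions:

```python
# A sketch of a data dictionary: metadata describing each field,
# kept separate from the values stored in those fields.
data_dictionary = {
    "Year of Birth": {
        "data_type": "integer",
        "description": "Four-digit year the student was born",
        "example_value": 1992,
    },
    "First Name": {
        "data_type": "text(50)",
        "description": "Student's given name",
        "example_value": "Ada",
    },
}

print(data_dictionary["Year of Birth"]["data_type"])  # integer
```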
Data Governance
Data governance is the process of managing the availability, integrity, and usability of data in enterprise systems. Proper data governance ensures the data is consistent, trustworthy, and secured. We are in a time when organizations must pay close attention to privacy regulations and increasingly need to rely on data analytics to optimize decision-making and operations. Data governance can be applied at both the micro and macro levels. At the micro level, the focus is on the individual organization ensuring high data quality throughout the data lifecycle to achieve optimal business objectives. The macro level refers to cross-border data flows between countries, which is called international data governance.
4.11: Knowledge Management
We end the chapter with a discussion on the concept of knowledge management (KM). All companies accumulate knowledge over the course of their existence. Some of this knowledge is written down or saved, but not in an organized fashion. Much of this knowledge is not written down; instead, it is stored inside its employees' heads. Knowledge management is the process of formalizing the capture, indexing, and storing of the company’s knowledge to benefit from the experiences and insights that the company has captured during its existence.
Privacy Concerns
The increasing power of data mining has caused concerns for many, especially in the area of privacy. It is becoming easier in today’s digital world than ever to take data from disparate sources and combine them to do new forms of analysis. In fact, a whole industry has sprung up around this technology: data brokers. These firms combine publicly accessible data with information obtained from the government and other sources to create vast warehouses of data about people and companies that they can then sell. This subject will be covered in detail in chapter 12 – the chapter on the ethical concerns of information systems.
4.12: Sidebar- What is data science
Data science takes structured and unstructured data and uses scientific methods, processes, algorithms, and systems to extract knowledge and insight. It begins by procuring data from many sources, such as web servers, logs, databases, APIs (application program interfaces), and online repositories. Once the data has been acquired, it must be cleaned and moved through a data pipeline; this is done by sorting and organizing relevant and usable data in the transformation process. Data modeling is next; the goal is to create the model that best suits the company's needs when using the data. This can be done using metrics, algorithms, and analytics, with the further goal of progressing to AI and deep learning or machine learning. In short, data science solves company problems using data.
• Structured Data - data that is found in a fixed field within a record or file. It includes data contained in relational databases and spreadsheets, such as:
• Date
• Time
• Census Data
• Facebook “Likes”
• Unstructured Data - information that is not organized and does not have a pre-defined model, such as:
• Body of emails
• Tweets
• Facebook Status
• Video Transcripts
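The practical difference can be sketched in a few lines: a structured record can be queried by field directly, while unstructured text must first be processed (the sample data below is invented purely for illustration):

```python
# Structured data fits fixed, typed fields; unstructured data does not.
structured_record = {"date": "01/10/2013", "time": "14:30", "likes": 42}
unstructured_text = "Had a great time at the club fair today!"

# A structured field can be read or queried directly...
likes = structured_record["likes"]

# ...while unstructured text must first be processed
# (here, a deliberately crude keyword check):
mentions_club = "club" in unstructured_text.lower()

print(likes, mentions_club)  # 42 True
```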
What is data analytics?
Data analytics takes raw data gathered through data mining and analyzes the information to uncover relationships and patterns, providing insight into the data when using it. Companies use these analytics to optimize problem-solving and assist in decision-making. The information is helpful for understanding who your consumers are as well as for marketing your company or product, all of which helps create efficiency and streamline operations. Because data is continuously collected, analyses can be adjusted as new information arrives. Today's data analytics are deeper, larger in abundance, and retrieved more quickly than in the past, and the information is more accurate and detailed, which accelerates successful problem-solving.
Business Intelligence and Business Analytics
With tools such as data warehousing and data mining at their disposal, businesses are learning how to use the information they gather to their advantage. The term business intelligence describes the processes organizations use to take the data they are collecting and analyze it to obtain a competitive advantage. Besides using data from their internal databases, firms often purchase information from data brokers to get a big-picture understanding of their industries. Business analytics is the term used to describe the use of internal company data to improve business processes and practices.
Summary
In this chapter, we learned about the role that data and databases play in the context of information systems. Data is made up of small facts and information without context. If you give data context, then you have information. Knowledge is gained when information is consumed and used for decision-making. A database is an organized collection of related information. Relational databases are the most widely used type of database, where data is structured into tables, and all tables must be related to each other through unique identifiers. A database management system (DBMS) is a software application used to create and manage databases and can take the form of a personal DBMS, used by one person or small business, or an enterprise DBMS that can be used by multiple users. A data warehouse is a special form of database that takes data from other databases in an enterprise and organizes it for analysis. Data mining is the process of looking for patterns and relationships in large data sets. Many businesses use databases, big data, data warehouses, and data-mining techniques to produce business intelligence and gain a competitive advantage.
4.14: Study Questions
1. What is the difference between data, information, and knowledge?
2. Explain in your own words the difference between hardware and software components of information systems.
3. What is the difference between quantitative data and qualitative data? In what situations could the number 63 be considered qualitative data?
4. What are the characteristics of a relational database?
5. When would using a personal DBMS make sense?
6. What is the difference between a spreadsheet and a database? List three differences between them.
7. Describe what the term normalization means.
8. What is Big Data?
9. Name a database you interact with frequently. What would some of the field names be?
10. Describe what an open-source database is and its benefits.
11. Name three advantages of using a data warehouse.
12. What is data mining?
Exercises
1. Review the design of the Student Clubs database earlier in this chapter. Reviewing the lists of data types given, what data types would you assign to each of the fields in each of the tables? What lengths would you assign to the text fields?
2. Review structured and unstructured data and list five reasons to use each.
3. Using Microsoft Access, download the database file of comprehensive baseball statistics from the website SeanLahman.com. (If you don’t have Microsoft Access, you can download an abridged version of the file here that is compatible with Apache Open Office.) Review the structure of the tables included in the database. Come up with three different data-mining experiments you would like to try, and explain which fields in which tables would have to be analyzed.
4. Do some original research and find two examples of data mining. Summarize each example and then write about what the two examples have in common.
5. Conduct some independent research on the process of business intelligence. Using at least two scholarly or practitioner sources, write a two-page paper giving examples of how business intelligence is being used.
6. Conduct some independent research on the latest technologies being used for knowledge management. Using at least two scholarly or practitioner sources, write a two-page paper giving examples of software applications or new technologies being used in this field.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
Today’s computing and smart devices are expected to be always-connected devices that support the way we learn, communicate, do business, work, and play, in any place, on any device, and at any time. In this chapter, we review the history of networking, how the Internet works, and the use of multiple networks in organizations today.
• 5.1: Introduction to Networking and Communication
The way we communicate has affected every important aspect of our lives and the world on a broad scale. Education, business, politics, and more are all heavily dependent on the internet to communicate effectively.
• 5.2: A Brief History of the Internet
This chapter presents a brief history of the Internet and the platform of information systems upon which our social and commercial connections increasingly depend.
• 5.3: Networking Today
The evolution of the world we live in has been drastically impacted by the internet. Most of us cannot imagine living without social media, texting, online shopping, and more. In this chapter, we discuss networking today.
• 5.4: How has the Human Network Influenced you?
Online communication has changed our lives, and one important aspect is education. Online learning has altered the education system by providing students more opportunities and not limiting them to local institutions to receive an education.
• 5.5: Providing Resources in a Network
Networks connect various devices in our homes, offices, schools, etc. Many devices could be simultaneously connected to the same network, such as a printer, a laptop, a smartphone, and an iPad.
• 5.6: LANs, WANs, and the Internet
Devices and media are the hardware of the network. The messages being sent and received from one device to another are the software, and LANs and WANs connect the two devices to facilitate sending the message from the sender to the recipient.
• 5.7: Network Representations
Just as abbreviations are used for people's names, school names, and so on, they apply to networks as well. Network representations are symbols utilized to represent the different hardware and connections that make up a network.
• 5.8: The Internet, Intranets, and Extranets
The internet is made up of many interconnected networks. LANs are connected to each other through a WAN.
• 5.9: Internet Connections
There are various ways to connect to the internet, including dial-up, cable, satellite, cellular, and DSL.
• 5.10: The Network as a Platform Converged Networks
There are different types of networks, converging networks, and separate networks. Separate networks do not allow different devices connected to different networks to communicate because they aren't interconnected. However, converged networks are built to convey data among various devices connected to the same network.
• 5.11: Reliable Network
Networks support various applications and services over a shared physical infrastructure. To do so, the underlying network architecture needs to deliver four fundamental qualities: fault tolerance, scalability, Quality of Service (QoS), and security.
• 5.12: The Changing Network Environment Network Trends
Technology is constantly evolving, and new network trends influence organizations and consumers.
• 5.13: Technology Trends in the Home
Networking trends in the home provide more convenient and user-friendly services, such as smart home technology, which interconnects everyday household appliances and devices.
• 5.14: Network Security
Like any other aspect of life, the internet has its downsides, and chief among them is ensuring network safety and security. Ensuring a network is secure requires technologies, protocols, devices, tools, and techniques that keep data secure and mitigate threat vectors.
• 5.15: Summary
• 5.16: Study Questions
05: Networking and Communication
We are at a fundamental turning point, with many innovations expanding our capacity to communicate. The globalization of the Web has happened faster than anyone envisioned, and the way social, commercial, political, and individual interactions occur is quickly changing to keep up with the advancement of this worldwide network. Innovators will utilize the Web as a starting point for their efforts, creating new products and services specifically designed to take advantage of network capabilities. As designers push the limits of what is possible, the capabilities of the interconnected systems that form the Web will play a growing part in these projects' success.
This chapter presents a brief history of the Internet and the platform of information systems upon which our social and commercial connections increasingly depend. The material lays the foundation for investigating the services, technologies, and issues encountered by network professionals as they design, build, and maintain the modern network.
In the Beginning: ARPANET
The story of the Internet and networking can be traced back to the late 1950s. The US was in the Cold War's depths with the USSR, and each nation closely watched the other to determine which would gain a military or intelligence advantage. In 1957, the Soviets surprised the US with the launch of Sputnik, propelling us into the space age. In response to Sputnik, the US Government created the Advanced Research Projects Agency (ARPA), whose initial role was to ensure that the US was not surprised again. From ARPA, now called DARPA (Defense Advanced Research Projects Agency), the Internet first sprang. ARPA was the center of computing research in the 1960s, but there was just one problem: many computers could not talk to each other. In 1968, ARPA sent out a request for a communication technology proposal that would allow different computers located around the country to be integrated into one network. Twelve companies responded to the request, and a company named Bolt, Beranek, and Newman (BBN) won the contract and developed the first protocol for the network (Roberts, 1978). They began work right away and completed the job just one year later: in September 1969, the ARPANET was turned on. The first four nodes were at UCLA, Stanford, MIT, and the University of Utah.
The Internet and the World Wide Web
Over the next decade, the ARPANET grew and gained popularity. During this time, other networks also came into existence. Different organizations were connected to different networks. This led to a problem: the networks could not talk to each other. Each network used its own proprietary language, or protocol (see sidebar for the definition of protocol), to send information back and forth. This problem was solved with the transmission control protocol/Internet protocol (TCP/IP). TCP/IP was designed to allow networks running on different protocols to have an intermediary protocol through which they could communicate. As long as a network supported TCP/IP, it could communicate with all other networks running TCP/IP. TCP/IP quickly became the standard protocol and allowed networks to communicate with each other. The term Internet, meaning "an interconnected network of networks," came from this breakthrough.
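The idea that any two hosts speaking TCP/IP can exchange data can be sketched with Python's standard `socket` module. This is a minimal illustration, not part of the original text; the port number (50007) and messages are arbitrary choices, and both endpoints run on one machine over the loopback address.

```python
import socket
import threading

ready = threading.Event()

def server():
    """A tiny TCP service: receive bytes from a client, echo them back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 50007))  # arbitrary port for this sketch
        s.listen(1)
        ready.set()                   # signal that the server is accepting
        conn, _ = s.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

t = threading.Thread(target=server)
t.start()
ready.wait()

# The client side: open a TCP connection, send data, read the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
    c.connect(("127.0.0.1", 50007))
    c.sendall(b"hello")
    reply = c.recv(1024)
t.join()
print(reply.decode())  # echo: hello
```

The same pattern works unchanged whether the two endpoints sit on one machine or on different networks, which is exactly the interoperability TCP/IP provided.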
As we moved into the 1980s, computers were added to the Internet at an increasing rate. These computers were primarily from government, academic, and research organizations. Much to the engineers' surprise, the early popularity of the Internet was driven by the use of electronic mail (see sidebar below). Using the Internet in these early days was not easy. To access information on another server, you had to know how to type in the commands necessary to access it and know the name of that device. That all changed in 1990 when Tim Berners-Lee introduced his World Wide Web project, which provided an easy way to navigate the Internet through the use of linked text (hypertext). The World Wide Web gained even more steam with the release of the Mosaic browser in 1993, which allowed graphics and text to be combined to present information and navigate the Internet. The Mosaic browser took off in popularity and was soon superseded by Netscape Navigator, the first commercial web browser, in 1994. The chart below shows the growth in internet users globally.
According to the International Telecommunication Union (ITU, 2020), over 53.6% of the world's population (about 4.1 billion people) were using the internet by the end of 2019.
The Internet has evolved from Web 1.0 to 2.0 (discussed in Chapter 1) to the many popular social media websites today.
Sidebar: “Killer” Apps for the Internet
When the personal computer was created, it was a great little toy for technology hobbyists and armchair programmers. As soon as the spreadsheet was invented, businesses took notice, and the rest is history. The spreadsheet was the killer app for the personal computer: people bought PCs to run spreadsheets.
The Internet was originally designed as a way for scientists and researchers to share information and computing power among themselves. However, as soon as electronic mail was invented, it began driving demand for the Internet.
We are seeing this again today with social networks, such as Facebook and Instagram. Many who weren't convinced to have an online presence now feel left out without a social media account.
These killer apps and widespread adoption of the internet have driven explosive growth for information systems globally.
Sidebar: The Internet and the World Wide Web Are Not the Same Things
Many times, the terms “Internet” and “World Wide Web,” or even just “the web,” are used interchangeably. However, they are not the same thing at all!
The Internet is an interconnected network of networks. Many services run across the Internet: electronic mail, voice and video, file transfers, and, yes, the World Wide Web. The World Wide Web is simply one piece of the Internet. It is made up of web servers with HTML pages being viewed on devices with web browsers.
Networks in Our Daily Lives
Among all the essentials for human existence, the need to interact with others ranks just below our need to sustain life. Communication is almost as important to us as our reliance on air, water, food, and shelter.
Today, networks enable people to connect from anywhere. Individuals can communicate and collaborate instantly with others. News, ideas, and discoveries are shared with the world in seconds. People can even connect and play with others across oceans and continents, from wherever they happen to be.
Technology Then and Now
Imagine a world without the Internet. No Google, YouTube, texting, Facebook, Wikipedia, online gaming, Netflix, or iTunes, and no easy access to current information. No social media, no avoiding lines by shopping online, no instantly looking up phone numbers or directions at the click of a button. How different would our lives be without all of this? Yet that was the world we lived in only 15 to 20 years ago, as discussed in Chapter 1. Over the years, information systems have gradually expanded and been repurposed to improve quality of life for people everywhere.
No Boundaries
Advances in networking technologies are perhaps the most significant changes in the world today. They help create a world in which national borders, geographic distances, and physical limitations become less relevant, presenting ever-diminishing obstacles.
Cisco Systems Inc. refers to this as the human network. The human network centers on the impact of the Internet and networks on people and organizations.
5.04: How Has the Human Network Influenced You?
Networks Support the Way We Learn
Networks have changed how we learn. Access to high-quality instruction is no longer restricted to students living near where that instruction is delivered.
Online distance learning has removed geographic barriers and improved opportunities for students. Robust and reliable networks support and enhance student learning experiences. They deliver learning material in a wide range of formats, including interactive exercises, assessments, and feedback.
Networks Support the Way We Communicate
The globalization of the Internet has introduced new forms of communication that empower people to create information that a worldwide audience can access.
A few types of communication include:
• Messaging: Texting enables instant real-time communication between two or more people. WhatsApp and Skype are examples of messaging tools that have gained enormous popularity.
• Social media: Social media consists of interactive websites where individuals and communities create and share user-generated content with friends, family, peers, and the world. Facebook, Twitter, and LinkedIn are among the largest social media platforms today.
• Collaboration tools: Without the limitations of location or time zone, collaboration tools allow people to communicate with one another, often over real-time interactive video. The broad distribution of information systems means that people in remote locations can contribute on an equal basis with people in heavily populated areas. An example is online gaming, where many players are connected to the same server.
• Blogs: Blogs, a shortened form of "weblogs," are web pages for personal publishing. In contrast to commercial websites, blogs give anyone a way to share their thoughts with a worldwide audience without technical knowledge of web design.
• Wikis: Wikis are web pages that groups of people can edit and view together. Whereas a blog is usually written by one individual, much like a personal journal, a wiki collects contributions from many people and may therefore be subject to broader review and editing. Many organizations use wikis as their internal collaboration tool.
• Podcasting: Podcasting allows people to deliver their audio recordings to a wide audience. The audio file is placed on a website (or blog or wiki) where others can download it and play the recording on their computers, laptops, and other mobile devices.
• Peer-to-Peer (P2P) File Sharing: Peer-to-peer file sharing allows people to share files with one another without storing them on and downloading them from a central server. A user joins the P2P network simply by installing the P2P software. Not everyone has embraced P2P file sharing, however; many people are concerned about violating the laws governing copyrighted materials.
• Napster, released in 1999, was the first generation of P2P systems. Some well-known P2P systems are Xunlei, BitTorrent, and Gnutella.
Networks Support the Way We Work
In the business world, information systems were first used by organizations for internal purposes, such as managing financial data, customer data, and employee payroll. These business networks evolved to enable the transmission of many kinds of information services, including email, video, messaging, and telephony.
Networks are increasingly used to train workers for greater effectiveness and efficiency. Online learning opportunities can reduce time-consuming and expensive travel while still ensuring that all employees are adequately trained to perform their jobs in a safe and productive way.
Networks Support the Way We Play
The Internet is also used for traditional forms of entertainment. We listen to music, watch movies, read entire books, and download material for future offline access. Live sports and concerts can be experienced as they happen, or recorded and viewed on demand.
Networked systems enable new forms of entertainment, such as online games. Online multiplayer games have become very popular because they allow friends to play together virtually when they cannot meet in person.
Even offline activities are enhanced by network collaboration services. Worldwide communities of people with shared interests can form quickly. We share common experiences and hobbies well beyond our local neighborhood, city, or region. Sports fans share opinions and facts about their favorite teams. Collectors display prized collections and get expert feedback about them.
Whatever form of entertainment we enjoy, networks are improving our experience. How do you play on the Internet?
Networks of Many Sizes
Networks come in all sizes, ranging from simple networks consisting of two PCs to networks connecting many thousands of devices.
Simple networks installed in homes enable the sharing of resources, such as printers, documents, pictures, and music, among a few local computers.
Internet users worldwide expect to stay connected at all times. They expect their connected devices to do the following:
• Stay connected to the internet to complete their work.
• Have the ability to send and receive data fast.
• Have the ability to send small and large quantities of data globally via any device connected to the internet.
Home office and small office networks are often set up by people who work from home or remote offices and need to connect to a corporate network or other centralized resources. In addition, many self-employed entrepreneurs use home office and small office networks to advertise and sell products, order supplies, and communicate with customers.
The Internet is the largest network in existence. Indeed, the term Internet means a network of networks: the worldwide network that connects millions of computers around the world. Through the internet, a computer can connect to another computer in a different country.
Clients and Servers
All computers connected to a network are called hosts. Hosts are also known as end devices.
Servers are computers with software that enables them to provide information, such as email or web pages, to other devices on the network, called clients. Each service requires separate server software. For instance, a server requires web server software to provide web services to the network. A computer with server software can provide services simultaneously to one or many clients. Furthermore, a single computer can run multiple types of server software; in a home or small business, it may be necessary for one computer to act as a file server, a web server, and an email server.
Clients are computers with software installed that enables them to request and display the information obtained from the server. An example of client software is a web browser, such as Chrome or Firefox. A single computer can also run multiple types of client software. For example, a user can check email and view a web page while instant messaging and listening to Internet radio.
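As a sketch (not part of the original text), one machine can play both roles at once using Python's standard library: `http.server` provides the server software and `urllib` acts as the client, just as a browser would. The page content and port choice are illustrative.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    """Server software: answers every GET request with a small HTML page."""
    def do_GET(self):
        body = b"<html><body>Hello from the server</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

httpd = HTTPServer(("127.0.0.1", 0), Hello)   # port 0 = pick any free port
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# Client software: request the page, as a browser would.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
    page = resp.read()
httpd.shutdown()
print(page.decode())
```

Running server and client on one computer mirrors the point above: a single host can provide services and consume them at the same time.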
Peer-to-Peer
Client and server software usually run on separate computers, but it is also possible for one computer to fill both roles at the same time. In small businesses and homes, many hosts function as both servers and clients on the network. This type of network is called a peer-to-peer network. An example would be several users connecting to the same shared printer from their individual devices.
5.06: LANs, WANs, and the Internet
Overview of Network Components
The link between the sender and the receiver can be as simple as a single cable connecting the two devices, or as sophisticated as a set of switches and routers between them.
The network infrastructure contains three categories of network components:
• Devices
• Media
• Services
Devices and media are the physical components, or hardware, of the network. Hardware is typically the visible part of the network platform, such as a PC, a switch, a wireless access point, or the cabling used to connect the devices.
Services include many of the common network applications people use every day, such as email hosting services and web hosting services. Processes provide the functionality that directs and moves messages through the network. Processes are less obvious to us, but they are critical to the operation of networks.
End Devices
An end device is either the source or the destination of a message transmitted over the network. Each end device is identified by an IP address and a physical address, and both are needed to communicate over a network. IP addresses are unique logical addresses assigned to every device within a network. If a device moves from one network to another, its IP address has to change.
Physical addresses, also known as MAC (Media Access Control) addresses, are unique addresses assigned by the device manufacturers. These addresses are permanently burned into the hardware.
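Both kinds of address can be inspected with Python's standard library. This is a small sketch, not part of the original text; the loopback fallback covers the case where the machine's hostname cannot be resolved.

```python
import socket
import uuid

# Logical (IP) address: assigned per network, so it changes when the
# device moves to a different network.
try:
    ip = socket.gethostbyname(socket.gethostname())
except OSError:
    ip = "127.0.0.1"  # fallback when the hostname cannot be resolved

# Physical (MAC) address: uuid.getnode() returns the 48-bit value as an
# integer, which we format as six colon-separated hex bytes.
mac_int = uuid.getnode()
mac = ":".join(f"{(mac_int >> shift) & 0xff:02x}" for shift in range(40, -8, -8))

print(f"IP address:  {ip}")
print(f"MAC address: {mac}")
```

Note that `uuid.getnode()` may return a random stand-in value on systems where no hardware address is available, but the 48-bit format is the same.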
Intermediary Network Devices
Some devices act as intermediaries between end devices. These are called intermediary devices. Intermediary devices provide connectivity and ensure that data flows across the network.
Routers use the destination end device's address, together with information about the network interconnections, to determine the path that messages should take through the network.
Network Media
Network media carry the data as it is transported. The media provide the channel over which the message travels from source to destination.
Modern networks primarily use three types of media to interconnect devices and provide the pathway over which data can be transmitted.
These media are:
• Metallic wires within cables (copper) - data is encoded into electrical impulses.
• Glass or plastic fibers (fiber-optic cable) - data is encoded as pulses of light.
• Wireless transmission - data is encoded using frequencies from the electromagnetic spectrum.
Different types of network media have different features and benefits. Not all network media have the same characteristics, nor are they all appropriate for the same purpose.
Bluetooth
While Bluetooth is not generally used to connect a device to the Internet, it is an important wireless technology that has enabled many functionalities that are used every day. When created in 1994 by Ericsson, it was intended to replace wired connections between devices. Today, it is the standard method for connecting nearby devices wirelessly. Bluetooth has a range of approximately 300 feet and consumes very little power, making it an excellent choice for various purposes.
Some applications of Bluetooth include: connecting a printer to a personal computer, connecting a mobile phone and headset, connecting a wireless keyboard and mouse to a computer, and connecting a remote for a presentation made on a personal computer.
To draw a diagram of a network, network professionals use symbols to represent the different devices and connections that make up a network.
A diagram provides an easy way to see how devices in a large network are connected. This kind of "picture" of a network is known as a topology diagram. The ability to recognize the logical representations of the physical networking components is essential for visualizing the organization and operation of a network.
In addition to these representations, specialized terminology is used when discussing how these devices and media connect to one another. Important terms to remember are:
• Network Interface Card: A NIC, or LAN adapter, provides the physical connection to the network for the PC or other end device. The media connecting the PC to the networking device plug directly into the NIC.
• Physical Port: A connector or outlet on a networking device where the media connect to an end device or another networking device.
• Interface: Specialized ports on a networking device that connect to individual networks. Because routers are used to interconnect networks, the ports on a router are referred to as network interfaces.
Topology Diagrams
Understanding topology diagrams is required for anybody working with a network. They provide a visual map of how the network is connected.
There are two types of topology diagram:
• Physical topology diagrams identify the physical location of intermediary devices and cable installations.
• Logical topology diagrams identify devices, addressing schemes, and ports.
The physical topology is largely self-explanatory: it shows how devices are physically interconnected with cables and wires. The logical topology shows how the connected devices appear to the user.
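A logical topology can also be captured in code. In this sketch (the device names and links are hypothetical examples, not from the original text), each device's links are recorded in a Python dictionary, and a simple walk over the links finds every device reachable from a starting point:

```python
# A logical topology as an adjacency list: device -> directly linked devices.
topology = {
    "Router1": ["Switch1", "Switch2"],
    "Switch1": ["PC-A", "PC-B", "Router1"],
    "Switch2": ["Printer", "Router1"],
    "PC-A":    ["Switch1"],
    "PC-B":    ["Switch1"],
    "Printer": ["Switch2"],
}

def reachable(start):
    """Walk the links to find every device connected to `start`."""
    seen, stack = set(), [start]
    while stack:
        device = stack.pop()
        if device not in seen:
            seen.add(device)
            stack.extend(topology[device])
    return seen

print(sorted(reachable("PC-A")))
```

Because every device in this example is linked (directly or through intermediaries) to every other, `reachable` returns the whole network from any starting device.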
Types of Networks
Network infrastructures can vary greatly in terms of:
• Size of the area covered
• Number of users connected
• Number and types of services available
• Area of responsibility
The two most common types of network infrastructure are:
• Local Area Network (LAN): A network infrastructure that provides access to users and end devices in a small geographical area, typically an enterprise, home, or small business network owned and managed by an individual or an IT department.
• Wide Area Network (WAN): A network infrastructure that provides access to other networks over a wide geographical area, typically owned and managed by a telecommunications service provider.
Different kinds of networks include:
• Metropolitan Area Network (MAN): A network infrastructure that spans a physical area larger than a LAN but smaller than a WAN (e.g., a city). MANs are typically operated by a single entity, such as a large organization.
• Wireless LAN (WLAN): Similar to a LAN, but it wirelessly interconnects users and endpoints in a small geographical area.
• Storage Area Network (SAN): A network infrastructure designed to support file servers and provide data storage, retrieval, and replication.
Local Area Networks
LANs are network infrastructures that span a small geographical area. Specific features of LANs include:
• LANs interconnect end devices in a limited area, such as a home, school, office building, or campus.
• A LAN is usually administered by a single organization or individual. The administrative control that governs the security and access control policies is enforced at the network level.
• LANs provide high-speed bandwidth to internal end devices and intermediary devices.
Wide Area Networks
WANs are network infrastructures that span a wide geographical area. WANs are typically managed by service providers (SPs) or Internet Service Providers (ISPs).
Specific features of WANs include:
• WANs interconnect LANs over wide geographical areas, such as between cities, states, provinces, countries, or continents.
• WANs are usually administered by multiple service providers.
• WANs typically provide slower-speed links between LANs.
The Internet
The Internet is a worldwide collection of interconnected networks (internetworks, or internet for short).
LANs are connected to one another through WAN connections, and WANs are in turn connected to each other. The WAN connection lines represent the variety of ways we interconnect networks: WANs can connect through copper wires, fiber-optic cables, and wireless transmissions.
No individual or group owns the Internet. Ensuring effective communication across this diverse infrastructure requires the application of consistent, widely recognized technologies and standards, as well as the cooperation of many network administration organizations. Some organizations have been established to maintain the structure and standardization of Internet protocols and processes. These organizations include the Internet Corporation for Assigned Names and Numbers (ICANN), among many others.
Have you ever wondered how your smartphone can function the way it does? Have you ever wondered how you can search for information on the web and find it within milliseconds? The world’s largest implementation of client/server computing and internetworking is the Internet.
The internet is also the most extensive public system of communication. The internet began in the 20th century, initially as a network for the U.S. Department of Defense to connect university professors and scientists around the world. Most small businesses and homes have access to the internet by subscribing to an internet service provider (ISP), a commercial organization with a permanent connection to the internet that sells temporary connections to retail subscribers; examples include AT&T, NetZero, and T-Mobile. A DSL (digital subscriber line) operates over existing telephone lines to carry data, voice, and video at various transmission rates. The base of the internet is the TCP/IP networking protocol suite. When two users on the internet exchange messages, each message is decomposed into packets using the TCP/IP protocol.
Have you ever wondered what happens when you type a URL into the browser and press Enter? First, the browser checks its cache for a DNS record to find the corresponding IP address for the domain. If the address is not in the cache, the ISP's (Internet Service Provider's) DNS server performs a DNS query to find the IP address of the server that hosts the website. The browser then opens a TCP connection with that server and sends an HTTP request. The server handles the request and sends an HTTP response back, and finally the browser renders the HTML content. For example, www.wikipedia.org corresponds to a specific IP address; the DNS contains a list of domain names together with their IP addresses.
The DNS (Domain Name System) translates domain names into IP addresses. The domain name is the human-readable name of a site, while the corresponding IPv4 address is a unique 32-bit number. To access a computer on the internet, users only need to specify the domain name.
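The lookup chain described above can be mimicked in a toy simulation. Everything here is a stand-in rather than a real resolver: the browser cache and DNS table are plain dictionaries, and the IP address is a reserved TEST-NET value used only for documentation.

```python
# Toy simulation of: check browser cache -> query DNS -> build HTTP request.
browser_cache = {}                                   # step 1: local DNS cache
dns_records = {"www.wikipedia.org": "203.0.113.7"}   # stand-in DNS server

def resolve(domain):
    """Return the IP for a domain, consulting the cache before 'DNS'."""
    if domain in browser_cache:          # cache hit: skip the DNS query
        return browser_cache[domain]
    ip = dns_records[domain]             # cache miss: ask the DNS server
    browser_cache[domain] = ip
    return ip

def fetch(url):
    """Resolve the domain in a URL and build the HTTP request to send."""
    domain = url.split("//")[1].split("/")[0]
    ip = resolve(domain)                 # DNS lookup
    request = f"GET / HTTP/1.1\r\nHost: {domain}\r\n\r\n"
    return ip, request                   # request would go to this IP over TCP

ip, request = fetch("http://www.wikipedia.org/")
print(ip)                        # 203.0.113.7
print(request.splitlines()[0])   # GET / HTTP/1.1
```

A second call to `resolve` for the same domain returns straight from the cache, which is why repeat visits to a site skip the DNS query.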
Intranets and Extranets
There are two other terms similar to the term Internet: intranets and extranets.
Intranet is a term often used to describe a private collection of LANs and WANs that belongs to an organization. It is designed to be accessible only to authorized members, employees, or others within that organization.
An extranet describes the case in which an organization wants to give secure, safe access to individuals who work for a different organization but need access to the organization's data. Examples of extranets include:
• A company providing access to outside suppliers and contractors.
• A hospital providing a booking system to doctors so they can make appointments for their patients.
• A local office of education providing budget and staff information to the schools in its district.
Internet Access Technologies
There is a wide range of ways to connect users and organizations to the Internet.
Home users, telecommuters, and small offices typically require a connection to an Internet Service Provider (ISP) to access the Internet. Connection options vary greatly among ISPs and geographical areas. Common options include broadband cable, broadband digital subscriber line (DSL), wireless WANs, and mobile services.
Organizations typically require access to other corporate sites as well as to the Internet. Fast connections are required to support business services, including IP telephony, video conferencing, and data center storage.
Business-class interconnections are usually provided by service providers (SPs). Popular business-class services include business DSL, leased lines, and Metro Ethernet.
Home and Small Office Internet Connections
Common connection options for small office and home office users:
• Cable: Typically offered by cable television service providers, the Internet data signal is carried on the same cable that delivers cable TV. It provides a high-bandwidth, always-on connection to the Internet.
• DSL: Digital Subscriber Line provides a high-bandwidth, always-on connection to the Internet. DSL runs over a telephone line. In general, small office and home office users connect using Asymmetric DSL (ADSL), which means that the download speed is faster than the upload speed.
• Cellular: Cellular Internet access uses a cell phone network to connect. Wherever you can get a cellular signal, you can get cellular Internet. Performance is limited by the capabilities of the phone and of the cell tower to which it is connected. 4G, the fourth generation of broadband cellular network technology, is what most people are familiar with because it is used on smartphones. 5G is its upcoming successor and is expected to be up to 100 times faster than 4G, with the ability to transmit far more data at much higher speeds.
• Satellite: Internet access through satellite is a real benefit in areas that would otherwise have no Internet connectivity at all. Satellite dishes require a clear line of sight to the satellite.
• Dial-up telephone: An inexpensive option that uses any telephone line and a modem. The low bandwidth provided by a dial-up modem connection is usually not sufficient for large data transfers. However, it is still a useful option where alternatives are unavailable, such as in rural or remote locations where telephone lines are the only means of communication.
Fiber-optic connections are increasingly available to homes and small businesses. Fiber enables an ISP to provide higher bandwidth and support more services, such as Internet, telephone, and TV.
Business Internet Connections
Corporate connection options differ from home user options. Businesses may require higher bandwidth, dedicated bandwidth, and managed services. Business connection options include:
• Dedicated Leased Line: Leased lines are reserved circuits within the service provider's network that connect geographically separated offices for private voice and/or data networking. The circuits are typically rented at a monthly or yearly rate. They can be costly.
• Ethernet WAN: Ethernet WANs extend LAN access technology into the WAN. Ethernet is a LAN technology you will learn about in a later section; its benefits are now being extended into the WAN.
• DSL: Business DSL is available in various formats. A popular choice is Symmetric Digital Subscriber Line (SDSL), which is similar to the consumer version of DSL but provides uploads and downloads at the same speeds.
• Satellite: As with small office and home office users, satellite service can provide a connection when a wired solution is not available.
The choice of connection varies depending on geographical location and service provider availability.
Sidebar: An Internet Vocabulary Lesson
Networking communication is full of some very technical concepts based on some simple principles. Learn the terms below, and you will be able to hold your own in a conversation about the Internet.
• Packet: The fundamental unit of data transmitted over the Internet. When a device intends to send a message to another device (for example, your PC sends a request to YouTube to open a video), it breaks the message down into smaller pieces, called packets. Each packet has the sender’s address, the destination address, a sequence number, and a piece of the overall message to be sent.
• Hub: A simple network device that connects other devices to the network and sends packets to all the devices connected to it.
• Bridge: A network device that connects two networks and only allows packets through that are needed.
• Switch: A network device that connects multiple devices and filters packets based on their destination within the connected devices.
• Router: A device that receives and analyzes packets and then routes them towards their destination. In some cases, a router will send a packet to another router; it will send it directly to its destination in other cases.
• IP Address: Every device that communicates on the Internet, whether it be a personal computer, a tablet, a smartphone, or anything else, is assigned a unique identifying number called an IP (Internet Protocol) address. Historically, the IP-address standard used has been IPv4 (version 4), which has the format of four numbers between 0 and 255 separated by a period. For example, the domain Saylor.org has an IP address of 107.23.196.166. The IPv4 standard has a limit of 4,294,967,296 possible addresses. As the use of the Internet has proliferated, the number of IP addresses needed has grown to the point where IPv4 addresses will be exhausted. This has led to the new IPv6 standard, which is currently being phased in. The IPv6 standard is formatted as eight groups of four hexadecimal digits, such as 2001:0db8:85a3:0042:1000:8a2e:0370:7334. The IPv6 standard has a limit of 3.4×10^38 possible addresses. For more detail about the new IPv6 standard, see this Wikipedia article.
• Domain name: If you had to try to remember the IP address of every web server you wanted to access, the Internet would not be nearly as easy to use. A domain name is a human-friendly name for a device on the Internet. These names generally consist of a descriptive text followed by the top-level domain (TLD). For example, Wikipedia's domain name is Wikipedia.org; Wikipedia describes the organization, and .org is the top-level domain. In this case, the .org TLD is designed for nonprofit organizations. Other well-known TLDs include .com , .net , and .gov . For a complete list and description of domain names, see this Wikipedia article.
• DNS: DNS stands for “domain name system,” which acts as the directory on the Internet. A DNS server is queried when a request to access a device with a domain name is given. It returns the IP address of the device requested, allowing for proper routing.
• Packet-switching: When a packet is sent from one device out over the Internet, it does not follow a straight path to its destination. Instead, it is passed from one router to another across the Internet until it reaches its destination. In fact, sometimes, two packets from the same message will take different routes! Sometimes, packets will arrive at their destination out of order. When this happens, the receiving device restores them to their proper order. For more details on packet switching, see this interactive web page.
• Protocol: In computer networking, a protocol is the set of rules that allow two (or more) devices to exchange information back and forth across the network.
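The IP address, domain name, and DNS entries above can be explored with Python's standard library. The sketch below parses the IPv4 and IPv6 example addresses from the text and performs a DNS lookup; the wikipedia.org lookup is commented out because it requires network access, and the exact address it returns varies.

```python
# Exploring IP addresses and DNS with the Python standard library.
import ipaddress
import socket

v4 = ipaddress.ip_address("107.23.196.166")  # IPv4 example from the text
v6 = ipaddress.ip_address("2001:0db8:85a3:0042:1000:8a2e:0370:7334")
print(v4.version, v6.version)  # 4 6

# The address-space sizes quoted above:
print(2 ** 32)            # 4294967296 possible IPv4 addresses
print(f"{2 ** 128:.1e}")  # 3.4e+38 possible IPv6 addresses

# DNS in action: "localhost" resolves without touching the network.
print(socket.gethostbyname("localhost"))  # 127.0.0.1
# With network access, a real domain resolves the same way:
# print(socket.gethostbyname("wikipedia.org"))
```

Running the lookup on a real domain returns whichever IP address the DNS server hands back, which is exactly the directory service described above.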
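The packet and packet-switching entries above can be sketched in a few lines: each packet carries the sender's address, the destination address, a sequence number, and a piece of the message; packets may arrive out of order, and the receiving device restores them to their proper order before rebuilding the message. The addresses and payloads here are illustrative.

```python
# A minimal sketch of packet reassembly on the receiving device.
from dataclasses import dataclass

@dataclass
class Packet:
    source: str
    destination: str
    seq: int      # sequence number: position within the original message
    payload: str  # this packet's piece of the message

def reassemble(packets):
    """Put packets back in sequence order and rebuild the message."""
    ordered = sorted(packets, key=lambda p: p.seq)
    return "".join(p.payload for p in ordered)

# Packets from one message, arriving over different routes, out of order:
arrived = [
    Packet("10.0.0.1", "10.0.0.2", 2, "world"),
    Packet("10.0.0.1", "10.0.0.2", 0, "hello"),
    Packet("10.0.0.1", "10.0.0.2", 1, ", "),
]
print(reassemble(arrived))  # hello, world
```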
Traditional Separate Networks
Consider a school built thirty years ago. Back then, some classrooms were cabled separately for the data network, the telephone network, and the video network for televisions, and these different networks could not communicate with one another.

Each network used different technologies to carry its communication signal, and each had its own set of rules and standards to ensure successful communication.
The Converging Network
Today, the separate data, telephone, and video networks are converging. Unlike traditional networks, converged networks can deliver data, voice, and video between many different types of devices over the same network infrastructure.

This network infrastructure uses the same set of rules, agreements, and implementation standards.
5.11: Reliable Network
Network Architecture
Networks must support a wide range of applications and services, and they must operate over the many types of cables and devices that make up the physical infrastructure. In this context, the term network architecture refers to the technologies that support the infrastructure, as well as the programmed services and rules, or protocols, that move data across the network.

As networks evolve, we are finding that there are four basic characteristics the underlying architectures need to deliver in order to meet user expectations:
• Fault Tolerance
• Scalability
• Quality of Service (QoS)
• Security
Fault Tolerance
Users expect the Internet to be available at all times to the millions who rely on it. This requires a network architecture built to tolerate faults. A fault-tolerant network limits the impact of a failure so that the fewest possible devices are affected, and it is built in a way that allows quick recovery when a failure does occur. Such networks depend on multiple paths between the source and destination of a message. If one path fails, the messages can be instantly sent over a different link. Having multiple paths to a destination is known as redundancy.

One way reliable networks provide redundancy is by implementing a packet-switched network. Packet switching splits traffic into packets that are routed over a shared network. A single message, such as an email or a video stream, is broken into multiple message blocks called packets. Each packet carries the addressing information of the source and destination of the message. Routers within the network switch the packets based on the condition of the network at that moment, which means all the packets in a single message could take very different paths to the destination.
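The idea of redundancy described above can be sketched in a few lines: a message can reach its destination over any of several paths, so the failure of one link does not prevent delivery. The node names, paths, and link states below are illustrative, not a real routing protocol.

```python
# A minimal sketch of fault tolerance through redundant paths.
def deliver(paths, failed_links):
    """Return the first path whose links are all up, or None if every
    path is broken."""
    for path in paths:
        links = list(zip(path, path[1:]))  # consecutive hops form links
        if all(link not in failed_links for link in links):
            return path
    return None

paths = [
    ["A", "B", "D"],  # preferred path from source A to destination D
    ["A", "C", "D"],  # alternate path (the redundancy)
]

print(deliver(paths, failed_links=set()))                       # ['A', 'B', 'D']
print(deliver(paths, failed_links={("A", "B")}))                # ['A', 'C', 'D']
print(deliver(paths, failed_links={("A", "B"), ("A", "C")}))    # None
```

When the preferred link A–B fails, the message is instantly sent over the alternate path through C; only when every path is broken does delivery fail.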
Scalability
A scalable network can expand quickly to support new users and applications without degrading the performance of the service being delivered to existing users.

A new network can easily be added to an existing network. Networks are also scalable because designers follow accepted protocols and standards, which allows software and hardware vendors to improve products and services without having to design a new set of rules for operating within the network.
Quality of Service
Quality of Service (QoS) is also an ever-increasing requirement of networks today. New applications available to users over internetworks, such as voice and live video transmissions, raise expectations for the quality of the delivered services. Have you ever tried to watch a video with constant breaks and pauses? As data, voice, and video content continue to converge onto the same network, QoS becomes a primary mechanism for managing congestion and ensuring reliable delivery of content to all users.

Congestion occurs when the demand for bandwidth exceeds the amount available. Network bandwidth is measured in the number of bits that can be transmitted in a single second, or bits per second (bps). When simultaneous communications are attempted across the network, the demand for bandwidth can exceed its availability, creating network congestion.

When the volume of traffic is greater than what can be transported across the network, devices queue, or hold, the packets in memory until resources become available to transmit them.

With a QoS policy in place, a router can manage the flow of data and voice traffic, giving priority to voice communications if the network experiences congestion.
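The priority behavior just described can be sketched with a priority queue: during congestion, queued voice packets are transmitted before ordinary data packets. The traffic classes and priority values below are illustrative, not a real QoS implementation.

```python
# A minimal sketch of QoS: voice traffic is dequeued before data traffic.
import heapq
import itertools

PRIORITY = {"voice": 0, "video": 1, "data": 2}  # illustrative classes

_order = itertools.count()  # breaks ties first-in, first-out within a class

def enqueue(queue, traffic_class, packet):
    heapq.heappush(queue, (PRIORITY[traffic_class], next(_order), packet))

def dequeue(queue):
    _, _, packet = heapq.heappop(queue)
    return packet

queue = []
enqueue(queue, "data", "email chunk")
enqueue(queue, "voice", "call sample 1")
enqueue(queue, "data", "file chunk")
enqueue(queue, "voice", "call sample 2")

# Voice packets leave the queue first, even though data arrived earlier:
print([dequeue(queue) for _ in range(4)])
# ['call sample 1', 'call sample 2', 'email chunk', 'file chunk']
```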
Security
The network infrastructure, network services, and the data contained on network-attached devices are crucial personal and business assets.

Two types of network security concerns must be addressed: network infrastructure security and information security.

Securing a network infrastructure includes physically securing the devices that provide network connectivity and preventing unauthorized access to the management software that resides on them.

Information security protects the data contained in the packets being transmitted over the network and the data stored on network-attached devices. To achieve the goals of network security, there are three primary requirements:

• Confidentiality: Data confidentiality means that only the intended and authorized recipients can access and read the data.
• Integrity: Data integrity provides assurance that the data has not been altered in transmission, from source to destination.
• Availability: Data availability means assurance of timely and reliable access to data services for authorized users.
New Trends
As new technologies and end-user devices come to market, businesses and consumers must continue to adjust to this ever-changing environment. The role of the network is changing to enable the connections between people, devices, and information. Several new networking trends will affect organizations and consumers. Some of the top trends include:
• Bring Your Own Device (BYOD)
• Video communications
• Online collaboration
• Cloud computing
Bring Your Own Device
The concept of any device, to any content, in any way, is a major global trend that requires significant changes to the way devices are used. This trend is known as Bring Your Own Device (BYOD).

BYOD is about end users having the freedom to use personal tools to access information and communicate across a business or campus network. With the growth of consumer devices, and the related drop in cost, employees and students can be expected to have some of the most advanced computing and networking tools for personal use. These personal tools include laptops, e-readers, tablets, and smartphones. They may be devices purchased by the company or school, purchased by the individual, or both.

BYOD means any device, with any ownership, used anywhere. For example, in the past, a student who needed to access the campus network or the Internet had to use one of the school's computers. These devices were typically limited and seen as tools only for work done in the classroom or in the library. Extended connectivity through mobile and remote access to the campus network gives students tremendous flexibility and opens more learning opportunities for the student.
Online Collaboration
People want to connect to the network not only for access to data applications, but also to collaborate with one another.

Collaboration is defined as "the act of working with another or others on a joint project." Collaboration tools give employees, students, teachers, customers, and partners a way to instantly connect, interact, and achieve their objectives.

For businesses, collaboration is a critical and strategic priority that organizations use to remain competitive. Collaboration is also a priority in education: students need to collaborate to assist each other in learning, to develop the team skills used in the workforce, and to work together on team-based projects.
Video Communication
Another trend in networking that is critical to the communication and collaboration effort is video. Video is being used for communications, collaboration, and entertainment. Video calls can be made to and from anywhere with an Internet connection.

Video conferencing is a powerful tool for communicating with others at a distance, both locally and globally. Video is becoming a basic requirement for effective collaboration as organizations extend across geographic and cultural boundaries.
Cloud Computing
Cloud computing is another global trend changing the way we access and store data. Cloud computing allows us to store personal files, and even back up an entire hard disk drive, on servers over the Internet. Applications such as word processing and photo editing can be accessed using the cloud.

For businesses, cloud computing extends IT's capabilities without requiring investment in new infrastructure, training new personnel, or licensing new software. These services are available on demand and delivered economically to any device in the world without compromising security or capacity.

There are four primary types of clouds: public clouds, private clouds, hybrid clouds, and custom clouds.

Cloud computing is possible because of data centers. A data center is a facility used to house computer systems and associated components. A data center can occupy one room of a building, one or more floors, or an entire building. Data centers are typically very expensive to build and maintain. For this reason, only large organizations use privately built data centers to house their data and provide services to users. Smaller organizations that cannot afford to maintain their own private data center can reduce the overall cost of ownership by leasing server and storage services from a larger data center organization in the cloud.
Networking trends are not only affecting the way we work and study; they are also changing nearly every aspect of the home.

The newest home trend is smart home technology: technology integrated into everyday appliances, allowing them to interconnect with other devices and making them more "smart" or automated. For instance, imagine being able to prepare a dish and place it in the oven for cooking before leaving the house for the day. Imagine if the oven was aware of the dish it was cooking and was connected to your "calendar of events," so it could determine what time you should be available to eat and adjust start times and length of cooking accordingly. It could even adjust cooking times and temperatures based on changes in your schedule. Additionally, a smartphone or tablet connection allows the user to connect to the oven directly to make any desired adjustments. When the dish is ready, the oven sends an alert message to a specified end-user device that the dish is done and warming.

This scenario is not far off. In fact, smart home technology is being developed for all rooms within a house. It will become more of a reality as home networking and high-speed Internet technology become more widespread. New home networking technologies are being developed daily to meet these kinds of growing technology needs.
Powerline Networking
Powerline networking is an emerging trend for home networking that uses existing electrical wiring to connect devices.

The concept of "no new wires" means the ability to connect a device to the network wherever there is an electrical outlet. This saves the cost of installing data cables and adds nothing to the electrical bill. Using the same wiring that delivers electricity, powerline networking sends information by transmitting data on certain frequencies.

Using a standard powerline adapter, devices can connect to the LAN wherever there is an electrical outlet. Powerline networking is especially useful when wireless access points cannot be used or cannot reach all the devices in the home. Powerline networking is not designed to be a substitute for dedicated cabling in data networks, but it is an alternative when data network cables or wireless communications are not a viable option.
Wireless Broadband
Connecting to the Internet is vital in smart home technology. DSL and cable are common technologies used to connect homes and small businesses to the Internet. However, wireless access may be another option in many areas.

Another wireless solution for the home and small businesses is wireless broadband. This uses the same cellular technology used to access the Internet with a smartphone or tablet. An antenna is installed outside the house, providing either wireless or wired connectivity for devices in the home. In many areas, home wireless broadband is competing directly with DSL and cable services.
Wireless Internet Service Provider (WISP)
A Wireless Internet Service Provider (WISP) is an ISP that connects subscribers to a designated access point or hotspot using wireless technologies similar to those found in home wireless local area networks (WLANs). WISPs are more commonly found in rural environments where DSL or cable services are not available.

Although a separate transmission tower may be installed for the antenna, the antenna is typically attached to an existing elevated structure, such as a water tower or a radio tower. A small dish or antenna is installed on the subscriber's roof within range of the WISP transmitter. The subscriber's access unit is connected to the wired network inside the home. From the home user's perspective, the setup is not much different from DSL or cable service. The main difference is that the connection from the home to the ISP is wireless instead of a physical cable.
Sidebar: Why Doesn’t My Cell Phone Work When I Travel Abroad?
As mobile phone technologies have evolved, providers in different countries have chosen different communication standards for their mobile phone networks. In the US, two competing standards exist: GSM (used by AT&T and T-Mobile) and CDMA (used by the other major carriers). Each standard has its pros and cons, but the bottom line is that phones using one standard cannot easily switch to the other.
In the US, this is not a big deal because mobile networks exist to support both standards. But when you travel to other countries, you will find that most of them use GSM networks, with the one big exception being Japan, which has standardized on CDMA. It is possible for a mobile phone using one type of network to switch to the other type of network by switching out the SIM card, which controls your access to the mobile network. However, this will not work in all cases. If you are traveling abroad, it is always best to consult with your mobile provider to determine the best way to access a mobile network.
Security Threats
Network security is an integral part of computer networking today, regardless of whether the network is limited to a home environment with a single connection to the Internet or as large as a corporation with thousands of users. The network security that is implemented must take into account the environment as well as the network's devices and requirements. It must be able to keep data secure while still allowing for the quality of service that is expected of the network.

Securing a network involves technologies, protocols, devices, tools, and techniques to keep data secure and mitigate threat vectors. Threat vectors may be external or internal. Many external network security threats today are spread over the Internet.
The most common external threats to networks include:
• Viruses, worms, and Trojan horses: malicious software and arbitrary code running on a user device
• Spyware and adware: software installed on a user device that secretly collects information about the user
• Zero-day attacks, also called zero-hour attacks: attacks that occur on the first day that a vulnerability becomes known
• Hacker attacks: attacks on user devices or network resources by a knowledgeable person
• Denial of service attacks: attacks designed to slow or crash applications and processes on a network device
• Data interception and theft: attacks that capture private information from an organization's network
• Identity theft: attacks that steal a user's login credentials in order to access private data
It is equally important to consider internal threats. Many studies show that the most common data breaches happen because of the network's internal users. This can be attributed to lost or stolen devices, accidental misuse by employees, and, in the business environment, even malicious employees. With evolving BYOD strategies, corporate data is much more vulnerable. Therefore, when developing a security policy, it is important to address both external and internal security threats.
Security Solutions
No single solution can protect the network from the variety of threats that exist. For this reason, security should be implemented in multiple layers, using more than one security solution. If one security component fails to identify and protect the network, others still stand.

A home network security implementation is usually rather basic. It is generally implemented on the connecting end devices as well as at the point of connection to the Internet, and it can even rely on contracted services from the ISP.

In contrast, the network security implementation for a corporate network usually consists of many components built into the network to monitor and filter traffic.

Ideally, all components work together, which minimizes maintenance and improves overall security.
Network security components for a home or small office network should include, at a minimum:
• Antivirus and antispyware: These are used to protect end devices from becoming infected with malicious software.
• Firewall filtering: This is used to block unauthorized access to the network. It may include a host-based firewall system implemented to prevent unauthorized access to the end device, or a basic filtering service on the home router to prevent unauthorized access from the outside world into the network.
Larger networks and corporate networks often have additional security requirements:
• Dedicated firewall systems: These provide more advanced firewall capabilities that can filter large amounts of traffic with more granularity.
• Access control lists (ACL): These further filter access and traffic forwarding.
• Intrusion prevention systems (IPS): These identify fast-spreading threats, such as zero-day or zero-hour attacks.
• Virtual private networks (VPN): These provide secure access for remote workers.
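As a rough illustration of how ACL filtering works (not any vendor's actual ACL syntax), the sketch below checks rules in order, lets the first match decide, and applies the implicit "deny everything else" at the end. The networks and ports are made up for the example.

```python
# A minimal sketch of access control list (ACL) packet filtering.
import ipaddress

# Each rule: (action, source network, destination port or None for "any").
# Rules are evaluated top to bottom; the first match wins.
ACL = [
    ("deny",   ipaddress.ip_network("10.0.99.0/24"), None),  # quarantined subnet
    ("permit", ipaddress.ip_network("10.0.0.0/8"), 443),     # internal HTTPS
    ("permit", ipaddress.ip_network("10.0.0.0/8"), 80),      # internal HTTP
]

def filter_packet(src_ip, dst_port):
    src = ipaddress.ip_address(src_ip)
    for action, network, port in ACL:
        if src in network and (port is None or port == dst_port):
            return action
    return "deny"  # implicit deny at the end of every ACL

print(filter_packet("10.0.1.5", 443))   # permit (internal HTTPS)
print(filter_packet("10.0.99.7", 443))  # deny (quarantined subnet)
print(filter_packet("8.8.8.8", 443))    # deny (implicit deny)
```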
Network security requirements must take into account the network environment as well as the various applications and computing requirements. Both home environments and businesses must be able to secure their data while still allowing for the quality of service expected of each technology. Additionally, the security solution implemented must be adaptable to the growing and changing trends of the network.

The study of network security threats and mitigation techniques starts with a clear understanding of the underlying switching and routing infrastructure used to organize network services.
5.15: Summary
Summary
Networks and the Internet have changed the way we communicate, learn, work, and even play.

Networks come in all sizes. They can range from simple networks consisting of two computers to networks connecting millions of devices.

The Internet is the largest network in existence. In fact, the term Internet means a "network of networks."

The Internet provides the services that enable us to connect and communicate with our families, friends, work, and interests.

The network infrastructure is the platform that supports the network. It provides the stable and reliable channel over which communication can occur. It is made up of network components, including end devices, intermediary devices, and network media.

Networks must be reliable. This means the network must be fault tolerant and scalable, provide quality of service, and ensure the security of the information and resources on the network. Network security is an integral part of computer networking, regardless of whether the network is limited to a home environment with a single connection to the Internet or as large as an enterprise with thousands of users. No single solution can protect the network from the variety of threats that exist. For this reason, security should be implemented in multiple layers, using more than one security solution.

The network infrastructure can vary greatly in terms of size, number of users, and the types of services supported. The network infrastructure must grow and adjust to how the network is used. The routing and switching platform is the foundation of any network infrastructure.
5.16: Study Questions
Study Questions
1. Identify the first four locations hooked up to the ARPANET
2. Describe the difference between the Internet and the World Wide Web
3. List three of your favorite Web 2.0 apps or websites
4. Identify the killer app for the Internet
5. List a few home internet connections
6. List a few business internet connections
7. Describe the difference between a LAN and a WAN
8. Describe the difference between an intranet and an extranet
9. Explain what a network topology is
10. Explain what powerline networking is
Exercises
1. Give an example of each of the following terms:
• Wireless LAN (WLAN)
• Wide-area network (WAN)
• Intranet
• Local-area network (LAN)
• Extranet
2. Give an example for each of the following:
• Fault tolerance
• Scalability
• Quality of service (QoS)
• Security
3. Create a Google account. Using Google Docs, create a new document, share it with others, and explore document sharing via your Google account.
4. Find the IP address of your computer. Explain the steps you took to find it.
5. Identify your or your school's Internet service provider.
6. Pretend that you are planning a trip to three foreign countries in the next month. Consult your wireless carrier to determine whether your mobile phone would work properly in those countries. Identify any costs and other alternatives for having your phone work properly.
Learning Objectives
Upon completion of this chapter, you will be able to:
• define the information security triad of confidentiality, integrity, and availability;
• identify different types of security threats and their associated costs for individuals, organizations, and nations;
• describe common security tools and technologies and how a security operations center can secure an organization's resources and assets; and
• apply a basic set of personal information security practices.
06: Information Systems Security
As computers and other digital devices have become essential to business and commerce, they have also increasingly become a target for attacks. For a company or an individual to use a computing device with confidence, they must first be assured that the device is not compromised in any way and that all communications will be secure. This chapter reviews the fundamental concepts of information systems security and discusses some of the measures that can be taken to mitigate security threats. The chapter begins with an overview focusing on how organizations can stay secure. Several different measures that a company can take to improve security will be discussed. Finally, you will review a list of security precautions that individuals can take to secure their personal computing environment.
6.02: The Information Security Triad - Confidentiality, Integrity, Availability
The Information Security Triad: Confidentiality, Integrity, Availability (CIA)
Confidentiality
Protecting information means restricting access to those who are allowed to see it; everyone else should be disallowed from learning anything about its contents. This is sometimes referred to as NTK, or Need to Know, and it is the essence of confidentiality. For example, federal law requires that universities restrict access to private student information. Access to grade records should be limited to those who have authorized access.
Integrity
Integrity is the assurance that the information being accessed has not been altered and truly represents what is intended. Just as people with integrity mean what they say and can be trusted to represent the truth consistently, information integrity means information truly represents its intended meaning. Information can lose its integrity through malicious intent, such as when someone who is not authorized makes a change to misrepresent something intentionally. An example of this would be when a hacker is hired to go into the university’s system and change a student’s grade.
Integrity can also be lost unintentionally, such as when a computer power surge corrupts a file or someone authorized to make a change accidentally deletes a file or enters incorrect information.
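One common way to detect a loss of integrity, whether malicious or accidental, is to compare a cryptographic hash of the data against a hash recorded when the data was known to be good: any alteration changes the hash. The sketch below uses Python's standard library; the grade record shown is illustrative.

```python
# A minimal sketch of an integrity check using a SHA-256 hash.
import hashlib

record = b"student_id=1234;grade=B+"
recorded_hash = hashlib.sha256(record).hexdigest()  # saved when data is known good

def has_integrity(data, expected_hash):
    """Return True if the data still matches the recorded hash."""
    return hashlib.sha256(data).hexdigest() == expected_hash

print(has_integrity(record, recorded_hash))                      # True
print(has_integrity(b"student_id=1234;grade=A", recorded_hash))  # False: altered
```

An altered grade, or even a single corrupted byte from a power surge, produces a completely different hash, so the change is detected either way.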
Availability
Information availability is the third part of the CIA triad. Availability means information can be accessed and modified by anyone authorized to do so in an appropriate time frame. Depending on the type of information, an appropriate timeframe can mean different things. For example, a stock trader needs information to be available immediately, while a salesperson may be happy to get sales numbers for the day in a report the next morning. Online retailers require their servers to be available twenty-four hours a day, seven days a week. Other companies may not suffer if their web servers are down for a few minutes once in a while.
In this chapter, you will learn about the who, what, and why of cyber-attacks. Different people commit cybercrime for different purposes. Security operations centers are designed to fight cybercrime. Jobs in a Security Operations Center (SOC) can be obtained by earning certifications, pursuing formal education, and using employment services to gain internship experience and job opportunities.
The Danger
In chapter 5, we discussed various security threats and possible solutions. Here are a few scenarios to illustrate how hackers trick users.
Hijacked People
Melanie stopped at her favorite coffee shop to grab her afternoon drink. She placed her order, paid the clerk, and waited while the baristas worked furiously to fill the backlog of orders. Melanie took out her phone, opened the wireless client, and connected to what she thought was the coffee shop's free wireless network.

Sitting in a corner of the store, however, a hacker had just set up a free "rogue" wireless hotspot posing as the coffee shop's wireless network. When Melanie logged on to her bank's website, the hacker hijacked her session and gained access to her bank accounts.
Hijacked Companies
Jeremy, an employee in the finance department of a large, publicly held corporation, receives an email from his CEO with an attached file in Adobe's PDF format. The PDF concerns the organization's third-quarter earnings. Jeremy does not recall his department preparing the PDF. His curiosity is piqued, and he opens the attachment.

The same scenario plays out across the company as thousands of other employees are successfully enticed to click the attachment. When the PDF opens, ransomware is installed on the employees' computers, including Jeremy's, and the process of collecting and encrypting corporate data begins. The attackers' goal is financial gain: they hold the company's data for ransom until they are paid. As in Jeremy's case, the consequences of opening an attachment in a spam email or from an unfamiliar address can be disastrous.
Targeted Nations
Some of today's malware is so sophisticated and expensive to create that security experts believe only a nation-state or group of nations could have created it. Such malware can be designed to attack vulnerable infrastructure, such as the water system or the electrical grid.

This was the purpose of the Stuxnet worm, which spread through infected USB drives. The documentary Zero Days tells the story of the malicious computer worm called Stuxnet. Stuxnet was designed to penetrate the Programmable Logic Controllers (PLCs) that vendors supplied to nuclear installations. The worm was transmitted to the PLCs from infected USB drives and ultimately damaged centrifuges at these nuclear installations.
Threat Actors
Threat actors are individuals or groups of individuals who conduct cyberattacks against another person or organization. Cyberattacks are intentional, malicious acts intended to harm another individual or organization, and the major motivations behind them are money, politics, competition, and hatred. Threat actors include amateurs, hacktivists, organized crime groups, state-sponsored groups, and terrorist organizations.

Amateurs, known as script kiddies, have little or no skill. They often use existing tools or instructions found on the Internet to launch attacks. Some are merely curious, while others try to demonstrate their skills by causing damage. Even though they use basic methods, the results can often be catastrophic.
Hacktivists
A hacktivist can act independently or as a member of an organized group. Hacktivists are hackers who protest against a variety of social and political ideas. They publicly protest against organizations or governments by posting articles and images, leaking classified information, and crippling web infrastructure with distributed denial of service (DDoS) attacks that flood it with illegitimate traffic. A denial of service (DoS) attack is one of the most powerful cyberattacks: the attacker bombards the target with traffic requests that overwhelm the target server in an attempt to crash it. A distributed denial of service (DDoS) attack is a more sophisticated version of DoS in which a set of distributed computer systems attacks the target.
Financial Gain
Financial gain motivates much of the hacking activity that constantly threatens our security. Cybercriminals are people who use technology for malicious purposes, such as stealing personal information for profit. These criminals want access to our bank accounts, personal data, and anything else they can use to generate cash flow.
Trade Secrets and Global Politics
In the past few years, there have been several reports of nation-states hacking other nations or otherwise interfering with their internal politics. Nation-states are also keen to use cyberspace for industrial espionage. Intellectual property theft can give a country a considerable advantage in international trade.
Defending against the consequences of state-sponsored cyberespionage and cyber warfare will continue to be a priority for cybersecurity professionals.
How Secure is the Internet of Things?
The Internet of Things (IoT) is rapidly expanding all around us. The IoT is a network of physical objects that collect and share data over the Internet. We are now beginning to enjoy its rewards, and new ways of using connected things are constantly being created. The IoT helps people connect everyday items to improve their quality of life. Smart security systems, smart kitchen appliances, smartwatches, and smart heating systems are a few examples of the IoT products available today.
For starters, many people now use connected wearable devices to monitor their fitness activities. How many devices do you currently own that link to the Internet or your home network?
How safe are those devices? For instance, who wrote the software that supports the embedded hardware (the firmware)? Did the programmer pay attention to security flaws? Is your home thermostat connected to the Internet? Your digital video recorder (DVR)? When security bugs are found, can the device's firmware be patched to fix the vulnerability? Many devices on the Internet will never receive updated firmware, and some older devices were not even designed to be updated with patches. These two conditions expose the users of such devices to threats and security risks.
Reference
World War 3 Zero Days (Official Movie Site) - Own It on DVD or Digital HD. Retrieved September 6, 2020, from www.zerodaysfilm.com
To ensure the confidentiality, integrity, and availability of information, organizations can choose from various tools. Each of these tools can be utilized as a part of an overall information-security policy, which will be discussed in the next section.
Authentication
The most common way to identify people is through physical appearance, but how do we identify someone sitting behind a computer screen or at the ATM? Tools for authentication are used to ensure that the person accessing the information is, indeed, who they present themselves to be.
Authentication can be accomplished by identifying someone through one or more of three factors: something they know, something they have, or something they are. For example, the most common form of authentication today is the user ID and password. In this case, the authentication is done by confirming something that the user knows (their ID and password). But this form of authentication is easy to compromise (see sidebar), and stronger forms of authentication are sometimes needed. Identifying someone only by something they have, such as a key or a card, can also be problematic. When that identifying token is lost or stolen, the identity can be easily stolen. The final factor, something you are, is much harder to compromise. This factor identifies a user through physical characteristics, such as an eye-scan or fingerprint. Identifying someone through their physical characteristics is called biometrics.
A more secure way to authenticate a user is to do multi-factor authentication. Combining two or more of the factors listed above makes it much more difficult for someone to misrepresent themselves. An example of this would be the use of an RSA device. The RSA device is something you have, and it generates a new access code every sixty seconds. To log in to an information resource using the RSA device, you combine something you know, a four-digit PIN, with the code generated by the device. The only way to properly authenticate is by both knowing the PIN and having the RSA device.
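As an illustration, the two factors can be combined in code. The sketch below uses a simplified, TOTP-style scheme (not RSA's actual proprietary algorithm); the secret, PIN, and six-digit format are hypothetical:

```python
import hashlib
import hmac
import struct
import time

def token_code(secret: bytes, t: float, interval: int = 60) -> str:
    """Derive a 6-digit code from a shared secret and the current
    sixty-second time window, roughly how hardware tokens and
    authenticator apps generate codes."""
    window = int(t // interval)
    msg = struct.pack(">Q", window)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Take the last 4 bytes as an integer and keep 6 decimal digits.
    num = struct.unpack(">I", digest[-4:])[0]
    return f"{num % 1_000_000:06d}"

def authenticate(pin_entered: str, code_entered: str,
                 stored_pin: str, secret: bytes) -> bool:
    """Two factors: something you know (the PIN) plus something you
    have (the device that computes the current code)."""
    knows = hmac.compare_digest(pin_entered, stored_pin)
    has = hmac.compare_digest(code_entered, token_code(secret, time.time()))
    return knows and has

secret = b"shared-device-secret"
current = token_code(secret, time.time())
print(authenticate("4321", current, "4321", secret))  # both factors match
print(authenticate("0000", current, "4321", secret))  # wrong PIN, denied
```

A real deployment would use a vetted implementation (an authenticator app or hardware token) rather than hand-rolled code.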
Access Control
Once a user has been authenticated, the next step is to ensure that they can access the appropriate information resources. This is done through the use of access control. Access control determines which users are authorized to read, modify, add, and/or delete information. Several different access control models exist. Here we will discuss two: the access control list (ACL) and role-based access control (RBAC).
For each information resource that an organization wishes to manage, a list of users who have the ability to take specific actions can be created. This is an access control list or ACL. For each user, specific capabilities are assigned, such as reading, writing, deleting, or adding. Only users with those capabilities are allowed to perform those functions. If a user is not on the list, they have no ability even to know that the information resource exists.
ACLs are simple to understand and maintain. However, they have several drawbacks. The primary drawback is that each information resource is managed separately. If a security administrator wanted to add or remove a user to a large set of information resources, it would not be easy. And as the number of users and resources increases, ACLs become harder to maintain. This has led to an improved method of access control, called role-based access control, or RBAC. With RBAC, instead of giving specific users access rights to an information resource, users are assigned to roles, and then those roles are assigned access. This allows the administrators to manage users and roles separately, simplifying administration and, by extension, improving security.
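The difference between the two models can be seen in a small sketch; the users, roles, and resources below are made up:

```python
# Access control list (ACL): permissions are attached to each resource,
# user by user, so every resource is managed separately.
acl = {
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
    "handbook.pdf": {"alice": {"read"}, "bob": {"read"}, "carol": {"read"}},
}

def acl_allowed(user, resource, action):
    return action in acl.get(resource, {}).get(user, set())

# Role-based access control (RBAC): users map to roles, and roles map to
# permissions, so users and roles can be administered independently.
user_roles = {"alice": {"hr_manager"}, "bob": {"employee"}, "carol": {"employee"}}
role_permissions = {
    "hr_manager": {("payroll.xlsx", "read"), ("payroll.xlsx", "write"),
                   ("handbook.pdf", "read")},
    "employee": {("handbook.pdf", "read")},
}

def rbac_allowed(user, resource, action):
    return any((resource, action) in role_permissions[role]
               for role in user_roles.get(user, set()))

print(acl_allowed("bob", "payroll.xlsx", "write"))    # False
print(rbac_allowed("alice", "payroll.xlsx", "write")) # True
```

Note that granting a new hire the "employee" role is a single change under RBAC, whereas under the ACL every relevant resource would need to be edited.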
Encryption
An organization often needs to transmit information over the Internet or transfer it on external media such as a USB drive. In these cases, even with proper authentication and access control, an unauthorized person can access the data. Encryption is a process of encoding data upon its transmission or storage so that only authorized individuals can read it. This encoding is accomplished by a computer program, which encodes the plain text that needs to be transmitted; the recipient then receives the ciphertext and decodes it (decryption). For this to work, the sender and receiver need to agree on the method of encoding so that both parties can communicate properly. Both parties share the encryption key, enabling them to encode and decode each other’s messages. This is called symmetric key encryption. This type of encryption is problematic because the key is available in two different places.
An alternative to symmetric key encryption is public-key encryption. In public-key encryption, two keys are used: a public key and a private key. To send an encrypted message, you obtain the public key, encode the message, and send it. The recipient then uses the private key to decode it. The public key can be given to anyone who wishes to send the recipient a message. Each user needs one private key and one public key to secure messages. The private key is necessary to decrypt something sent with the public key.
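To make the symmetric-key idea concrete, here is a deliberately toy cipher in which the same shared key both encodes and decodes a message. XOR with a repeating key is not secure; real systems use vetted algorithms such as AES for symmetric encryption and RSA for public-key encryption:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key. Because XOR is its own
    # inverse, applying the same key a second time restores the input.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

shared_key = b"secret-key"                      # both parties hold this key
plaintext = b"Wire $500 to account 42"
ciphertext = xor_cipher(plaintext, shared_key)  # sender encodes
recovered = xor_cipher(ciphertext, shared_key)  # recipient decodes
assert recovered == plaintext
```

The sketch also shows the weakness the text describes: the key exists in two places, and anyone who obtains it can decode every message.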
Sidebar: Password Security
The security of a password depends on its strength against brute-force guessing. Strong passwords reduce security breaches because they are harder for criminals to guess.
Password policies and technologies have evolved to combat security threats, from short to long passwords, from single-factor authentication to multi-factor authentications. Most companies now have specific requirements for users to create passwords and how they are authenticated.
Below are some of the more common policies that organizations should put in place.
• Require complex passwords that make it hard to guess. For example, a good password policy requires the use of a minimum of eight characters, and at least one upper-case letter, one special character, and one number.
• Change passwords regularly. Users should change their passwords every sixty to ninety days, ensuring that any passwords that might have been stolen or guessed cannot be used against the company for long.
• Train employees not to give away passwords. One of the primary methods used to steal passwords is to figure them out by asking the users or administrators. Pretexting occurs when an attacker calls a helpdesk or security administrator and pretends to be a particular authorized user having trouble logging in. Then, by providing some personal information about the authorized user, the attacker convinces the security person to reset the password and tell him what it is. Another way that employees may be tricked into giving away passwords is through email phishing.
• Train employees not to click on a link. Phishing occurs when a user receives an email that looks as if it is from a trusted source, such as their bank or their employer. In the email, the user is asked to click a link and log in to a website that mimics the genuine website and enter their ID and password, which the attacker then captures.
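The complexity requirement in the first bullet above can be expressed as a simple validation function; the rules encoded here are exactly the example policy stated above:

```python
import re

def meets_policy(password: str) -> bool:
    """Check the example policy: at least eight characters, with at
    least one upper-case letter, one digit, and one special character."""
    return (len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

print(meets_policy("Tr0ub4dor&3"))  # True: meets all four rules
print(meets_policy("password"))     # False: no upper case, digit, or symbol
```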
Backups
Another essential tool for information security is a comprehensive backup plan for the entire organization. Not only should the data on the corporate servers be backed up, but individual computers used throughout the organization should also be backed up. A good backup plan should consist of several components.
• A full understanding of the organizational information resources. What information does the organization actually have? Where is it stored? Some data may be stored on the organization’s servers, other data on users’ hard drives, some in the cloud, and some on third-party sites. An organization should make a full inventory of all of the information that needs to be backed up and determine the best way to back it up.
• Regular backups of all data. The frequency of backups should be based on how important the data is to the company, combined with the company's ability to replace any data that is lost. Critical data should be backed up daily, while less critical data could be backed up weekly.
• Offsite storage of backup data sets. If all of the backup data is being stored in the same facility as the original copies of the data, then a single event, such as an earthquake, fire, or tornado, would take out both the original data and the backup! It is essential that part of the backup plan is to store the data in an offsite location.
• Test of data restoration. Regularly, the backups should be put to the test by having some of the data restored. This will ensure that the process is working and will give the organization confidence in the backup plan.
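A minimal sketch of the "regular backup" and "test of data restoration" steps, using only Python's standard library; the folder names and file contents are made up:

```python
import filecmp
import shutil
import tempfile
import time
from pathlib import Path

def back_up(source: Path, backup_root: Path) -> Path:
    """Copy the source tree into a timestamped folder under backup_root."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / f"{source.name}-{stamp}"
    shutil.copytree(source, dest)
    return dest

def restore_test(source: Path, backup: Path) -> bool:
    """Verify the backup by comparing it against the original data."""
    cmp = filecmp.dircmp(source, backup)
    return not (cmp.left_only or cmp.right_only or cmp.diff_files)

# Demonstration in a throwaway directory.
work = Path(tempfile.mkdtemp())
data = work / "data"
data.mkdir()
(data / "customers.csv").write_text("id,name\n1,Ada\n")

backup = back_up(data, work / "backups")
print(restore_test(data, backup))  # the backup matches the original
```

In practice the backup destination would be offsite storage rather than a local folder, and the restore test would be scheduled regularly.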
Besides these considerations, organizations should also examine their operations to determine what effect downtime would have on their business. If their information technology were to be unavailable for any sustained period of time, how would it impact the business?
Additional concepts related to backup include the following:
• Uninterruptible Power Supply (UPS). A UPS is a device that provides battery backup to critical components of the system, allowing them to stay online longer and/or allowing the IT staff to shut them down using proper procedures to prevent the data loss that might occur from a power failure.
• Alternate or “hot” sites. Some organizations choose to have an alternate site where a replica of their critical data is always kept up to date. When the primary site goes down, the alternate site is immediately brought online so that little or no downtime is experienced.
As information has become a strategic asset, a whole industry has sprung up around the technologies necessary for implementing a proper backup strategy. A company can contract with a service provider to back up all of their data or purchase large amounts of online storage space and do it themselves. Most large businesses now use technologies such as storage area networks and archival systems.
Firewalls
Another method that an organization should use to increase security on its network is a firewall. A firewall can exist as hardware or software (or both). A hardware firewall is a device connected to the network that filters packets based on a set of rules. A software firewall runs on the operating system and intercepts packets as they arrive at a computer. A firewall protects all company servers and computers by stopping packets from outside the organization’s network that do not meet a strict set of criteria. A firewall may also be configured to restrict the flow of packets leaving the organization. This may be done to eliminate the possibility of employees watching YouTube videos or using Facebook from a company computer.
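Conceptually, a packet filter walks an ordered rule table and applies the first matching rule. The sketch below illustrates the idea with made-up addresses and rules; real firewalls operate on full packet headers:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src_prefix: str          # source address prefix; "" matches any source
    dst_port: Optional[int]  # destination port; None matches any port

RULES = [
    Rule("allow", "198.51.100.", 443),  # partner network may reach the web server
    Rule("deny", "", None),             # default rule: deny everything else
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    # The first matching rule wins, as in most firewall rule tables.
    for rule in RULES:
        if src_ip.startswith(rule.src_prefix) and rule.dst_port in (None, dst_port):
            return rule.action
    return "deny"  # implicit default deny

print(filter_packet("198.51.100.7", 443))  # allow
print(filter_packet("192.0.2.9", 22))      # deny
```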
Some organizations may choose to implement multiple firewalls as part of their network security configuration, creating one or more partially secured segments of their network. Such a segment is referred to as a DMZ, borrowing the term demilitarized zone from the military, and it is where an organization may place resources that need broader access but still need to be secured.
Intrusion Detection Systems
Another device that can be placed on the network for security purposes is an intrusion detection system, or IDS. An IDS does not add any additional security; instead, it provides the functionality to identify whether the network is being attacked. An IDS can be configured to watch for specific types of activities and then alert security personnel if that activity occurs. An IDS can also log various types of network traffic for later analysis. An IDS is an essential part of any good security setup.
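A signature-based IDS can be sketched as a pattern matcher over network or server logs; it only raises alerts and never blocks traffic. The signatures and log lines below are invented examples:

```python
import re

# Each signature pairs a pattern with a human-readable description.
SIGNATURES = [
    (re.compile(r"Failed password .* from [\d.]+"), "possible brute-force login"),
    (re.compile(r"GET .*(\.\./){2,}"), "path-traversal attempt"),
]

def scan(log_lines):
    """Match each log line against known attack signatures and return
    alerts for security personnel. Detection only: nothing is blocked."""
    alerts = []
    for line in log_lines:
        for pattern, description in SIGNATURES:
            if pattern.search(line):
                alerts.append((description, line))
    return alerts

log = [
    "Failed password for root from 203.0.113.5",
    "GET /static/css/site.css HTTP/1.1",
    "GET /../../../../etc/passwd HTTP/1.1",
]
for description, line in scan(log):
    print(f"ALERT: {description}: {line}")
```

In practice these alerts would feed the logging and analysis workflow the text describes, with humans (or further tooling) deciding how to respond.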
Sidebar: Virtual Private Networks
Using firewalls and other security technologies, organizations can effectively protect many of their information resources by making them invisible to the outside world. But what if an employee working from home requires access to some of these resources? What if a consultant is hired to work on the internal corporate network from a remote location? In these cases, a virtual private network (VPN) is called for.
A VPN allows a user outside of a corporate network to detour around the firewall and access the internal network from the outside. A combination of software and security measures lets an organization allow limited access to its networks while at the same time ensuring overall security.
Physical Security
An organization can implement the best authentication scheme globally, develop the best access control, and install firewalls and intrusion prevention. Still, its security cannot be complete without the implementation of physical security. Physical security is the protection of the actual hardware and networking components that store and transmit information resources. To implement physical security, an organization must identify all of the vulnerable resources and ensure that these resources cannot be physically tampered with or stolen. These measures include the following.
• Locked doors: It may seem obvious, but all the security in the world is useless if an intruder can walk in and physically remove a computing device. High-value information assets should be secured in a location with limited access.
• Physical intrusion detection: High-value information assets should be monitored through the use of security cameras and other means to detect unauthorized access to the physical locations where they exist.
• Secured equipment: Devices should be locked down to prevent them from being stolen. One employee’s hard drive could contain all of your customer information, so it must be secured.
• Environmental monitoring: An organization’s servers and other high-value equipment should always be kept in a room that is monitored for temperature, humidity, and airflow. The risk of server failure rises when these factors go out of a specified range.
• Employee training: One of the most common ways thieves steal corporate information is to steal employee laptops while employees are traveling. Employees should be trained to secure their equipment whenever they are away from the office.
Security Policies
Besides the technical controls listed above, organizations also need to implement security policies as a form of administrative control. In fact, these policies should really be a starting point in developing an overall security plan. A good information-security policy lays out the guidelines for employee use of the information resources of the company. It provides the company recourse in the case that an employee violates a policy.
A security policy should be guided by the information security triad discussed above. It should lay out guidelines and processes for employees to follow when accessing resources so that confidentiality, integrity, and availability are maintained.
Policies require compliance and need to be enforceable; failure to comply with a policy will result in disciplinary action. SANS Institute’s Information Security Policy Page (2020) lists many templates for different types of security policies, including, for example, a policy for managing remote access.
A security policy should also address any governmental or industry regulations that apply to the organization. For example, if the organization is a university, it must be aware of the Family Educational Rights and Privacy Act (FERPA), which restricts who has access to student information. Health care organizations are obligated to follow several regulations, such as the Health Insurance Portability and Accountability Act (HIPAA).
Sidebar: Mobile Security
As mobile devices such as smartphones and tablets proliferate, organizations must be ready to address the unique security concerns that these devices bring. One of the first questions an organization must consider is whether to allow mobile devices in the workplace.
Many employees already have these devices, so the question becomes: Should we allow employees to bring their own devices and use them as part of their employment activities? Or should we provide the devices to our employees? Creating a BYOD (“Bring Your Own Device”) policy allows employees to integrate themselves more fully into their job and bring higher employee satisfaction and productivity. It may be virtually impossible to prevent employees from having their own smartphones or iPads in the workplace in many cases. If the organization provides the devices to its employees, it gains more control over the use of the devices, but it also exposes itself to the possibility of an administrative (and costly) mess.
Mobile devices can pose many unique security challenges to an organization. Probably one of the biggest concerns is the theft of intellectual property. It would be a straightforward process for an employee with malicious intent to connect a mobile device either to a computer via the USB port or wirelessly to the corporate network and download confidential data. It would also be easy to secretly take a high-quality picture using a built-in camera.
When an employee has permission to access and save company data on their device, a different security threat emerges: that device now becomes a target for thieves. Theft of mobile devices (in this case, including laptops) is one of the primary methods that data thieves use.
So, what can be done to secure mobile devices? It starts with a good policy regarding their use. Specific guidelines should cover password policy, remote access, camera usage, and voice recording, among others.
Besides policies, there are several different tools that an organization can use to mitigate some of these risks. For example, if a device is stolen or lost, geolocation software can help the organization find it. In some cases, it may even make sense to install remote data-removal software, which will remove data from a device if it becomes a security risk.
Usability
When looking to secure information resources, organizations must balance the need for security with users’ need to access and use these resources effectively. If a system’s security measures make it difficult to use, then users will find ways around the security, which may make the system more vulnerable than it would have been without the security measures! Take, for example, password policies. If the organization requires an extremely long password with several special characters, an employee may resort to writing it down and putting it in a drawer since it will be impossible to memorize.
Reference:
Security Policy Templates. Retrieved September 6, 2020, from SANS Institute’s Information Security Policy Page, www.sans.org/information-security-policy/
Chapter 5 discussed the different security threats and solutions. However, users need to safeguard their personal information as well.
Personally identifiable information (PII)
According to the FBI's Internet Crime Complaint Center (IC3), $13.3 billion in total losses was reported from 2016 to 2020 (IC3, 2020). Examples of crime types include phishing, personal data breaches, identity theft, and credit card fraud. Victims range in age from 20 to 60 years old. For a detailed report, see the 2020 Internet Crime Report. The true number may be even higher, since many victims did not report for a variety of reasons.
Personally identifiable information (PII) is any information that can be used to positively identify a person. Examples of PII include a person's name, Social Security number, date of birth, and credit card or bank account numbers.
One of the cybercriminals' most lucrative targets is acquiring PII lists that can then be sold on the dark web. The dark web can only be accessed through special software, and cybercriminals use it to shield their activities. Stolen PII can be used to build fraudulent accounts, such as short-term loans and credit cards.
Protected Health Information (PHI) is a subset of PII. The medical community produces and manages electronic medical records (EMRs) that contain PHI. In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) governs PHI handling. In the European Union, a similar role is played by its data protection law, the General Data Protection Regulation (GDPR).
Lost Competitive Advantage
In cyberspace, companies are constantly concerned about corporate hacking. Another major concern is the loss of trust that occurs when a firm cannot protect its customers' personal data. The loss of competitive advantage may result from this loss of confidence rather than from stealing trade secrets by another firm or country.
Reference:
2020 IC3 Report. Retrieved April 6, 2021, from https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf
6.05: Fighters in the War Against Cybercrime - The Modern Security Operations Center
Besides the tools and practices discussed earlier to protect ourselves, companies also have increased their investment to fight against cybercrime. One such investment is a dedicated center called Security Operations Center to safeguard companies from internal and external threats.
Elements of a SOC
Defending against today's threats requires a formalized, structured, and disciplined approach, carried out by Security Operations Center (SOC) professionals who work closely with other groups such as IT or networking staff. SOCs offer a wide variety of services tailored to customer needs, from monitoring and compliance to comprehensive threat detection and hosted protection. A SOC may be wholly in-house, owned and run by a company, or elements of a SOC may be contracted out to security providers such as Cisco Systems Inc.'s Managed Security Services. The key elements of a SOC are people, processes, and technology.
Artificial Intelligence (AI) and machine learning are also powerful weapons against threats; they are used in multi-factor authentication, malware scanning, and the fight against spam and phishing.
Process in the SOC
SOC professionals monitor all suspicious activities and follow a set of rules to verify whether an event is a true security incident before escalating it to the next severity level, where the appropriate security experts can take action.
The SOC has four principal functions:
• Use network data to verify security warnings
• Evaluate incidents that have been verified and determine how to proceed
• Deploy specialists to evaluate threats at the highest severity level
• Provide timely communication from SOC management to the company or clients
Technologies deployed in the SOC include:
• Event collection, correlation, and analysis
• Security monitoring
• Security control
• Log management
• Vulnerability assessment
• Vulnerability tracking
• Threat intelligence
Enterprise and Managed Security
Medium and large networks will benefit from the implementation of an enterprise-level SOC. The SOC could be a complete in-house solution, yet many larger organizations will outsource at least part of their SOC operations to a security solution provider such as Cisco Systems Inc.
Most business networks must be kept up and running at all times. Security staff recognize that network availability must be maintained for the company to achieve its goals.
Every company or industry has only a small tolerance for network downtime. That tolerance is usually based on comparing the cost of downtime with the cost of insuring against it.
For example, in a small retail business with only one location, a router as a single point of failure could be tolerable. However, if a large portion of that company's sales comes from online shoppers, the owner may want a degree of redundancy to ensure that a connection is always available.
Desired uptime is often expressed as the number of down minutes in a year. For example, an uptime of "five nines" means the network is up 99.999 percent of the time, or down no more than 5.256 minutes a year. "Four nines" would allow 52.56 minutes of downtime per year.
Availability %                Downtime per year
99.8%                         17.52 hours
99.9% (“three nines”)         8.76 hours
99.99% (“four nines”)         52.56 minutes
99.999% (“five nines”)        5.256 minutes
99.9999% (“six nines”)        31.5 seconds
99.99999% (“seven nines”)     3.15 seconds
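The downtime figures above follow directly from the availability percentage; a quick sketch:

```python
def downtime_per_year(availability_pct: float) -> str:
    """Convert an availability percentage into annual downtime,
    choosing hours, minutes, or seconds as appropriate."""
    minutes = (100.0 - availability_pct) / 100.0 * 365 * 24 * 60
    if minutes >= 60:
        return f"{minutes / 60:.2f} hours"
    if minutes >= 1:
        return f"{minutes:.2f} minutes"
    return f"{minutes * 60:.1f} seconds"

for pct in (99.8, 99.9, 99.99, 99.999):
    print(pct, downtime_per_year(pct))
# 99.8 17.52 hours
# 99.9 8.76 hours
# 99.99 52.56 minutes
# 99.999 5.26 minutes
```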
But security cannot be so powerful that it interferes with employee needs or business functions. This is often a tradeoff between good security and allowing companies to work efficiently.
6.07: Summary
People, businesses, and even nations can all fall victim to cyberattacks. There are different types of attackers, including amateurs attacking for fun and prestige, hacktivists hacking for a political cause, and professional hackers attacking for profit. In addition, nation-states attack other nations to gain an economic advantage through intellectual property theft, or to harm or destroy another country's assets. Vulnerable networks include PC and server business networks as well as the thousands of devices on the Internet of Things.
Fighting cyberattacks requires people, processes, and technology that follow best practices and good security policies. There are tools that users can employ to protect personally identifiable information, and policies that companies can require of their customers and employees to protect company resources. Companies can also invest in dedicated Security Operations Centers (SOCs) for cybercrime prevention, identification, and response.
6.08: Study Questions
1. Briefly define the three components of the information security triad
2. Explain what authentication means
3. Give two examples of a complex password
4. Give three examples of threat actors
5. Name two motivations of hacktivists to commit cybercrime
6. List five ways to defend against cyber attacks
7. List three examples of PII
8. Briefly explain the role of SOC
9. Explain the purpose of security policies
10. Explain how information availability relates to a successful organization
Exercises
1. Research and analyze cybersecurity incidents to come up with scenarios of how organizations can prevent an attack.
2. Discuss some IoT (Internet of Things) application vulnerabilities with non-techie and techie technology users, then compare and contrast their different perspectives and reactions to IoT vulnerabilities.
3. Describe one multi-factor authentication method that you have experienced and discuss the pros and cons of using multi-factor authentication.
4. Identify the password policy at your place of employment or study. Assess if it is a good policy or not. Explain.
5. Take inventory of possible security threats that your home devices may be exposed to. List them and discuss their potential effects and what you plan to do about them.
6. Recall when you last backed up your data. Discuss the method you used. Define a backup policy for your home devices.
7. Research the career of a SOC professional. Report what certificate training is required to become a SOC professional, what the demand is for this career, and the salary range.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Describe Porter’s competitive forces model and how information technology impacts competitive advantage.
• Describe Porter’s value chain model and its relationship to IT.
• Describe information systems that can provide businesses with a competitive advantage.
• Describe the collaborative systems that workers can use to contribute to their organization.
• Distinguish between a structured and an unstructured decision and its connection to IT.
• Discuss the challenges associated with a sustainable competitive advantage.
07: Leveraging Information Technology (IT) for Competitive Advantage
For over fifty years, since the invention of the microprocessor, computing technology has been a part of business. From UPC scanners and computer registers at your local neighborhood store to huge inventory databases used by companies like Amazon, information technology has become the backbone of commerce. Organizations have spent trillions of dollars on information technologies. But has all this investment in IT made a difference? Do computers increase productivity? Are companies that invest in IT more competitive? This chapter will look at the value IT can bring to an organization and try to answer these questions. We will begin by highlighting two important works from the past two decades.
7.02: The Productivity Paradox
In 1991, Erik Brynjolfsson wrote an article, published in the Communications of the ACM, entitled “The Productivity Paradox of Information Technology: Review and Assessment.” By reviewing studies of the impact of IT investment on productivity, Brynjolfsson concluded that adding information technology to business had not improved productivity at all: the “productivity paradox.” He attributed this paradox to the lack of quantitative measures, which made it impossible to unequivocally document any contribution despite so much effort.
In 1998, Brynjolfsson and Lorin Hitt published a follow-up paper entitled “Beyond the Productivity Paradox.” In this paper, the authors utilized new data that had been collected and found that IT did, indeed, provide a positive result for businesses. Further, they found that sometimes the true advantages of using technology were not directly related to higher productivity but to “softer” measures, such as the impact on organizational structure. They also found that the impact of information technology can vary widely between companies.
IT Doesn’t Matter
Just as a consensus was forming about IT's value, the Internet stock market bubble burst; two years later, in 2003, Harvard professor Nicholas Carr wrote his article “IT Doesn’t Matter” in the Harvard Business Review. In this article, Carr asserts that as information technology has become more ubiquitous, it has also become less of a differentiator. In other words: because information technology is so readily available and the software used so easily copied, businesses cannot hope to implement these tools to provide any competitive advantage. IT is essentially a commodity, and it should be managed like one: low cost, low risk. IT management should see itself as a utility within the company and work to keep costs down. For IT, providing the best service with minimal downtime is the goal. As you can imagine, this article caused quite an uproar, especially from IT companies. Many articles were written in defense of IT; many others in support of Carr.
The best thing to come out of the article and the subsequent book was that it opened up discussion on IT's place in a business strategy and exactly what role IT could play in competitive advantage. It is that question that we want to address in the rest of this chapter.
What do Walmart, Apple, and McDonald’s have in common?
All three businesses have a competitive advantage. What does it mean when a company has a competitive advantage? What are the factors that play into it? According to Michael Porter in his book “Competitive Advantage: Creating and Sustaining Superior Performance,” a company is said to have a competitive advantage over its rivals when it can sustain profits that exceed the industry average. Porter identified two basic types of competitive advantage:
• Cost advantage: When the firm can deliver the same benefits as competitors but at a lower cost. McDonald's and Walmart both utilize economies of scale to maintain their cost advantage.
• Differentiation advantage: When a firm can deliver benefits that exceed those of competing products. Apple’s innovative products that complement each other and share the same operating system offer a unique product that gives consumers a sense of exclusivity, and their trade-in programs build consumer loyalty.
The question, then, is: How can information technology be a factor in achieving competitive advantage? We will explore this question by using:
• Two analysis tools from Porter’s book “Competitive Advantage: Creating and Sustaining Superior Performance”:
• The value chain
• The Five Forces model.
• Porter’s analysis in his 2001 article “Strategy and the Internet.”
The Value Chain
In his book, Porter analyzes the basis of competitive advantage and describes how a company can achieve it using the value chain as a framework. A value chain is a step-by-step business model for transforming a product or service from an idea (i.e., materials) into reality (i.e., products or services). Value chains help increase a business’s efficiency so the business can deliver the most value (i.e., profit) for the least possible cost. Each step (or activity) in the value chain contributes to a product or service's overall value. While the value chain may not be a perfect model for every type of company, it does provide a way to analyze just how a company is producing value.
The value chain is made up of two sets of activities: primary activities and support activities. We will briefly examine these activities and discuss how information technology can create value by contributing to cost advantage or differentiation advantage, or both.
The primary activities are the functions that directly impact the creation of a product or service, its sales, and after-sales service. The goal of the primary activities is to add more value than they cost. The primary activities are:
• Inbound logistics: purchasing, receiving, and storing raw materials. Information technology can make these processes more efficient, such as with supply-chain management systems, which allow suppliers to manage their own inventory. Starbucks has company-appointed coffee buyers that select the finest quality coffee beans from producers in Latin America, Africa, and Asia.
• Operations: Any part of a business involved in converting the raw materials into the final products or services is part of operations. From manufacturing to business process management (covered in chapter 8), information technology can provide more efficient processes and increase innovation through information flows.
• Outbound logistics: These functions include order processing and warehousing required to get the product out to the customer. As with inbound logistics, IT can improve processes, such as allowing for real-time inventory checks. IT can also be a delivery mechanism itself.
• Marketing/Sales: The functions that will entice buyers to purchase the products (advertising, salesforce) are part of sales and marketing. Information technology is used in almost all aspects of this activity. From online advertising to online surveys, IT can innovate product design and reach customers like never before. The company website can be a sales channel itself.
• Service: The functions a business performs after the product has been purchased, such as installation, customer support, complaint resolution, and repair to maintain and enhance its value, are part of the service activity. Service can be enhanced via technology as well, including support services through websites and knowledge bases.
The support activities are the functions in an organization that support and cut across all primary activities. The support activities are:
• Firm infrastructure: Organizational functions such as finance, accounting, ERP Systems (covered in chapter 9), and quality control, all of which depend on information technology.
• Technology development: Technological advances and innovations support the primary activities. These advances are then integrated across the company to add value in different departments. Information technology would fall specifically under this activity.
• Procurement: Acquiring the raw materials used in the creation of products and services is called procurement. Business-to-business e-commerce can be used to improve the acquisition of materials.
A value chain is a powerful tool for analyzing and breaking down a company into the relevant activities that result in higher prices or lower costs. By understanding how these activities are connected to the company’s strategic objectives, companies can identify their core competencies and gain insight into how information technology can be used to achieve a competitive advantage.
Look at this example of a Starbucks value chain model analysis, which includes a short video by Prableen Bajpai.
Porter’s Five Forces
Porter recognized that other factors could impact a company’s profit in addition to competition from its rivals. He developed the “five forces” model as a framework for analyzing the competition in an industry and its strengths and weaknesses. The model consists of five elements, each of which plays a role in determining an industry's average profitability.
In 2001, Porter wrote an article entitled “Strategy and the Internet,” in which he takes this model and looks at how the Internet (and IT) impacts an industry's profitability. Although the model's details differ from one industry to another, the general structure of the five forces is universal. Let’s look at how the internet plays a role in Porter’s five forces model:
• Threat of New Entrants: The easier it is to enter an industry, the tougher it will be to profit in that industry. The Internet has an overall effect of making it easier to enter industries. Traditional barriers such as the need for a physical store and sales force to sell goods and services are drastically reduced. Dot-coms multiplied for that very reason: All a competitor has to do is set up a website. The geographical reach of the internet enables distant competitors to compete more directly with a local firm. For example, a manufacturer in Northern California may now have to compete against a manufacturer in the Southern United States, where wages are lower.
• Threat of Substitute Products: How easily can a product or service be replaced with something else? The more types of products or services that can meet a particular need, the less profitable an industry will be. For example, the advent of the mobile phone replaced the need for pagers. The Internet has made people more aware of substitute products, driving down profits in the industries being substituted. Any industry in which digitized information can replace material goods such as books, music, or software is at particular risk (think, for example, of Amazon’s Kindle and Spotify).
• Bargaining Power of Suppliers: Companies can more easily find alternative suppliers and compare prices. When a sole supplier exists, the company is at the mercy of that supplier. For example, if only one company makes the controller chip for a car engine, that company can control the price, at least to some extent. The Internet has given companies access to more suppliers, driving down prices. On the other hand, suppliers now also have the ability to sell directly to customers. As companies use IT to integrate their supply chains, participating suppliers will prosper by locking in customers and increasing switching costs.
• Bargaining Power of Customers: A company that is the sole provider of a unique product has the ability to control pricing. But the Internet has given customers access to information about products and more options (small and big business) to choose from.
• Rivalry Among Existing Competitors: The more competitors in an industry, the bigger a factor price becomes. The visibility of internet applications on the Web makes proprietary systems more difficult to keep secret. It is straightforward to copy technology, so innovations do not last long. For example, the Sony Reader was released in 2006, followed by the Amazon Kindle in 2007, and just two years later by the Barnes and Noble Nook, which was the best-selling unit in the US before the iPad (with its built-in reading app iBooks) hit the market in 2010. (Wikipedia: E-Reader, 2020)
According to this model, the company's average profitability depends on the five forces' collective strength. If the five forces are intense, for example, in the airline industry, almost no company makes a huge profit. If the forces are mild, for example, the soft drink industry, there is room for higher profits. The Internet provides better opportunities for companies to establish strategic advantage by boosting efficiency in various ways, as we will see in the next section. However, the internet also tends to dampen suppliers' bargaining power and increase the threat of substitute products by making it easier for buyers and sellers to do business. Thus, the Internet (and, by extension, information technology in general) has the overall impact of increasing competition and lowering profitability. This is the great paradox of the internet.
While the Internet has certainly produced many big winners, the overall winners have been the consumers, who have been given an ever-increasing market of products and services and lower prices.
Information systems support or shape a business unit’s organizational strategy to provide a competitive advantage. Any information system - Business Process Management (BPM), Electronic Data Interchange (EDI), Management Information System (MIS), Decision Support System (DSS), Transaction Processing System (TPS) - that helps a business deliver a product or service at a lower cost, that is differentiated, that focuses on a specific market segment, or that is innovative is a strategic information system. Companies typically have several different types of information systems; each type serves a different level of decision-making: operational (workers), tactical (middle and senior managers), and strategic (executives).
Let’s look at a few examples.
Electronic Data Interchange (EDI)
Typically, a paper-based exchange of purchase orders and invoices takes a week to process. Using EDI, the process can be completed within hours. By integrating suppliers and distributors via EDI, a company can improve speed, efficiency, and security, thus vastly reducing the resources required to manage relevant business information. Cleo, TrueCommerce EDI, Jitterbit, and GoAnywhere MFT are some of the many EDI software packages that can be used in conjunction with a data integration platform.
EDI can play a role in supply chain management and provides the standard format of information exchange used by many of the systems discussed below.
Transaction Processing Systems (TPS)
Transaction processing systems (TPS) are computerized information systems developed to process large amounts of data for routine business transactions such as payroll, order processing, airline reservations, employee records, accounts payable, and receivable. TPS eliminates the tedium of necessary repetitive transactions that take time and labor and makes them efficient and accurate, although people must still input data to computerized systems. Transaction processing systems are boundary-spanning systems that allow the organization to interact with external environments. TPS examples include ATMs, credit card authorizations, online bill payments, and self-checkout stations at retail stores. IT enables all of this to happen in real-time.
Business Process Management (BPM)
Business process management is the automated integration of process information targeted to streamline operations, reduce costs, and improve customer service (Ken Vollmer, BPMInstitute.org). Unlike EDI, BPM is used both internally and externally: between applications within a business and between companies. Large financial institutions like Bank of America use BPM to link, integrate, and automate different applications - credit cards, bank accounts, loans - reducing delivery times for financial transactions from weeks to minutes.
Management Information Systems (MIS)
A management information system (MIS) combines users, hardware, and software to support decision-making. An MIS collects and stores key data and produces the information that managers need for analysis, control, and decision-making. For example, input from the sales of different products can be used to analyze which products are performing well and which are not. Managers use this analysis to make semi-structured decisions, such as changes to future inventory orders and manufacturing schedules.
MIS, IS, and IT sound very similar and are often confused. MIS is a type of IS that is more organization-based and focused on leveraging IT to increase business value (i.e., profit). IT or IT management is the technical management of an IT department, which can include MIS.
Decision Support Systems (DSS)
A decision support system (DSS) is a computerized information system that supports business or organizational decision-making activities by sifting through and analyzing huge amounts of data and producing comprehensive information reports. As technology continues to advance, DSS is no longer limited to huge mainframe computers - DSS applications can be loaded on most desktops, laptops, and even mobile devices. For example, GPS route planning determines the fastest and best route between two points by analyzing and comparing multiple options and factoring in traffic conditions.
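To make the route-planning idea concrete, here is a minimal sketch (not an actual GPS product; the routes, speeds, and traffic factors are invented for illustration) of how a DSS might compare options and recommend the fastest:

```python
# Toy route-planning DSS: estimate travel time for each candidate route,
# factoring in current traffic, and recommend the fastest one.

routes = [
    {"name": "Highway 101", "miles": 30, "avg_mph": 60, "traffic_factor": 1.5},
    {"name": "Coastal Road", "miles": 25, "avg_mph": 45, "traffic_factor": 1.0},
    {"name": "Downtown", "miles": 18, "avg_mph": 30, "traffic_factor": 1.2},
]

def travel_minutes(route):
    base = route["miles"] / route["avg_mph"] * 60  # minutes with no traffic
    return base * route["traffic_factor"]          # adjusted for congestion

best = min(routes, key=travel_minutes)
for r in routes:
    print(f'{r["name"]}: {travel_minutes(r):.0f} min')
print("Recommended route:", best["name"])
```

Notice that the system does not make the decision for the driver; it ranks the alternatives so a human can choose.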
Marketing executives at a furniture company (like Living Spaces) could run DSS models that use sales data and demographic assumptions to develop forecasts of the types of furniture that would appeal to the fastest-growing population groups.
DSSs can exist at different levels of decision-making within the organization, from executives to senior managers, and help people make decisions about a wide variety of problems, ranging from highly structured decisions to unstructured decisions.
• A structured decision is usually one that is repetitive and routine and is based directly on the inputs. For example, a company decides whether or not to withdraw funds from an international account depending on the current exchange rate. EDI and TPS typically handle structured decisions. Structured decisions are good candidates for automation, but we don’t necessarily build decision-support systems for them.
• An unstructured decision has a lot of unknowns and relies on knowledge and/or expertise. An information system can support these decisions by providing decision-makers with information-gathering tools and collaborative capabilities. An example of an unstructured decision might be what type of new product should be created and what market should be targeted.
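The exchange-rate example of a structured decision above can be sketched as a single rule; the function name and the 1.10 threshold are hypothetical:

```python
# A structured decision follows directly from its inputs: given the current
# exchange rate, the outcome is determined by a fixed, repeatable rule.

def withdraw_international_funds(exchange_rate, threshold=1.10):
    """Withdraw from the international account only at a favorable rate."""
    return exchange_rate >= threshold

print(withdraw_international_funds(1.15))  # True
print(withdraw_international_funds(1.05))  # False
```

Because the rule is fixed and repeatable, this kind of decision is easy to automate inside an EDI or TPS workflow.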
Decision support systems work best when the decision-maker(s) are making semi-structured decisions. A semi-structured decision is one in which most of the factors needed for making the decision are known, but human experience and other outside factors may still play a role. A good example of a semi-structured decision would be diagnosing a medical condition. Farmers using crop-planning tools to determine the best time to plant, fertilize, and reap is another example.
DSSs can be as simple as a spreadsheet that allows for the input of specific variables and then calculates required outputs such as inventory management. Another DSS might assist in determining which products a company should develop. Input into the system could include market research on the product, competitor information, and product development costs. The system would then analyze these inputs based on the specific rules and concepts programmed into them. Finally, the system would report its results, with recommendations and/or key indicators to decide.
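As a sketch of that spreadsheet idea (all figures are hypothetical), an inventory DSS might take daily demand, supplier lead time, and safety stock as input variables and compute a reorder point:

```python
# Spreadsheet-style DSS for inventory management: given a few input
# variables, compute the reorder point and decide whether to reorder.

def reorder_point(daily_demand, lead_time_days, safety_stock):
    """Stock level at which a new order should be placed."""
    return daily_demand * lead_time_days + safety_stock

def should_reorder(current_stock, daily_demand, lead_time_days, safety_stock):
    return current_stock <= reorder_point(daily_demand, lead_time_days, safety_stock)

print(reorder_point(40, 5, 60))        # 260 units
print(should_reorder(250, 40, 5, 60))  # True: 250 <= 260
print(should_reorder(400, 40, 5, 60))  # False
```

A human still chooses the demand estimate and safety stock, which is what keeps the decision semi-structured rather than fully automated.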
A DSS can be looked at as a tool for competitive advantage in that it can give an organization a mechanism to make wise decisions about products and innovations.
Collaborative Systems
As organizations began to implement networking technologies, information systems emerged that allowed employees to collaborate differently. Tools such as document sharing and video conferencing allowed users to brainstorm ideas together and collaborate without the necessity of physical, face-to-face meetings.
Broadly speaking, any software that allows multiple users to interact on a document or topic could be considered collaborative. Electronic mail, a shared Word document, social networks, and discussion boards would fall into this broad definition. However, many software tools have been created that are designed specifically for collaborative purposes. These tools offer a broad spectrum of collaborative functions. They can exist as stand-alone systems or be integrated with any of the information systems above. Here is just a short list of some collaborative tools available for businesses today:
Cloud Services
Cloud services refer to a wide variety of services delivered on demand to companies and customers over the internet, without the need for internal infrastructure or hardware.
IBM Lotus Notes
• One of the first true “groupware” collaboration tools.
• Provides a full suite of collaboration software, including integrated e-mail
• Obsolete with the advent of newer, easier-to-use technologies like Google Drive and Microsoft SharePoint.
GitHub
• Code hosting platform for collaboration amongst programmers/developers of computer software
• Used primarily for version control – to track changes in source code during software development.
Microsoft SharePoint
• Web-based document management and collaboration tool
• Integrates with Office 365, which educators, students, office workers are familiar with.
• SharePoint is covered in more detail in Chapter 5
G Suite
• Formerly known as Google Apps for Work
• Software as a Service (SaaS) product that groups all cloud-based productivity and collaboration tools developed by Google.
• The innovative interface allows real-time document editing and sharing
• Allows collaboration with other products, like Office 365.
• Another SaaS that you may be familiar with is
Online Video Conferencing Services
Online video conferencing services allow two or more people in different geographical locations to meet and collaborate.
Zoom
• Most popular online video conferencing and meeting platform due to its user-friendly interface.
• Great for small and large businesses as it can support up to 100 participants in online meetings
• Wide variety of options such as screen share, whiteboard, live chat and messaging, recording, and breakout rooms.
• Collaboration and interaction from a variety of devices (computers, tablets, smartphones, etc.)
• Google Chrome and Linux OS support
Cisco Webex
• Business communications platform that combines video and audio
• Allows participants to interact with each other’s computer desktops
• Top-of-the-line security features, making it excellent for businesses with legitimate security concerns
Skype for Business
• Microsoft’s online meeting platform
• Can support up to 250 participants for online meetings
• Combines instant messaging, video conferencing, calling, and document collaboration in a single integrated app.
• The version of Skype that you use at home is good for small businesses and can support up to 50 participants.
With the explosion of the worldwide web, the distinction between these different systems has become fuzzy. Information systems are available to automate practically any business aspect - from managing inventory to sales and customer service. “Information Technology (IT)” is now the category used to designate any software-hardware-communications structure that today works as a virtual nervous system of society at all levels.
In 2008, Brynjolfsson and McAfee published an article on IT's role in competitive advantage, titled “Investing in the IT That Makes a Competitive Difference.” Their study confirmed that IT could play a role in competitive advantage if deployed wisely. In their study, they draw three conclusions:
• First, the data show that IT has sharpened differences among companies instead of reducing them. This reflects that while companies have always varied widely in their ability to select, adapt, and exploit innovations, technology has accelerated and amplified these differences.
• Second, good management matters: Highly qualified vendors, consultants, and IT departments might be necessary for the successful implementation of enterprise technologies themselves, but the real value comes from the process innovations that can now be delivered on those platforms. Fostering the right innovations and propagating them widely are executive responsibilities that can’t be delegated.
• Finally, the competitive shakeup brought on by IT is not nearly complete, even in the IT-intensive US economy. We expect to see these altered competitive dynamics in other countries, as well, as their IT investments grow.
Artificial Intelligence (AI)
Let's watch this short video by The Royal Society that explains what AI is and its role and impact in society.
In the tech-driven and ever-changing business landscape, successfully leveraging and implementing IT has become essential for maintaining competitive advantage and growth. One such solution is artificial intelligence (AI). AI (or machine intelligence) is intelligence demonstrated by machines - the ability of machines to operate like a human brain - to learn patterns, provide insights, and even predict future occurrences based on inputted data. For example, AI can give companies a competitive edge in marketing by providing insights into how to market, who to market to, and when. AI offers insights that are objective and data-driven. Amazon uses AI to follow users’ behavior on its website - what type of products they buy, how long they spend on a product page, and so on. The AI system quickly learns to generate recommendations tailored to each user's taste and preferences based on their activity. Another advantage of AI is in cybersecurity and fraud protection: AI technologies can use behavior data to identify and flag any activity that is out of the ordinary for a user (such as credit card use outside your home state). AI systems are very versatile in that they can handle all three types of decisions - structured, semi-structured, and unstructured.
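As a toy illustration only (this is not Amazon's actual system, and the purchase data are invented), pattern-based recommendation can be as simple as counting which items are bought together:

```python
# Toy illustration of pattern-based recommendation: suggest the item most
# often co-purchased with a given item across past shopping baskets.

from collections import Counter

purchase_history = [
    {"laptop", "mouse", "laptop bag"},
    {"laptop", "mouse"},
    {"phone", "phone case"},
    {"laptop", "mouse", "monitor"},
]

def recommend(item, history):
    co_purchased = Counter()
    for basket in history:
        if item in basket:
            # Count everything bought alongside the target item
            co_purchased.update(basket - {item})
    if not co_purchased:
        return None
    return co_purchased.most_common(1)[0][0]

print(recommend("laptop", purchase_history))  # mouse (co-purchased 3 times)
```

Production systems learn far richer patterns, but the principle is the same: behavior data in, tailored suggestions out.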
Global Competition
Many companies today are operating in a global environment. In addition to multinational corporations, many companies now export or import and face competition from products created in countries where labor and other costs are low or where natural resources are abundant. Electronic commerce facilitates global trading by enabling even small companies to buy from or sell to businesses in other countries. Amazon, Netflix, Apple, Samsung, LG, and many more have customers and suppliers worldwide.
7.06: Summary
Summary
Information systems can and have been used strategically for competitive advantage by many US companies, including Walmart, Amazon, Netflix, and Apple. Acquiring a competitive advantage is hard, and sustaining it can be just as difficult because of technology's innovative nature. Organizations that want to gain a market edge must understand how they want to differentiate themselves and then use all the elements of information systems (hardware, software, data, people, and process) to accomplish that differentiation.
IT is not a panacea; just purchasing and installing the latest technology will not, by itself, make a company more successful. Instead, the combination of the right technologies, employee training, infrastructure, and good management, together, will give a company the best chance of a positive result.
7.07: Study Questions
Study Questions
1. List the five forces in Porter’s Competitive forces model.
2. What does it mean for a business to have a competitive advantage?
3. What are the primary activities and support activities of the value chain?
4. What has been the overall impact of the Internet on industry profitability? Who has been the true winner?
5. List two examples of how Amazon.com used Porter’s five forces model to gain a competitive advantage.
6. Give an example of how the internet impacted Barnes and Noble's online(bn.com) profitability.
7. List and compare the different information systems. How are they the same? How are they different?
8. Give an example of a semi-structured decision and explain what inputs would be necessary to assist in making the decision.
9. What does a collaborative information system do?
10. How can IT play a role in competitive advantage, according to the 2008 article by Brynjolfsson and McAfee?
Exercises
1. Discuss the idea that an information system by itself can rarely provide a sustainable competitive advantage.
2. Review the Zoom website. What features of Zoom would contribute to good collaboration? What makes Zoom a better collaboration tool than something like Skype or Google Hangouts?
3. Think of a semi-structured decision that you make in your daily life and build your own DSS using a spreadsheet to help you make that decision.
4. Give an example of AI that you see used in your daily life. Describe one way it can be improved or combined with another information system to gain an advantage.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Define the term business process
• Identify different systems needed to support business processes in an organization
• Explain the value of an enterprise resource planning(ERP) system
• Explain how business process management and business process engineering work; and
• Understand how information technology combined with business processes can bring an organization competitive advantage.
08: Business Processes
In the last seven chapters, we have covered the first four components of an information system (IS). In this chapter, we will discuss the fifth component: process. People build information systems to solve problems. Have you ever wondered how organizations use IS to run their operations and help their people communicate and collaborate? That is the role of business processes in an organization. This chapter will answer those questions and describe how business processes can be used for strategic advantage.
8.02: What Is a Business Process
What Is a Business Process?
We have all heard the term process before, but what exactly does it mean? A business process is a series of related tasks that are completed in a stated sequence to accomplish a business goal. This set of ordered tasks can be simple or complicated; either way, the steps involved in completing these tasks can be documented or illustrated in a flow chart. If you have worked in a business setting, you have participated in a business process. Anything from a simple process for making a sandwich at Subway to building a space shuttle utilizes one or more business processes.
Processes are something that businesses go through every day to accomplish their mission. The better their processes, the more effective the business. Some businesses see their processes as a strategy for achieving competitive advantage. A process that uniquely achieves its goal can set a company apart. A process that eliminates costs can allow a company to lower its prices (or retain more profit).
Documenting a Process
Every day, we will conduct many processes without even thinking about them: getting ready for work, using an ATM, reading our email, etc. But as processes grow more complex, they need to be documented.
For businesses, it is essential to do this because it allows them to ensure control over how activities are undertaken in their organization. It also allows for standardization: McDonald’s has the same process for building a Big Mac in its restaurants.
The simplest way to document a process is to create a list. The list shows each step in the process; each step can be checked off upon completion. For example, a simple process, such as how to create an account on Amazon, might look like a checklist such as:
• Go to www.amazon.com.
• Click on “Hello Sign in Account” on the top right of the screen
• Select “start here” after the question “new customers?”
• Select “Create your Amazon account.”
• Enter your name, email, password
• Select “Create Your Amazon account.”
• Check your email to verify your new Amazon account
For processes that are not so straightforward, documenting the process as a checklist may not be sufficient. Some processes may need to be documented as paths to be followed depending on certain conditions being met. For example, here is the process for determining if an article for a term needs to be added to Wikipedia:
• Search Wikipedia to determine if the term already exists.
• If the term is found, then an article is already written, so you must think of another term. Repeat step 1.
• If the term is not found, then look to see if there is a related term.
• If there is a related term, then create a redirect.
• If there is not a related term, then create a new article.
This procedure is relatively simple, but because it has some decision points, it is more difficult to track with a simple list. In these cases, it may make more sense to use a diagram that illustrates both the steps and the decision points.
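The decision points in the Wikipedia example map naturally onto conditional logic. The sketch below encodes the process above; the `search` helper is a hypothetical stand-in for a real Wikipedia lookup:

```python
def wikipedia_article_action(term, search):
    """Decide what to do with a candidate term, following the
    decision points in the process above. `search` is a stand-in
    for a Wikipedia lookup (hypothetical helper): it returns
    "exact" if the term already has an article, "related" if a
    related term exists, or None otherwise."""
    result = search(term)
    if result == "exact":
        return "think of another term"   # article exists; repeat step 1
    if result == "related":
        return "create a redirect"
    return "create a new article"

# A toy lookup standing in for a real Wikipedia search:
existing = {"python": "exact", "cpython internals": "related"}
lookup = lambda term: existing.get(term.lower())

print(wikipedia_article_action("Python", lookup))             # think of another term
print(wikipedia_article_action("CPython internals", lookup))  # create a redirect
print(wikipedia_article_action("Frobnication", lookup))       # create a new article
```

Each `if` branch corresponds to one diamond in a flowchart of this process, which is why diagrams and conditional code are often interchangeable ways of documenting the same decision logic.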
Documenting Business Processes
To standardize a process, organizations need to document their processes and continuously keep that documentation accurate. As processes change and improve, it is important to know which version of a process is the most recent. It is also important to manage the documentation so that it can be easily updated and changes can be tracked.
Managing process documentation is made easier by software tools such as document management, project management, or Business Process Modeling (BPM) software (discussed later in this chapter). These tools provide standardized notations and common capabilities such as:
• Versions and timestamps: BPM will keep multiple versions of documents. The most recent version of a document is easy to identify and will be served up by default.
• Approvals and workflows: When a process needs to be changed, the system will manage both access to the documents for editing and the document's routing for approvals.
• Communication: When a process changes, those who implement the process need to be aware of the changes. The system will notify the appropriate people when a change to a document is approved.
• Techniques to model the processes: Standard graphical representations such as a flow chart, Gantt chart, PERT diagram, or Unified Modeling Language can be used, which we will touch upon in Chapter 10.
Of course, these systems are not only used for managing business process documentation, and they have continued to evolve. Many other types of documents are managed in these systems, such as legal documents or design documents.
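The versioning and approval ideas above can be made concrete with a small sketch. This is not the API of any real document-management product; the class and method names are invented for illustration:

```python
from datetime import datetime, timezone

class ManagedDocument:
    """Toy sketch of BPM-style document management: every edit creates
    a timestamped version, and the most recent *approved* version is
    served by default. Names are illustrative, not a real product's API."""

    def __init__(self, name, body):
        self.name = name
        self.versions = []  # list of (timestamp, body, approved)
        self.save(body, approved=True)

    def save(self, body, approved=False):
        """Record a new version; drafts start unapproved."""
        self.versions.append((datetime.now(timezone.utc), body, approved))

    def approve_latest(self):
        """Approve the newest version. A real system would also notify
        everyone who implements the process of the change."""
        ts, body, _ = self.versions[-1]
        self.versions[-1] = (ts, body, True)

    def current(self):
        """Return the body of the most recent approved version."""
        for _, body, approved in reversed(self.versions):
            if approved:
                return body

doc = ManagedDocument("Returns policy", "Accept all returns.")
doc.save("Accept returns within 14 days.")  # draft, awaiting approval
print(doc.current())   # Accept all returns.
doc.approve_latest()
print(doc.current())   # Accept returns within 14 days.
```

Note how the unapproved draft is invisible to readers until the approval workflow completes, mirroring the "approvals and workflows" bullet above.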
Enterprise Resource Planning (ERP) Systems
An ERP system is a software application with a centralized database that can be used to run an entire company.
Let’s look at an ERP and associated modules as illustrated in Fig 8.2.
• It is a software application: The system is a software application, which means that it has been developed with specific logic and rules. It has to be installed and configured to work specifically for an individual organization.
• It has a centralized database: The inner circle of Fig 8.2 indicates that all data in an ERP system is stored in a single, central database. This centralization is key to the success of an ERP – data entered in one part of the company can be immediately available to other parts of the company. Examples of types of data are shown: business intelligence, eCommerce, assets management, among others.
• It can be used to run an entire company: An ERP can be used to manage an entire organization’s operations, as shown in the outermost circle of Fig 8.2. Each function is supported by a specific ERP module, reading clockwise from the top: Procurement, Production, Distribution, Accounting, Human Resources, Corporate Performance and Governance, Customer Services, Sales. Companies can purchase some or all available ERP modules representing different organizational functions, such as finance, manufacturing, and sales, to support their continued growth.
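The "single centralized database" idea can be sketched in a few lines. Module and field names below are invented for illustration; the point is only that when one module writes data, every other module sees it immediately because they all share one store:

```python
# Minimal sketch of an ERP's centralized database: every module reads
# and writes the same shared data store, so a sale recorded by the
# Sales module is immediately visible to Accounting and Distribution.

database = {"inventory": {"shirt": 100}, "ledger": [], "shipments": []}

def sales_record_order(item, qty, price):
    """The Sales module records an order; the updates land in the
    shared database, where other modules pick them up."""
    database["inventory"][item] -= qty                    # stock (Distribution)
    database["ledger"].append(("revenue", qty * price))   # visible to Accounting
    database["shipments"].append((item, qty))             # visible to Distribution

sales_record_order("shirt", 3, 25.0)
print(database["inventory"]["shirt"])  # 97
print(database["ledger"])              # [('revenue', 75.0)]
print(database["shipments"])           # [('shirt', 3)]
```

Contrast this with separate departmental systems, where the same order would have to be re-entered (or synchronized) into accounting and shipping systems, with the delays and errors that implies.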
When an ERP vendor designs a module, it has to implement the associated business processes' rules. A selling point of an ERP system is that it has best practices built right into it. In other words, when an organization implements an ERP, it also gets improved best practices as part of the deal.
For many organizations, implementing an ERP system is an excellent opportunity to improve their business practices and upgrade their software simultaneously. But for others, an ERP brings them a challenge: Is the process embedded in the ERP really better than the process they are currently utilizing? If they implement this ERP, and it happens to be the same one that all of their competitors have, will they become more like them, making it much more difficult to differentiate themselves?
This has been one of the criticisms of ERP systems: they commoditize business processes, driving all businesses to use the same processes, thereby losing their uniqueness. The good news is that ERP systems also have the capability to be configured with custom processes. For organizations that want to continue using their own processes or even design new ones, ERP systems offer ways to support this through customizations.
But there is a drawback to customizing an ERP system: organizations have to maintain the changes themselves. Whenever an update to the ERP system comes out, any organization that has created a custom process will be required to add that change to their ERP. This will require someone to maintain a listing of these changes and retest the system every time an upgrade is made. Organizations will have to wrestle with this decision: When should they go ahead and accept the best-practice processes built into the ERP system, and when should they spend the resources to develop their own processes? It makes the most sense only to customize those processes that are critical to the competitive advantage of the company.
Some of the best-known ERP vendors are SAP, Microsoft, and Oracle.
Adopting an ERP means adopting standard business processes across the entire company. The benefits are many, but so are the risks: organizations can spend millions of dollars and several years fully implementing an ERP. Hence, adopting an ERP is a strategic decision about how a company wants to run its organization based on a set of business rules and processes that deliver competitive advantage.
Business Process Management (BPM)
Organizations that are serious about improving their business processes will also create structures to manage those processes. BPM can be thought of as an intentional effort to plan, document, implement, and distribute an organization’s business processes with information technology support.
BPM is more than just automating some simple steps. While automation can make a business more efficient, it cannot provide a competitive advantage. On the other hand, BPM can be an integral part of creating that advantage, as we saw in Chapter 7.
Not all of an organization’s processes should be managed this way. An organization should look for processes essential to the functioning of the business and those that may be used to bring a competitive advantage. The best processes to examine are those that involve employees from multiple departments, require decision-making that cannot be easily automated, and change based on circumstances.
Let’s examine an example. Suppose a large clothing retailer is looking to gain a competitive advantage through superior customer service. As part of this, they create a task force to develop a state-of-the-art returns policy that allows customers to return any clothing article, no questions asked. The organization also decides that to protect the competitive advantage that this returns policy will bring, they will develop their own customization to their ERP system to implement this returns policy. As they prepare to roll out the system, they invest in training for all of their customer-service employees, showing them how to use the new system and process returns. Once the updated returns process is implemented, the organization will measure several key indicators about returns that will allow them to adjust the policy as needed. For example, if they find that many customers are returning their high-end clothing after wearing them once, they could implement a change to the process that limits – to, say, fourteen days – the time after the original purchase that an item can be returned. As changes to the returns policy are made, the changes are rolled out via internal communications, and updates to the system's returns processing are made. In our example, the system would no longer allow an item to be returned after fourteen days without an approved reason.
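The fourteen-day rule from this hypothetical example can be encoded as a small policy function. The window is a parameter precisely so it can be adjusted as the measured metrics dictate:

```python
from datetime import date

RETURN_WINDOW_DAYS = 14  # policy parameter; adjusted as metrics dictate

def return_allowed(purchase_date, today, approved_reason=None):
    """Encode the (hypothetical) fourteen-day returns rule from the
    example: returns inside the window are always accepted; outside
    it, an approved reason is required."""
    age = (today - purchase_date).days
    if age <= RETURN_WINDOW_DAYS:
        return True
    return approved_reason is not None

print(return_allowed(date(2024, 3, 1), date(2024, 3, 10)))              # True
print(return_allowed(date(2024, 3, 1), date(2024, 4, 1)))               # False
print(return_allowed(date(2024, 3, 1), date(2024, 4, 1), "defective"))  # True
```

Because the rule lives in the system rather than in each employee's head, a policy change (say, moving the window to thirty days) takes effect consistently across every store the moment the parameter is updated.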
If done properly, business process management will provide several key benefits to an organization, contributing to competitive advantage. These benefits include:
• Empowering employees: When a business process is designed correctly and supported with information technology, employees will implement it on their own authority. In our returns policy example, an employee would be able to accept returns made before fourteen days or use the system to make determinations on what returns would be allowed after fourteen days.
• Built-in reporting: By building measurement into the programming, the organization can keep up to date on key metrics regarding its processes. In our example, these metrics can be used to improve the returns process and, ideally, reduce returns.
• Enforcing best practices: As an organization implements processes supported by information systems, it can implement the best practices for that business process class. In our example, the organization may require that all customers returning a product without a receipt show a legal ID. This requirement can be built into the system so that the return will not be processed unless a valid ID number is entered.
• Enforcing consistency: By creating a process and enforcing it with information technology, it is possible to create consistency across the organization. In our example, all stores in the retail chain can enforce the same returns policy. And if the returns policy changes, the change can be instantly enforced across the entire chain.
Business Process Re-engineering (BPR)
As organizations look to manage their processes to gain a competitive advantage, they also need to understand that their existing ways of doing things may not be the most effective or efficient. A process developed in the 1950s will not be better just because it is now supported by technology.
In 1990, Michael Hammer published “Reengineering Work: Don’t Automate, Obliterate,” which argues that simply automating a bad process does not make it better. Instead, companies should “blow up” their existing processes and develop new processes that take advantage of new technologies and concepts. Rather than automating outdated processes that do not add value, companies should use modern information technology to radically re-engineer their processes and achieve significant performance improvements.
Business process re-engineering is not just taking an existing process and automating it. BPR means fully understanding the goals of a process and then redesigning it from the ground up to achieve dramatic improvements in productivity and quality. But this is easier said than done: most of us think about making small, local improvements to a process, while a complete redesign requires thinking on a larger scale.
Hammer provides some guidelines for how to go about business process re-engineering. You can read the full article at HBR (accessible with a free account at the time of this writing). A summary of the guidelines is below:
• Organize around outcomes, not tasks. Design the process so that, if possible, one person performs all the steps. Rather than having many specialists each repeat a single step, one person stays involved in the process from start to finish. For example, Mutual Benefit Life used one person (a case manager) to perform all tasks required to complete an insurance application, from paperwork and medical checks to risk assessment and policy pricing.
• Have those who use the outcomes of the process perform the process. Using information technology, many simple tasks are now automated to empower the person who needs the process's outcome to perform it. Hammer's example is purchasing: instead of having every department in the company use a purchasing department to order supplies, have the supplies ordered directly by those who need the supplies using an information system.
• Subsume information-processing work into the real work that produces the information. When one part of the company creates information (like sales information or payment information), it should be processed by that department. There is no need for one part of the company to process information created in another part of the company. An example of this is Ford's redesigned accounts payable process where receiving processes the information about goods received rather than sending it to accounts payable.
• Treat geographically dispersed resources as though they were centralized. With the communications technologies in place today, it becomes easier than ever to not worry about physical location. A multinational organization does not need separate support departments (such as IT, purchasing, etc.) for each location.
• Link parallel activities instead of integrating their results. Departments that work in parallel should share data and communicate with each other during their activities instead of waiting until each group is done and then comparing notes.
• Put the decision points where the work is performed, and build controls into the process. The people who do the work should have decision-making authority, and the process itself should have built-in controls using information technology. The workers become self-managing and self-controlling, and the manager’s role changes to supporter and facilitator.
• Capture information once at the source. Requiring information to be entered more than once causes delays and errors. With information technology, an organization can capture it once and then make it available whenever needed.
These principles may seem like common sense today, but in 1990 they took the business world by storm. Ford’s and Mutual Benefit Life’s successful re-engineering of core business processes have become textbook examples of business process re-engineering.
Organizations can improve their business processes by many orders of magnitude without adding new employees, simply by changing how they do things (see sidebar).
Unfortunately, business process reengineering got a bad name in many organizations. This was because it was used as an excuse for cost-cutting that really had nothing to do with BPR. For example, many companies used it as an excuse for laying off part of their workforce. Today, however, many BPR principles have been integrated into businesses and are considered part of good business process management.
Sidebar: Re-engineering the College Bookstore
The process of purchasing the correct textbooks on time for college classes has always been problematic. And now, with online bookstores such as Amazon and Chegg competing directly with the college bookstore for students’ purchases, the college bookstore is under pressure to justify its existence.
But college bookstores have one big advantage over their competitors: they have access to students’ data. In other words, once a student has registered for classes, the bookstore knows exactly what books that student will need for the upcoming term. To leverage this advantage and take advantage of new technologies, the bookstore wants to implement a new process that will make purchasing books through the bookstore advantageous to students. Though they may not compete on price, they can provide other advantages, such as reducing the time it takes to find the books and guaranteeing that the book is the correct one for the class. To do this, the bookstore will need to undertake a process redesign.
The goal of the process redesign is simple: capture a higher percentage of students as customers of the bookstore. The before and after processes are shown in Figure \(3\).
The Before process steps are:
1. Students get a booklist from each instructor.
2. Students go to the bookstore to search for the books on the list.
3. If the books are available, students purchase them.
4. If the books are not available, students order the missing books.
5. Students return to purchase the missing books when they arrive.
6. Students repeat step 3 for any books not yet purchased.
After diagramming the existing process and meeting with student focus groups, the bookstore develops a new process. In the newly redesigned process:
1. The bookstore utilizes information technology to reduce the amount of work the students need to do to get their books: it sends the students an email with a list of all the books required for their upcoming classes, along with purchase options (new, used, or rental).
2. By clicking a link in this email, the students can log into the bookstore, confirm their books, and pay for their books online.
3. The bookstore will then deliver the books to the students.
The re-engineered process delivers the business goal of capturing a larger percentage of students as customers, using technology to provide a convenient, value-added service that saves students time.
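Step 1 of the redesigned process is essentially a join of two data sets the bookstore already has: each student's course registrations and each course's booklist. The data and field names below are invented for illustration:

```python
# Sketch of building the booklist email from registration data.
# Course names, books, and the email address are all made up.

booklists = {
    "HIST 101": ["A World History, 3rd ed."],
    "CS 120": ["Intro to Python", "Discrete Math Primer"],
}
registrations = {"student@college.edu": ["HIST 101", "CS 120"]}
options = ["new", "used", "rental"]

def booklist_email(student):
    """Assemble the per-student email body: one line per required book,
    joined from the student's registrations and the course booklists."""
    lines = [f"Books for your upcoming term ({', '.join(options)} available):"]
    for course in registrations[student]:
        for book in booklists[course]:
            lines.append(f"  {course}: {book}")
    return "\n".join(lines)

print(booklist_email("student@college.edu"))
```

This is the bookstore's structural advantage in code form: online competitors do not hold the registration data, so they cannot perform this join on the student's behalf.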
ISO Certification
Many organizations now claim that they are using best practices when it comes to business processes. To set themselves apart and prove to their customers (and potential customers) that they are indeed doing this, these organizations seek out an ISO 9000 certification.
ISO stands for the International Organization for Standardization, a global network of national standards bodies. This body defines quality standards that organizations can implement to show that they are, indeed, managing business processes in an effective way. The ISO 9000 certification is focused on quality.
To receive ISO certification, an organization must be audited and found to meet specific criteria. In its most simple form, the auditors perform the following review:
• Tell me what you do (describe the business process).
• Show me where it says that (reference the process documentation).
• Prove that this is what happened (exhibit evidence in documented records).
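The three audit questions above amount to checking that every claimed step is both documented and evidenced by records. A toy illustration (all data invented):

```python
# Toy illustration of the ISO audit logic: for every step an
# organization claims to perform, there must be documentation that
# says so and a record proving it happened.

claimed_steps = ["inspect incoming parts", "log test results"]
documented = {"inspect incoming parts", "log test results"}
records = {"inspect incoming parts"}

audit = {
    step: {
        "documented": step in documented,  # "show me where it says that"
        "evidenced": step in records,      # "prove that this is what happened"
    }
    for step in claimed_steps
}

for step, result in audit.items():
    print(step, result)
# inspect incoming parts {'documented': True, 'evidenced': True}
# log test results {'documented': True, 'evidenced': False}
```

Here the second step would be flagged by the auditors: the process is documented, but there is no record proving it was actually performed.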
Over the years, this certification has evolved, and many branches of the certification now exist. The ISO 9000 family of standards addresses various aspects of quality management. ISO certification is one way for an organization to separate itself from others regarding quality and service and to demonstrate that it meets customer expectations.
8.03: Summary
Summary
The advent of information technologies has had a huge impact on how organizations design, implement and support business processes. From document management to project management to ERP systems, information systems are tied into organizational processes. Using business process management, organizations can empower employees and leverage their processes for competitive advantage. Using business process reengineering, organizations can vastly improve their effectiveness and the quality of their products and services. Integrating information technology with business processes is one-way information systems can bring an organization a lasting competitive advantage.
8.04: Study Questions
Study Questions
1. What does the term business process mean?
2. What are three examples of business processes (from a job you have had or an organization you have observed)?
3. What is the value of documenting a business process?
4. What is an ERP system? How does an ERP system enforce best practices for an organization?
5. What is one of the criticisms of ERP systems?
6. What is business process reengineering? How is it different from incrementally improving a process?
7. Why did BPR get a bad name?
8. List the guidelines for redesigning a business process.
9. What is business process management? What role does it play in allowing a company to differentiate itself?
10. What does ISO certification signify?
Exercises
1. Think of a business process that you have had to perform in the past. How would you document this process? Would a diagram make more sense than a checklist? Document the process both as a checklist and as a diagram.
2. Review the return policies at your favorite retailer and then answer this question: What information systems do you think need to be in place to support their return policy?
3. If you were implementing an ERP system, in which cases would you be more inclined to modify the ERP to match your business processes? What are the drawbacks of doing this?
4. Which ERP is the best? Do some original research and compare three leading ERP systems to each other. Write a two- to three-page paper that compares their features.
5. Research a company that chose to implement an ERP. Write a report describing the implementation.
6. Research a failed ERP implementation. Write a report describing why it failed.
7. Research and write a report on how a company can obtain an ISO quality management certification. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business/02%3A_Information_Systems_for_Strategic_Advantage/08%3A_Business_Processes/8.01%3A_Introduction.txt |
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Describe each of the different roles that people play in the design, development, and use of information systems;
• Understand the different career paths available to those who work with information systems;
• Explain the importance of where the information-systems function is placed in an organization;
• Describe the different types of users of information systems.
This chapter will provide an overview of the different types of people involved in information systems: those who create information systems, those who operate and administer them, those who manage or support them, those who use them, and the job outlook for IT careers.
09: The People in Information System
In this text's opening chapters, we focused on the technology behind information systems: hardware, software, data, and networking. In the last chapter, we discussed business processes and the key role they can play in a business's success. In this chapter, we will be discussing the last component of an information system: people.
People are involved in information systems in just about every way you can think of: people imagine information systems, develop information systems, support information systems, and, perhaps most importantly, people use information systems.
9.02: The Creators of Information Systems
The first group of people we are going to look at plays a role in designing, developing, and building information systems. These people are generally very technical and have a background in programming and mathematics. Many who work in creating information systems have at least a bachelor’s degree in computer science or information systems, though that is not necessarily a requirement. We will be looking at the process of creating information systems in more detail in chapter 10.
Systems Analyst
The systems analyst's role is unique in that it straddles the divide between identifying business needs and imagining a new or redesigned computer-based system to fulfill those needs. This individual will work with a person, team, or department with business requirements and identify the specific details of a system that needs to be built. Generally, this will require the analyst to understand the business itself, the business processes involved, and the ability to document them well. The analyst will identify the different stakeholders in the system and work to involve the appropriate individuals.
Once the requirements are determined, the analyst will begin translating these requirements into an information-systems design. A good analyst will understand what different technological solutions will work and provide several different alternatives to the requester, based on the company’s budgetary constraints, technology constraints, and culture. Once the solution is selected, the analyst will create a detailed document describing the new system. This new document will require that the analyst understand how to speak in systems developers' technical language.
A systems analyst generally is not the one who does the actual development of the information system. The design document created by the systems analyst provides the detail needed to create the system and is handed off to a programmer (or team of programmers) to do the actual creation of the system. In some cases, however, a systems analyst may create the system that he or she designed. This person is sometimes referred to as a programmer-analyst.
In other cases, the system may be assembled from off-the-shelf components by a person called a systems integrator. This is a specific type of systems analyst that understands how to get different software packages to work with each other.
To become a systems analyst, you should have a background in business and systems design. You also must have strong communication and interpersonal skills plus an understanding of business standards and new technologies. Many analysts first worked as programmers and/or had experience in the business before becoming systems analysts. The best systems analysts have excellent analytical skills and are creative problem solvers.
Computer Programmer (or Software developer)
A computer programmer or software developer is responsible for writing the code that makes up computer software. They write, test, debug and create documentation for computer programs. In the case of systems development, programmers generally attempt to fulfill the design specifications given to them by a systems analyst. Many different programming styles exist: a programmer may work alone for long stretches of time or may work in a team with other programmers. A programmer needs to understand complex processes and the intricacies of one or more programming languages. They are usually referred to by the programming language they most often use: Java programmer or Python programmer. Good programmers are very proficient in mathematics and excel at logical thinking.
Computer Engineer
Computer engineers design the computing devices that we use every day. There are many types of computer engineers who work on various types of devices and systems. Some of the more prominent engineering jobs are as follows:
• Hardware engineer: A hardware engineer designs hardware components, such as microprocessors. A hardware engineer is often at the cutting edge of computing technology, creating something brand new. Other times, the hardware engineer’s job is to engineer an existing component to work faster or use less power. Many times, a hardware engineer’s job is to write code to create a program that will be implemented directly on a computer chip.
• Software engineer: Software engineers do not actually design devices; instead, they create new programming languages and operating systems, working at the lowest hardware levels to develop new kinds of software to run on the hardware.
• Systems engineer: A systems engineer takes the components designed by other engineers and makes them all work together. For example, to build a computer, the motherboard, processor, memory, and hard disk all have to work together. A systems engineer has experience with many different hardware and software types and knows how to integrate them to create new functionality.
• Network engineer: A network engineer’s job is to understand the networking requirements and then design a communications system to meet those needs, using the networking hardware and software available.
There are many different types of computer engineers, and often the job descriptions overlap. While many may call themselves engineers based on a company job title, there is also a professional designation of “professional engineer,” which has specific requirements behind it. In the US, each state has its own set of requirements for using this title, as do different countries around the world. Most often, it involves a professional licensing exam. | textbooks/workforce/Information_Technology/Information_Systems/Information_Systems_for_Business/02%3A_Information_Systems_for_Strategic_Advantage/09%3A_The_People_in_Information_System/9.01%3A_Introduction.txt |
Another group of information-systems professionals is involved in the day-to-day operations and administration of IT. These people must keep the systems running and up-to-date so that the rest of the organization can make the most effective use of these resources.
Computer Operator
A computer operator is a person who keeps large computers running. This person’s job is to oversee the mainframe computers and data centers in organizations. Some of their duties include keeping the operating systems up to date, ensuring available memory and disk storage, and overseeing the computer's physical environment. Since mainframe computers have increasingly been replaced with servers, storage management systems, and other platforms, computer operators’ jobs have grown broader and include working with these specialized systems.
Database Administrator
A database administrator (DBA) is the person who manages the databases for an organization. This person operates and maintains databases, including database recovery and backup procedures, used as part of applications or the data warehouse. They are responsible for securing the data and ensuring that only users who are approved to access the data can do so. The DBA also consults with systems analysts and programmers on projects requiring access to or creating databases.
• Database Architect: Database architects design and create secure databases that meet the needs of an organization. They work closely with software designers, design analysts, and others to create comprehensive databases that may be used by hundreds, if not thousands, of people. Most organizations do not staff a separate database architect position. Instead, they require DBAs to work on both new and established database projects.
• Database Analyst: Some organizations create a separate position, database analyst, who looks at databases from a higher level. This person analyzes database design and the changing needs of an organization, recommends additions for new projects, and designs the tables and relationships.
• Oracle DBA: A DBA who specializes in Oracle databases. Oracle DBAs handle capacity planning, evaluate database server hardware, and manage all aspects of an Oracle database, including installation, configuration, design, and data migration.
Help-Desk/Support Analyst
Most midsize to large organizations have their own information-technology help desk, which provides some of the most visible IT roles. The help desk is the first line of support for computer users in the company. Computer users who are having problems or need information can contact the help desk for assistance. Often, a help-desk worker is a junior-level employee who does not necessarily know how to answer all of the questions that come his or her way. In these cases, help-desk analysts work with senior-level support analysts or use a computer knowledge base to investigate the problem at hand. The help desk is a great place to break into IT because it exposes you to all of the company's different technologies. A successful help-desk analyst has conflict-resolution and active-listening skills, problem-solving abilities, and a wide range of technical knowledge across hardware, software, and networks.
Trainer
A computer trainer conducts classes to teach people specific computer skills. For example, if a new ERP system is installed in an organization, one part of the implementation process is to teach all users how to use the new system. A trainer may work for a software company and be contracted to come in to conduct classes when needed; a trainer may work for a company that offers regular training sessions, or a trainer may be employed full time for an organization to handle all of their computer instruction needs. To be successful as a trainer, you need to be able to communicate technical concepts well and have a lot of patience!
Quality Support Engineers
A quality engineer establishes and maintains a company’s quality standards and tests systems to ensure efficiency, reliability, and performance. Quality engineers are also responsible for creating documentation that reports issues and errors relating to computer and software systems.
The management of information-systems functions is critical to the success of information systems within the organization. Here are some of the jobs associated with the management of information systems.
Chief Information Officer (CIO)
The CIO, or chief information officer, is the head of the information-systems function. This person aligns the plans and operations of the information systems with the strategic goals of the organization. This includes tasks such as budgeting, strategic planning, and personnel decisions for the information-systems function. This is a high-profile position as the CIO is also the face of the organization's IT department. This involves working with senior leaders in all parts of the organization to ensure good communication and planning.
Interestingly, the CIO position does not necessarily require a lot of technical expertise. While helpful, it is more important for this person to have good management and people skills and understand the business. Many organizations do not have someone with the CIO's title; instead, the head of the information-systems function is called vice president of information systems or director of information systems.
Functional Manager
As an information-systems organization becomes larger, many of the different functions are grouped and led by a manager. These functional managers report to the CIO and manage the employees specific to their function. For example, in a large organization, a group of systems analysts reports to a systems-analysis function manager. For more insight into how this might look, see the discussion later in the chapter of how information systems are organized.
ERP Management
Organizations using an ERP require one or more individuals to manage these systems. These people make sure that the ERP system is completely up to date, work to implement any changes to the ERP needed, and consult with various user departments on needed reports or data extracts.
Project Managers
Information-systems projects are notorious for going over budget and being delivered late. In many cases, a failed IT project can spell doom for a company. A project manager is responsible for keeping projects on time and budget. This person works with the project stakeholders to keep the team organized and communicates the status of the project to management. A project manager does not have authority over the project team; instead, the project manager coordinates schedules and resources to maximize the project outcomes. A project manager must be a good communicator and an extremely organized person. A project manager should also have good people skills. Many organizations require their project managers to become certified as project management professionals (PMP).
Information-Security Officer
An information security officer is in charge of setting information-security policies for an organization and then overseeing those policies' implementation. This person may have one or more people reporting to them as part of the information security team. As information has become a critical asset, this position has become highly valued. The information-security officer must ensure that the organization’s information remains secure from both internal and external threats.
9.05: Emerging Roles
As technology evolves, many new roles are becoming more common as other roles fade. For example, as we enter the age of “big data,” we see the need for more data analysts and business-intelligence specialists. Many companies are now hiring social media experts and mobile-technology specialists. The increased use of cloud computing and virtual-machine technologies also is breeding demand for expertise in those areas.
• Cloud system engineer: In the past, companies would typically store their data in large physical databases or even hire database firms, but today, they turn to cloud storage as a low-cost and effective means of storing data. This is where cloud engineers come in. They are responsible for the design, planning, management, maintenance, and support of an organization's cloud computing environment.
• Cyber Security Analyst (or engineer): As new technologies emerge, so does the number of security threats online. Cybersecurity is a growing field that focuses on protecting organizations from digital attacks and keeping their information and networks safe. The following are examples of some of the many cybersecurity roles:
• Security Administrator: These professionals serve in high-level roles, overseeing the IT security efforts of their organization. They create policies and procedures, identify weak areas of networks, install firewalls, and respond to security breaches.
• Security Architect: Security architects design, plan, and supervise systems that thwart potential computer security threats. They must find the strengths and weaknesses of their organizations' computer systems, often developing new security architectures.
• Security Analyst: Organizations employ a security analyst to protect computer and networking systems from cyber-attacks and hackers and keep information and networks safe.
• AI/Machine Learning Engineer: These engineers develop and maintain AI (artificial intelligence) machines and systems that have the ability to learn and utilize existing knowledge. As more and more industries turn towards automating certain aspects of the workforce, AI engineers will be in high demand.
• Computer Vision Engineer: Computer vision engineers create and use computer vision and machine-learning algorithms that acquire, process, and analyze digital images, videos, and other visual data. Their work is closely linked to AR (augmented reality) and VR (virtual reality). As we see the rise of technologies such as self-driving vehicles, demand for these skills will continue to grow.
• Big Data Engineer: Big Data Engineers create and manage a company's Big Data infrastructure, such as SQL engines and tools. A big data engineer installs continuous pipelines that run to and from huge pools of filtered information from which data scientists can pull relevant data sets for their analyses.
• Health Information Technician: Health information technicians use specialized computer programs and administrative techniques to ensure that patients' electronic health records are complete, accurate, accessible, and secure.
• Mobile Application Developers: Mobile app developers create software for mobile devices. They write programs inside a mobile development environment using languages such as Objective-C, C++, or Java. A mobile app developer will typically choose an OS, such as Google’s Android or Apple's iOS, and develop apps for that environment.
These job descriptions do not represent all possible jobs within an information system organization. Larger organizations will have more specialized roles; smaller organizations may combine some of these roles. Many of these roles may exist outside of a traditional information-systems organization, as we will discuss below.
Working with information systems can be a rewarding career choice. Whether you want to be involved in very technical jobs (programmer, database administrator) or want to be involved in working with people (systems analyst, trainer), there are many different career paths available.
Often, those in technical jobs who want career advancement find themselves in a dilemma: do they want to continue doing technical work, where their advancement options are sometimes limited, or do they want to become managers of other employees and put themselves on a management career track? In many cases, those proficient in technical skills are not gifted with managerial skills. Some organizations, especially those that highly value their technically skilled employees, will create a technical track that exists in parallel to the management track to retain employees who are contributing to the organization. Today, most large organizations have dual career paths: managerial and technical/professional.
Then there are people from other fields who want to get into IT. For example, a writer may want to become a technical writer, or a salesperson may want to become a quality tester.
People have many different reasons for transitioning into the IT industry, and the timing couldn’t be better. The IT industry is facing a massive shortage of workers, both domestic and international, and there are many employment opportunities at every level.
Sidebar: Are Certifications Worth Pursuing?
As technology is becoming more important to businesses, hiring employees with technical skills is becoming critical. But how can an organization ensure that the person they are hiring has the necessary skills? These days, many organizations are including technical certifications as a prerequisite for getting hired.
Certifications are designations given by a certifying body to show that someone has a specific level of knowledge in a specific technology. This certifying body is often the vendor of the product itself, though independent certifying organizations also exist. Many of these organizations offer certification tracks that require a beginning certificate as a prerequisite to getting more advanced certificates. To get a certificate, you generally attend one or more training classes and then take one or more certification exams. Passing the exams with a certain score will qualify you for a certificate. In most cases, these classes and certificates are not free and, in fact, can run into the thousands of dollars. Certifications in the highest demand include those in software, networking, security (such as SANS), and databases (such as Oracle and SQL).
For many working in IT (or thinking about an IT career), determining whether to pursue one or more of these certifications is an important question. For many jobs, such as those involving networking or security, an employer will require a certificate to determine which potential employees have a basic level of skill. For those already in an IT career, a more advanced certificate may lead to a promotion. In other cases, however, experience with a certain technology may negate the need for certification. For those wondering about the importance of certification, the best approach is to talk to potential employers and those already working in the field to determine the best choice. Perusing different job websites to see the trend of hot IT jobs and their associated requirements is a good place to start.
Organizing the Information-Systems Function
In the early years of computing, the information-systems function (generally called data processing) was placed in the organization's finance or accounting department. As computing became more important, a separate information-systems function was formed. However, it was still generally placed under the CFO and considered an administrative function of the company. In the 1980s and 1990s, when companies began networking internally and then linking up to the Internet, the information-systems function was combined with the telecommunications function and designated the information technology (IT) department. As information technology's role continued to grow, especially with the increased risks around security and privacy, its place in the organization also moved up the ladder. In many organizations today, the head of IT (the CIO) reports directly to the CEO or COO, though there are still places where IT reports to a VP of finance.
IT is often organized into these functions:
• IT support (call support)
• Security
• Database
• Network
• Applications to support end-user apps (e.g., Office) or enterprise apps (ERP, MRP).
The size of each function varies depending on the level of outsourcing a company decides to do.
Not all IT-related tasks are done directly by IT staff. Some tasks may be done by other groups in a firm such as Marketing or Manufacturing. For example, marketing or engineering groups may choose their own vendor to support and provide cloud services for the company's products or services. Collaboration with IT is critical to avoid creating confusion for end-user support and training. Some IT tasks can also be outsourced to external partners.
Outsourcing
Outsourcing, the use of third-party service providers to handle some of your business processes, became a popular business strategy in the 1980s and 1990s as a way to combat rising labor costs and allow firms to focus on their core functions. For example, payroll was one of the first functions that firms outsourced. With the Internet boom and bust in 2000-2001 and the rise of the global marketplace, outsourcing is now a common business strategy for companies of all sizes.
If an organization needs a specific skill for a limited period of time, instead of training an existing employee or hiring someone new, the job can be outsourced. Outsourcing can be used in many different situations within the information-systems function, such as designing and creating a new website or the upgrade of an ERP system. Some organizations see outsourcing as a cost-cutting move, contracting out a whole group or department. In some cases, outsourcing has become a necessity - the only feasible way to grow your business, launch a product, or manage operations is by using an outside vendor for certain tasks.
Job Outlook
IT jobs are projected to grow due to the continued increase in cloud computing, cybersecurity concerns, and the expansion of firms, from both computing and non-computing industries, adopting new technologies and digital platforms.
Jobs for computer and information systems managers are projected to grow 10% from 2019 to 2029, while jobs for network and computer systems administrators are projected to grow 4% and jobs for computer support specialists 8%.
Information-Systems Users – Types of Users
Besides the people who work to create, administer, and manage information systems, there is one more significant group of people: the users of information systems. This group represents a considerable percentage of the people involved. If the user cannot successfully learn and use an information system, the system is doomed to failure.
One tool used to understand how users will adopt a new technology comes from a 1962 study by Everett Rogers. In his book Diffusion of Innovations,1 Rogers explains how new ideas and technology spread via communication channels over time. Innovations are initially perceived as uncertain and even risky. To overcome this uncertainty, most people seek out others like themselves who have already adopted the new idea or technology. The diffusion process thus consists of successive groups of consumers adopting the new technology (shown in blue in the graph below); the adoption rate starts slowly and then increases dramatically once adoption reaches a certain point, until market share (the yellow curve) reaches its saturation level and becomes self-sustaining.
Figure 9.4: Technology adoption user types
Image by Rogers Everett, licensed under Public domain, via Wikimedia Commons
Rogers identified five specific types of technology adopters (the sections of the blue curve):
• Innovators: Innovators are the first individuals to adopt new technology. Innovators are willing to take risks, are the youngest in age, have the highest social class, have great financial liquidity, are very social, and have the closest contact with scientific sources and interaction with other innovators. Risk tolerance has them adopting technologies that may ultimately fail. Financial resources help absorb these failures (Rogers 1962 5th ed, p. 282).
• Early adopters: The early adopters adopt an innovation after a technology has been introduced and proven. These individuals have the highest degree of opinion leadership among the other adopter categories, which means that they can influence the largest majority's opinions. They are typically younger in age, have higher social status, more financial liquidity, more advanced education, and are more socially aware than later adopters. These people are more discrete in adoption choices than innovators and realize the judicious choice of adoption will help them maintain a central communication position (Rogers 1962 5th ed, p. 283).
• Early majority: Individuals in this category adopt an innovation after a varying degree of time. This time of adoption is significantly longer than the innovators and early adopters. This group tends to be slower in the adoption process, has above average social status, has contact with early adopters, and seldom holds opinion leadership positions in a system (Rogers 1962 5th ed, p. 283).
• Late majority: The late majority will adopt an innovation after the average member of the society. These individuals approach an innovation with a high degree of skepticism, have below-average social status, very little financial liquidity, contact others in the late majority and the early majority, and show very little opinion leadership.
• Laggards: Individuals in this category are the last to adopt an innovation. Unlike those in the previous categories, individuals in this category show no opinion leadership. These individuals typically have an aversion to change agents and tend to be advanced in age. Laggards typically tend to be focused on “traditions,” are likely to have the lowest social status and the lowest financial liquidity, be the oldest of all other adopters, and be only in contact with family and close friends.
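The five categories correspond to Rogers' well-known shares of the adopting population: roughly 2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, and 16% laggards. The breakdown above can be sketched as a small classifier (an illustration, not anything from the text) that maps the cumulative fraction of the population that has already adopted to a Rogers category:

```python
# Classify a technology adopter by the cumulative fraction of the
# population that adopted before them, using Rogers' category sizes:
# innovators 2.5%, early adopters 13.5%, early majority 34%,
# late majority 34%, laggards 16%.

CATEGORY_BOUNDARIES = [
    (0.025, "Innovator"),
    (0.16,  "Early adopter"),    # 2.5% + 13.5%
    (0.50,  "Early majority"),   # + 34%
    (0.84,  "Late majority"),    # + 34%
    (1.00,  "Laggard"),          # + 16%
]

def adopter_category(cumulative_share: float) -> str:
    """Return the Rogers category for a cumulative adoption share in [0, 1]."""
    if not 0.0 <= cumulative_share <= 1.0:
        raise ValueError("cumulative share must be between 0 and 1")
    for upper_bound, category in CATEGORY_BOUNDARIES:
        if cumulative_share <= upper_bound:
            return category
    return "Laggard"

if __name__ == "__main__":
    for share in (0.01, 0.10, 0.40, 0.70, 0.95):
        print(f"{share:.0%} adopted -> {adopter_category(share)}")
```

An IT department rolling out a new system could use exactly this kind of breakdown to decide whom to involve first: the first 16% or so of likely adopters are the innovators and early adopters worth recruiting as champions.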
Knowledge of the diffusion theory and the five types of technology users help provide additional insight into how to implement new information systems within an organization. For example, when rolling out a new system, IT may want to identify the innovators and early adopters within the organization and work with them first, then leverage their adoption to drive the implementation.
This process of diffusion of new ideas and technology can take months or years. But there are exceptions: the use of the internet in the 1990s, and of mobile devices in recent years, to communicate, interact socially, and access news and entertainment has spread more rapidly than possibly any other innovation in human history.
9.08: Summary
This chapter has reviewed the many different categories of individuals, from front-line help-desk workers to systems analysts to the chief information officer (CIO), who make up the people component of information systems. The world of information technology is changing so fast that new roles are being created all the time, and roles that have existed for decades are being phased out. That said, this chapter should have given you a good idea of the importance of the people component of information systems.
9.09: Study Questions
1. Describe the role of a systems analyst.
2. What are some of the different roles of a computer engineer?
3. What are the duties of a computer operator?
4. What does the CIO do?
5. Describe the job of a DBA.
6. Explain the point of having two different career paths in information systems.
7. What are the five types of information-systems users?
8. Why would an organization outsource?
Exercises
1. Which IT job would you like to have? Do some original research and write a two-page paper describing the duties of the job you are interested in.
2. Spend a few minutes on job-search websites to find IT jobs in your area. What IT jobs are currently available? Write up a two-page paper describing three jobs, their starting salary (if listed), and the skills and education needed for the job.
3. How is the IT function organized in your school or place of employment? Create an organization chart showing how the IT organization fits into your overall organization. Comment on how centralized or decentralized the IT function is.
4. What type of IT user are you? Take a look at the five types of technology adopters, and then write a one-page summary of where you think you fit in this model.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Explain the overall process of developing a new software application;
• Explain the differences between software development methodologies;
• Understand the different types of programming languages used to develop software;
• Understand some of the issues surrounding the development of mobile applications; and
• Identify the four primary implementation policies.
10: Information Systems Development
When someone has an idea for a new function to be performed by a computer, how does that idea become a reality? If a company wants to implement a new business process and needs new hardware or software to support it, how do they go about making it happen? How do they decide whether to build their own solution or buy or subscribe to a solution available in the market?
This chapter will discuss the different methods of taking those ideas and bringing them to reality, a process known as information systems development.
10.02: Systems Development Life Cycle (SDLC) Model
SDLC was first developed in the 1960s to manage the large projects associated with corporate systems running on mainframes. It is a very structured process designed to manage large projects involving the efforts of many people, including technical, business, and support professionals. These projects are often costly to build, and they have a large impact on the organization. A failed project, or a business decision to fund the wrong project, can be a business or financial catastrophe for an organization.
SDLC is a model that defines a process with a set of phases: planning, analysis, design, implementation, and maintenance. Chapter 1 discussed that an information system (IS) includes hardware, software, databases, networking, processes, and people. SDLC has often been used to manage an IS project that may include one, some, or all of the elements of an IS. Let’s walk through each of the five phases of the SDLC as depicted in Figure 10.1:
1. Planning. In this phase, a request is initiated by someone who acts as a sponsor for this idea. A small team is assembled to conduct a preliminary assessment of the request's merit and feasibility. The objectives of this phase are:
• To determine how the request fits with the company’s strategy or business goals.
• To conduct a feasibility analysis, which includes an analysis of the technical feasibility (is it possible to create this?), the economic feasibility (can we afford to do this?), and the legal feasibility (are we allowed to do this?).
• To recommend a go/no go for the request. If it is a go, then a concept proposal is also produced for management to approve.
2. Analysis. Once the concept proposal is approved, the project is formalized with a new project team (often including members from the previous phase). Using the concept proposal as the starting point, the project members work with different stakeholder groups to determine the new system's specific requirements. No programming or development is done in this step. The objectives of this phase are:
• Identify and interview key stakeholders.
• Document key procedures.
• Develop the data requirements.
• Produce a system-requirements document containing the details needed to begin the design of the system.
3. Design. Once the system requirements are approved, the team may be reconfigured to bring in more members. In this phase, the project team takes the system-requirements document created in the previous phase and develops the specific technical details required for the system. The objectives are:
• Translate the business requirements into specific technical requirements.
• Design the user interface, database, data inputs and outputs, and reports.
• Produce a system-design document as the result of this phase. This document will have everything a programmer will need to create the system.
4. Implementation. Once the system design is approved, the software code finally gets written in this phase, and the development effort for other elements, such as hardware, also happens. The purpose is to create an initial working system. The objectives are:
• Develop the software code and other IS components. Using the system-design document as a guide, developers begin to code or develop all the IS project components.
• Test the working system through a series of structured tests such as:
• The first is a unit test, which tests individual parts of the code for errors or bugs.
• Next is a system test, where the system's different components are tested to ensure that they work together properly.
• Finally, the user-acceptance test allows those that will be using the software to test the system to ensure that it meets their standards.
• Iteratively test any fixes again to address any bugs, errors, or problems found during testing.
• Train the users.
• Provide documentation.
• Perform necessary conversions from any previous system to the new system.
• Produce, as a result, the initial working system that meets the requirements laid out in the analysis phase and the design developed in the design phase.
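As an illustration of the first level of testing described above, here is a hypothetical unit test written with Python's built-in `unittest` module. The function `apply_discount` is an invented stand-in for one small unit of application code; the test class verifies that unit in isolation, exactly what a unit test is meant to do:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount (the unit under test)."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_normal_discount(self):
        # A 25% discount on $100 should yield $75.
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount(self):
        # A 0% discount leaves the price unchanged.
        self.assertEqual(apply_discount(80.0, 0), 80.0)

    def test_invalid_discount_raises(self):
        # Discounts outside 0-100 are rejected.
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

A system test would then exercise `apply_discount` together with the other components it interacts with, and a user-acceptance test would put the whole working system in front of its intended users.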
5. Maintenance. This phase takes place once the implementation phase is complete. In this phase, the system must have a structured support process in place to:
• Report bugs
• Deploy bug fixes
• Accept requests for new features
• Evaluate the priorities of reported bugs or requested features to be implemented
• Identify a predictable and regular schedule to release system updates and perform backups.
• Dispose of data and anything else that is no longer needed
Organizations can combine or subdivide these phases to fit their needs. For example, instead of a single Planning phase, an organization can choose to have two phases, Initiation and Concept, or it can split Implementation into two phases, Implementation and Testing.
Waterfall Model
One specific SDLC-based model is the waterfall model, whose name is often treated as synonymous with SDLC. It is used to manage software projects, as depicted in Figure 10.2, with five phases: Requirements, Design, Implement, Verification, and Maintenance. This model stresses that each phase must be completed before the next one can begin (hence the name waterfall). For example, changes to the requirements are not allowed once the implementation phase has begun; any change must be sought and approved through a formal change process, and it may require the project to go back to the requirements phase, since new requirements need to be approved and the design may need to be revised before the implementation phase can resume.
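Waterfall's phase-gating rule can be pictured with a small sketch (an illustration only, not a real project-management tool): a project object refuses to mark a phase complete until every earlier phase has already finished.

```python
# Minimal sketch of waterfall phase gating: each phase may be completed
# only after every earlier phase has been completed, in order.

PHASES = ["Requirements", "Design", "Implement", "Verification", "Maintenance"]

class WaterfallProject:
    def __init__(self):
        self.completed = []          # phases finished so far, in order

    def complete_phase(self, phase: str) -> None:
        expected = PHASES[len(self.completed)]
        if phase != expected:
            raise RuntimeError(
                f"Cannot complete {phase!r}: {expected!r} must finish first"
            )
        self.completed.append(phase)

project = WaterfallProject()
project.complete_phase("Requirements")
project.complete_phase("Design")
try:
    project.complete_phase("Maintenance")   # skipping ahead is rejected
except RuntimeError as err:
    print(err)
```

The `RuntimeError` in the sketch plays the role of the formal change process: a team cannot quietly jump ahead or back, which is both the model's main control and the source of its rigidity.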
The waterfall model has been criticized for being quite rigid and for causing teams to become risk-averse in order to avoid going back to previous phases. However, there are benefits to such a structure too. Some advantages and disadvantages of SDLC and waterfall are:
Advantages and Disadvantages of SDLC and Waterfall

Advantages:
• A robust process to control and track changes minimizes the number of risks that can derail the project unknowingly.
• Standard and transparent processes help with the management of large teams.
• Documentation reduces the risk of losing personnel and makes it easier to add people to the project.
• It is easier to trace a problem in the system to its root whenever errors are found, even after the project is completed.

Disadvantages:
• Time is spent recording everything, which adds cost and time to the schedule.
• Much time is spent attending meetings, seeking approvals, etc., which adds cost and time to the schedule.
• Some members do not like to spend time writing, leading to additional time needed to complete a project.
• It is difficult to incorporate changes or customers’ feedback, since the project has to go back to one or more previous phases, leading teams to become risk-averse.
Other models are developed over time to address these criticisms. We will discuss two other models: Rapid Application Development and Agile, as different approaches to SDLC.
Rapid Application Development (RAD)
Rapid application development (RAD) is a software-development (or systems-development) methodology that focuses less on up-front planning and more on incorporating changes on an ongoing basis. RAD focuses on quickly building a working model of the software or system, getting feedback from users, and updating the working model. After several iterations of development, a final version is developed and implemented. Let’s walk through the four phases in the RAD model as depicted in Figure 10.3.
1. Requirements Planning. This phase is similar to the planning, analysis, and design phases of the SDLC.
2. User Design. In this phase, representatives of the users work with the system analysts, designers, and programmers to interactively create the system's design. One technique for working with all of these various stakeholders is the Joint Application Development (JAD) session. A JAD session brings together relevant users, who interact with the system from different perspectives, and other key stakeholders, including developers, for a structured discussion about the system's design. The objectives are for the users to understand and adopt the working model, and for the developers to understand how the system needs to work from the user's perspective to provide a positive user experience.
3. Construction. In the construction phase, the tasks are similar to those in SDLC’s implementation phase. The developers continue to work interactively with the users, incorporating their feedback as the users interact with the working model under development. This is an interactive process, and changes can be made as developers are working on the program. This step is executed in parallel with the User Design step in an iterative fashion until an acceptable version of the product is developed.
4. Cutover. This step is similar to some of the SDLC implementation phase tasks. The system goes live or is fully deployed. All steps required to move from the previous state to using the new system are completed here.
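The iterate-until-accepted rhythm of the User Design and Construction phases can be sketched as a simple feedback loop. Everything here is illustrative: the feedback strings stand in for whatever users actually say about each working model, and the iteration cap stands in for a project deadline.

```python
# Sketch of RAD's iterative build/feedback cycle (illustrative only).
# Each pass builds or refines a working model, shows it to users, and
# folds their feedback into the next version until they accept one.

def rad_iterations(feedback_rounds, max_iterations=10):
    """feedback_rounds: user responses per iteration, e.g. ["change X", ..., "accept"]."""
    version = 0
    for feedback in feedback_rounds[:max_iterations]:
        version += 1                       # build/refine the working model
        if feedback == "accept":
            return version                 # cutover: deploy this version
    raise RuntimeError("users never accepted a version; revisit requirements")

# Users request changes twice, then accept the third working model.
final = rad_iterations(["add report", "fix layout", "accept"])
print(f"Cutover with version {final}")
```

Contrast this with the waterfall model: here user feedback drives every iteration, whereas waterfall defers user exposure until after implementation.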
Compared to the SDLC or Waterfall model, the RAD methodology is much more compressed. Many of the SDLC steps are combined, and the focus is on user participation and iteration. This methodology is better suited for smaller projects and has the added advantage of giving users the ability to provide feedback throughout the process. SDLC requires more documentation and attention to detail and is well suited to large, resource-intensive projects. RAD is better suited for projects that are less resource-intensive and need to be developed quickly. Here are some of the advantages and disadvantages of RAD:
Advantages and Disadvantages of RAD

Advantages:
• Increased quality due to the frequency of interaction with users.
• Reduced risk of users refusing to accept the finished product.
• Improved chances of on-time, on-budget completion, as users give updates in real time, avoiding surprises during development.
• Increased interaction time between developers/experts and users.
• Best suited for small to medium-sized project teams.

Disadvantages:
• Risk of weak implementation of features that are not visible to users, such as security.
• Lack of control over system changes, due to the fast turnaround of working versions to address users’ issues.
• Lack of overall design, since changes being put into the system might unknowingly affect other parts of the system.
• Scarce resources, as developers are tied up, which could slow down other projects.
• Difficult to scale up to large teams.
Agile Development Methodologies
Agile methodologies are a group of methodologies that utilize incremental changes focusing on quality and attention to detail. Each increment is released in a specified period of time (called a time box), creating a regular release schedule with particular objectives. While considered a separate methodology from RAD, they share some of the same principles: iterative development, user interaction, and changeability. The agile methodologies are based on the “Agile Manifesto,” first released in 2001.
The characteristics of agile methods include:
• small cross-functional teams that include development-team members and users;
• daily status meetings to discuss the current state of the project;
• short time-frame increments (from days to one or two weeks) for each change to be completed; and
• a working version of the project at the end of each iteration to demonstrate to the stakeholders.
In essence, the Agile approach places a higher value on interaction, frequent working versions, customer/user collaboration, and quick response to change, and less emphasis on processes and documentation. The goal of the agile methodologies is to provide the flexibility of an iterative approach while ensuring a quality product.
There are a variety of models that are built using Agile methodologies. One such example is the Scrum development model.
Scrum development model
This model is suited for small teams who work to produce a set of features within fixed-time iterations of two to four weeks, called sprints. Let’s walk through the four key elements of a Scrum model as depicted in Fig 10.4.
Fig 10.4. The Scrum project management method. Image by Lakeworks is licensed CC BY-SA 4.0
1. Product backlog. This is a detailed breakdown list of the work to be done. All the work is prioritized based on criteria such as risk, dependencies, and mission criticality. Developers select their own tasks and self-organize to get the work done.
2. Sprint backlog. This is a list of the work to be done in the next sprint.
3. Sprint. This is a fixed time period, such as one day, two weeks, or four weeks, as agreed by the team. The daily progress meeting is called a daily scrum: typically a short 10-15 minute meeting facilitated by a scrum master, whose role is to remove roadblocks for the team.
4. Working increment of the software. This is a working version that is incrementally built with the breakdown lists at the end of the sprints.
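The backlog-to-sprint flow described above can be sketched in code. The sketch below is illustrative only, assuming a single priority number and an effort estimate per backlog item; real Scrum tools track far more (owners, status, acceptance criteria), and real teams select items by discussion rather than a simple algorithm.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    title: str
    priority: int  # lower number = higher priority (risk, dependencies, mission criticality)
    points: int    # estimated effort

def plan_sprint(product_backlog, capacity):
    """Pull the highest-priority items that fit the team's capacity into a sprint backlog."""
    sprint_backlog = []
    remaining = capacity
    for item in sorted(product_backlog, key=lambda i: i.priority):
        if item.points <= remaining:
            sprint_backlog.append(item)
            remaining -= item.points
    return sprint_backlog

backlog = [
    BacklogItem("User login", priority=1, points=5),
    BacklogItem("Report export", priority=3, points=8),
    BacklogItem("Password reset", priority=2, points=3),
]
sprint = plan_sprint(backlog, capacity=8)
print([i.title for i in sprint])  # ['User login', 'Password reset']
```

Items left in the product backlog ("Report export" here) simply wait for a later sprint, which is how the incremental release schedule emerges.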
Lean Methodology
One last methodology we will discuss is a relatively new concept taken from the business bestseller The Lean Startup by Eric Ries.
This methodology focuses on taking an initial idea and developing a minimum viable product (MVP). The MVP is a working software application with just enough functionality to demonstrate the idea behind the project. Once the MVP is developed, it is given to potential users for review. Feedback on the MVP is generated in two forms: (1) direct observation and discussion with the users, and (2) usage statistics gathered from the software itself. Using these two forms of feedback, the team determines whether they should continue in the same direction or rethink the project's core idea, change the functions, or create a new MVP. This change in strategy is called a pivot. Several iterations of the MVP are developed, with new functions added each time based on the feedback, until a final product is completed.
The biggest difference between the lean methodology and the other methodologies is that the system's full set of requirements is unknown when the project is launched. As each iteration of the project is released, the statistics and feedback gathered are used to determine the requirements. The lean methodology works best in an entrepreneurial environment where a company is interested in determining if their idea for a software application is worth developing.
Software Development
Many of the methodologies discussed above are used to manage software development since programming is complex, and sometimes errors are hard to detect. We learned in chapter 2 that software is created via programming, and programming is the process of creating a set of logical instructions for a digital device to follow using a programming language. The programming process is sometimes called “coding” because the syntax of a programming language is not in a form that everyone can understand – it is in “code.”
The process of developing good software is usually not as simple as sitting down and writing some code. True, sometimes a programmer can quickly write a short program to solve a need. But most of the time, the creation of software is a resource-intensive process that involves several different groups of people in an organization. In the following sections, we are going to review several different methodologies for software development.
Sidebar: The project management quality triangle
When developing software or any product or service, there is tension between the developers and the different stakeholder groups, such as management, users, and investors. Fig. 10.5 illustrates the tension among the three requirements that project managers must trade off: how quickly the software can be developed (time), how much money will be spent (cost), and how well it will be built (quality). The quality triangle is a simple concept: for any product or service being developed, you can only fully address two of the three: time, cost, and quality.
So what does it mean that you can only address two of the three? It means that the finished product's quality depends on three variables: scope, schedule, and the allocated budget. A change in any one of these variables affects the other two and, hence, the quality.
For example, if a feature is added, but no additional time is added to the schedule to develop and test, the code's quality may suffer, even if more money is added. There are times when it is not even feasible to make the tradeoff. For example, adding more people to a project where members are so overwhelmed that they don’t have time to manage or train new people. Overall, this model helps us understand the tradeoffs we must make when developing new products and services.
Programming Languages
One of the important decisions a project team needs to make is which programming language(s) and associated tools to use in the development process. As mentioned in chapter 3, software developers create software using one of several programming languages. A programming language is a formal language that provides a way for a programmer to create structured code to communicate logic in a format that the computer hardware can execute. Over the past few decades, many different programming languages have evolved to meet many different needs.
There is no one way to categorize the languages. Still, they are often grouped by type (e.g., query, scripting), chronologically by the year they were introduced (e.g., Fortran in 1954), by their “generation,” by how they are translated to machine code, or by how they are executed. We will discuss a few categories in this chapter.
Generations of Programming Languages
Early languages were specific to the type of hardware that had to be programmed; each type of computer hardware had a different low-level programming language (in fact, even today, there are differences at the lower level, though higher-level programming languages now obscure them). In these early languages, precise instructions had to be entered line by line – a tedious process.
Some common characteristics are summarized below to illustrate some differences among these generations:
| | First generation (1GL) | Second generation (2GL) | Third generation (3GL) | Fourth generation (4GL) | Fifth generation (5GL) |
| --- | --- | --- | --- | --- | --- |
| Time introduced (est.) | 1940s or earlier | 1950s | 1950s-1970s | 1970s-1990s | 1980s-1990s |
| Instructions | Made of binary numbers (0s and 1s) | Syntax readable by programmers | More structured, more human-like syntax | Syntax friendly to non-programmers | Still in progress |
| Category | Machine dependent; machine code | Machine dependent; low-level assembly languages | Machine independent; high level | Machine independent; high-level abstraction over advanced 3GLs | Logic programming |
| Advantage | Very fast; no need for “translation” to 0s and 1s | Easier for programmers to read and write than machine code | More machine independent; friendlier to programmers; general purpose | Easy to learn | May not need programmers to write programs |
| Disadvantage | Machine dependent; not portable | Must be converted to machine code; still machine dependent | May take multiple steps to translate to machine code | More specialized | Still early in the adoption phase |
| Today's usage | Interacting with hardware directly, such as drivers (e.g., a USB driver) | Interacting with hardware directly, such as drivers (e.g., a USB driver) | Modern 3GLs are commonly used; early 3GLs maintain existing business and scientific programs | Database and web development | Limited; visual tools and artificial-intelligence research |
| Examples | Machine language | Assembly language | Early 3GLs: COBOL, Fortran; modern 3GLs: C, C++, Java, JavaScript | Perl, PHP, Python, SQL, Ruby | Mercury, OPS5 |
Statista.com reported that by early 2020, JavaScript was the most used language among developers worldwide. To see the complete list, please visit Statista.com for more details.
Sidebar: Examples of languages
First-generation language: machine code. In machine code, programming is done by directly setting actual ones and zeroes (the bits) using binary code. Here is an example program that adds 1234 and 4321 using machine language:
10111001
00000000
11010010
10100001
00000100
00000000
10001001
00000000
00001110
10001011
00000000
00011110
00000000
00011110
00000000
00000010
10111001
00000000
11100001
00000011
00010000
11000011
10001001
10100011
00001110
00000100
00000010
00000000
Second-generation language. Assembly language gives English-like phrases to the machine-code instructions, making it easier to program. An assembly-language program must be run through an assembler, which converts it into machine code. Here is an example program that adds 1234 and 4321 using assembly language:
MOV CX,1234
MOV DS:[0],CX
MOV CX,4321
MOV AX,DS:[0]
MOV BX,DS:[2]
ADD AX,BX
MOV DS:[4],AX
Third-generation languages are not specific to the type of hardware they run and are much more like spoken languages. Most third-generation languages must be compiled, a process that converts them into machine code. Well-known third-generation languages include BASIC, C, Pascal, and Java. Here is an example using BASIC:
A=1234
B=4321
C=A+B
END
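For comparison, the same computation written in Python, a popular modern high-level language, is just as concise while also being directly runnable on today's interpreters:

```python
# Add 1234 and 4321, mirroring the BASIC example above
a = 1234
b = 4321
c = a + b
print(c)  # 5555
```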
Fourth-generation languages are a class of programming tools that enable fast application development using intuitive interfaces and environments. Many times, a fourth-generation language has a particular purpose, such as database interaction or report-writing. These tools can be used by those with very little formal training in programming and allow for the quick development of applications and/or functionality. Examples of fourth-generation languages include Clipper, FOCUS, FoxPro, SQL, and SPSS.
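To give a feel for the declarative, special-purpose style of a fourth-generation language such as SQL, here is a small sketch run through Python's built-in sqlite3 module. The table name, columns, and data are invented for illustration; the point is that the SQL statement describes *what* result is wanted, not *how* to compute it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Alice", 70000), ("Bob", 55000), ("Carol", 82000)])

# A declarative query: the database engine decides how to retrieve the rows.
rows = conn.execute(
    "SELECT name FROM employees WHERE salary > 60000 ORDER BY name"
).fetchall()
print(rows)  # [('Alice',), ('Carol',)]
conn.close()
```

A procedural 3GL program would instead loop over the records and filter them explicitly, which is exactly the work the 4GL hides from the programmer.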
Why would anyone want to program in a lower-level language when it requires so much more work? The answer is similar to why some people prefer to drive stick-shift automobiles instead of automatic transmission: control and efficiency. Lower-level languages, such as assembly language, are much more efficient and execute much more quickly. You also have finer control over the hardware. Sometimes, a combination of higher- and lower-level languages is mixed together to get the best of both worlds: the programmer creates the overall structure and interface using a higher-level language but uses lower-level languages in the parts of the program that require more precision.
Compiled vs. Interpreted
Besides classifying a programming language based on its generation, it can also be classified as compiled or interpreted language. As we have learned, a computer language is written in a human-readable form. In a compiled language, the program code is translated into a machine-readable form called an executable that can be run on the hardware. Some well-known compiled languages include C, C++, and COBOL.
An interpreted language requires a runtime program to be installed to execute. This runtime program then interprets the program code line by line and runs it. Interpreted languages are generally easier to work with but are slower and require more system resources. Examples of popular interpreted languages include BASIC, PHP, PERL, and Python. Web languages such as HTML and JavaScript are also considered interpreted because they require a browser to run.
The Java programming language is an interesting exception to this classification, as it is actually a hybrid of the two. A program written in Java is partially compiled to create a program that can be understood by the Java Virtual Machine (JVM). Each type of operating system has its own JVM, which must be installed, allowing Java programs to run on many different types of operating systems.
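Python blurs the compiled/interpreted line in a similar way: its interpreter first compiles source code into bytecode for the Python virtual machine, then executes that bytecode. This can be observed with the standard compile() built-in and the dis module (the exact bytecode printed varies by Python version):

```python
import dis

source = "c = 1234 + 4321"
code_obj = compile(source, "<example>", "exec")  # source -> bytecode

# Show the virtual-machine instructions the interpreter will execute
dis.dis(code_obj)

# Running the bytecode produces the same result as running the source
namespace = {}
exec(code_obj, namespace)
print(namespace["c"])  # 5555
```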
Procedural vs. Object-Oriented
A procedural programming language is designed to allow a programmer to define a specific starting point for the program and then execute sequentially. All early programming languages worked this way. As user interfaces became more interactive and graphical, it made sense for programming languages to evolve to allow the user to define the program's flow. The object-oriented programming language is set up to define “objects” that can take certain actions based on user input. In other words, a procedural program focuses on the sequence of activities to be performed; an object-oriented program focuses on the different items being manipulated.
For example, in a human-resources system, an “EMPLOYEE” object would be needed. If the program needed to retrieve or set data regarding an employee, it would first create an employee object in the program and then set or retrieve the values needed. Every object has properties, which are descriptive fields associated with the object. In the example below, an employee object has the properties “First_Name,” “Last_Name,” “Employee_ID,” “Birthdate,” and “Date_of_hire.” An object also has “methods,” which can take actions related to the object. In the example, there are two methods. The first is “ComputePay(),” which will return the current amount owed to the employee. The second is “ListEmployees(),” which will retrieve a list of employees who report to this employee.
Employee Object

Object: EMPLOYEE
Properties:
• First_Name
• Last_Name
• Employee_ID
• Birthdate
• Date_of_hire
Methods:
• ComputePay()
• ListEmployees()
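The EMPLOYEE object above can be sketched as a class. The pay calculation and the reporting relationship here are invented placeholders (the text does not specify them), so this is a minimal illustration of properties and methods rather than a real payroll design:

```python
from datetime import date

class Employee:
    def __init__(self, first_name, last_name, employee_id,
                 birthdate, date_of_hire, hourly_rate=20.0):
        # Properties: descriptive fields associated with the object
        self.first_name = first_name
        self.last_name = last_name
        self.employee_id = employee_id
        self.birthdate = birthdate
        self.date_of_hire = date_of_hire
        self.hourly_rate = hourly_rate
        self.reports = []  # employees who report to this employee

    # Methods: actions related to the object
    def compute_pay(self, hours_worked):
        """Return the amount currently owed (placeholder calculation)."""
        return self.hourly_rate * hours_worked

    def list_employees(self):
        """Return the names of employees who report to this employee."""
        return [f"{e.first_name} {e.last_name}" for e in self.reports]

mgr = Employee("Ada", "Lovelace", 1, date(1815, 12, 10), date(2020, 1, 6))
dev = Employee("Alan", "Turing", 2, date(1912, 6, 23), date(2021, 3, 1))
mgr.reports.append(dev)
print(mgr.compute_pay(40))   # 800.0
print(mgr.list_employees())  # ['Alan Turing']
```

Notice how the program manipulates the objects (mgr, dev) rather than stepping through a fixed sequence of instructions, which is the key contrast with procedural code.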
Programming Tools
Another decision that needs to be made during the development of an IS is the set of tools needed to write programs. To write programs, programmers need tools to enter code, check for the code's syntax, and some method to translate their code into machine code. To be more efficient at programming, programmers use integrated tools such as an integrated development environment (IDE) or computer-aided software-engineering (CASE) tools.
Integrated Development Environment (IDE)
For most programming languages, an IDE can be used. An IDE provides various tools for the programmer, all in one place with a consistent user interface. An IDE usually includes:
• an editor for writing the program that will color-code or highlight keywords from the programming language;
• a help system that gives detailed documentation regarding the programming language;
• a compiler/interpreter, which will allow the programmer to run the program;
• a debugging tool, which will provide the programmer details about the execution of the program to resolve problems in the code; and
• a check-in/check-out mechanism, which allows a team of programmers to work together on a project without writing over each other’s code changes.
Statista.com reports that among software developers worldwide in 2018 and 2019, 80% use a source code collaboration tool such as GitHub, 77% use a standalone IDE such as Eclipse, and 69% use Microsoft Visual Studio. For a complete list, please visit Statista.com.
Computer-aided software engineering (CASE) Tools
While an IDE provides several tools to assist the programmer in writing the program, the code still must be written. Computer-aided software engineering (CASE) tools allow a designer to develop software with little or no programming. Instead, the CASE tool writes the code for the designer. CASE tools come in many varieties, but their goal is to generate quality code based on the designer's input.
Build vs. Buy or Subscribe
When an organization decides that a new software program needs to be developed, they must determine if it makes more sense to build it themselves or purchase it from an outside company. This is the “build vs. buy” decision. This ‘buy’ decision now includes the option to subscribe instead of buying it outright.
There are many advantages to purchasing software from an outside company. First, it is generally less expensive to purchase a software package than to build it. Second, when a software package is purchased, it is available much more quickly than if it were built in-house: software applications can take months or years to build, while a purchased package can be up and running within a month. Third, companies or consumers pay a one-time price and keep the software for as long as the license allows, potentially for as long as you own it or even after the vendor stops supporting it. A purchased package has already been tested, many of the bugs have already been worked out, and additional support contracts can be purchased. It is the role of a systems integrator to make various purchased systems and the organization's existing systems work together.
There are also disadvantages to purchasing software. First, the same software you are using can be used by your competitors. If a company is trying to differentiate itself based on a business process embedded in that purchased software, it will have a hard time doing so if its competitors use the same software. Another disadvantage is customization: if you purchase a software package from a vendor and then customize it, you will have to manage those customizations every time the vendor provides an upgrade. With the rise of security and privacy concerns, companies may also lack the in-house expertise to respond quickly. Installing updates and dealing with the bugs encountered may also be a burden to IT staff and users. This can become an administrative headache.
A hybrid solution is to subscribe. Subscribing means that instead of selling products individually, vendors offer a subscription model in which users rent the software and pay periodically, such as monthly or yearly. The renting model has long been used in other industries, such as movies and books, and has recently moved into high-tech industries. Companies and consumers can now subscribe to almost everything, as we discussed in earlier chapters: from additional storage in platforms such as Google Drive or Microsoft OneDrive, to software such as QuickBooks and Microsoft Office 365, to hosting and web support services such as Amazon AWS. Vendors benefit by converting one-time sales into recurring sales and by increasing customer loyalty. Customers avoid the headache of installing updates, have software support and updates taken care of automatically, and know that the software will continue to be updated with new features. A subscription model is now a prevalent option for both consumers and businesses.
Even if an organization decides to buy or subscribe, it still makes sense to go through many of the same analyses to compare the costs and benefits against building the software itself. This is an important decision that could have a long-term strategic impact on the organization.
Web Services
Chapter 3 stated that the move to cloud computing has allowed software to be looked at as a service. One option companies have these days is to license functions provided by other companies instead of writing the code themselves. These are called web services, and they can greatly simplify the addition of functionality to a website.
For example, suppose a company wishes to provide a map showing the location of someone who has called their support line. By utilizing the Google Maps API, they can build a Google Map right into their application. Or a shoe company could make it easier for its retailers to sell shoes online by providing a shoe-size web service that the retailers could embed right into their website.
Web services can blur the lines between “build vs. buy.” Companies can choose to build a software application themselves but then purchase functionality from vendors to supplement their system.
End-User Computing or Shadow IT
In many organizations, application development is not limited to the programmers and analysts in the information-technology department. Especially in larger organizations, other departments develop their own department-specific applications. The people who build these are not necessarily trained in programming or application development, but they tend to be adept with computers. A person, for example, who is skilled in a particular software package, such as a spreadsheet or database package, may be called upon to build smaller applications for use by his or her own department. This phenomenon is referred to as end-user development, or end-user computing, or shadow IT.
End-user computing can have many advantages for an organization. First, it brings the development of applications closer to those who will use them. Because IT departments are sometimes quite backlogged, it also provides a means to have software created more quickly. Many organizations encourage end-user computing to reduce the strain on the IT department.
End-user computing does have its disadvantages as well. If departments within an organization are developing their own applications, the organization may end up with several applications that perform similar functions, which is inefficient because it duplicates effort. Sometimes, these different versions of the same application provide different results, bringing confusion when departments interact. These applications are often developed by someone with little or no formal training in programming. In these cases, the software developed can have problems that then have to be resolved by the IT department. End-user computing can be beneficial to an organization, but it should be managed. The IT department should set guidelines and provide tools for the departments that want to create their own solutions.
Communication between departments will go a long way towards the successful use of end-user computing.
Sidebar: Building a Mobile App
Software development typically includes building applications to run on desktops, servers, or mainframes. However, the web's commercialization has created additional software development categories such as web design, content development, and web server administration. Web-related development effort for the internet is now called web development. Early web development activities included building websites to support businesses or e-commerce systems, making technologies such as HTML popular with web designers and programming languages such as Perl, Python, and Java popular with programmers. Pre-packaged websites are now available for consumers to purchase without learning HTML or hiring a web designer. For example, entrepreneurs who want to start a bakery business can now buy a pre-built website with a shopping cart, ready to start the business without incurring the costly expense of building it themselves.
With the rise of mobile phones, a new type of software development called mobile app development came into being. Statista.com forecasts that mobile app revenues will increase significantly, from $98B in 2014 to over $935B by 2023. This means that the need for mobile app developers has also increased.
In many ways, building an application for a mobile device is the same as building an application for a traditional computer. Understanding the application requirements, designing the interface, working with users – all of these steps still need to be carried out. The decision process to pick the right programming languages and tools remains the same.
However, there are specific differences that programmers must consider in building apps for mobile devices. They are:
• The user interface must adapt to different screen sizes
• Fingers are used as pointers and to type text, instead of the keyboard and mouse of the desktop
• Specific requirements from the OS vendor must be met for the app to be included in each store (i.e., Apple’s App Store or Android’s Play Store)
• The app must integrate with the desktop or the cloud to sync data
• Apps can integrate tightly with other built-in hardware, such as cameras and biometric or motion sensors
• Less memory, storage space, and processing power are available
Mobile apps are now available for just about everything and continue to grow.
Implementation Methodologies
Once a new system is developed (or purchased), the organization must determine the best method for implementing it. Convincing a group of people to learn and use a new system can be a challenging process. Using the new software and the business processes it gives rise to can have far-reaching effects within the organization.
There are several different methodologies an organization can adopt to implement a new system. Four of the most popular are listed below.
• Direct cutover. In the direct-cutover implementation methodology, the organization selects a particular date that the old system will not be used anymore. On that date, the users begin using the new system, and the old system is unavailable. The advantages of using this methodology are that it is speedy and the least expensive. However, this method is the riskiest as well. If the new system has an operational problem or is not properly prepared, it could prove disastrous for the organization.
• Pilot implementation. In this methodology, a subset of the organization (called a pilot group) starts using the new system before the rest of the organization. This has a smaller impact on the company and allows the support team to focus on a smaller group of individuals.
• Parallel operation. With parallel operation, the old and new systems are used simultaneously for a limited period of time. This method is the least risky because the old system is still being used while the new system is essentially being tested. However, this is the most expensive methodology, since work is duplicated and full support is needed for both systems.
• Phased implementation. In a phased implementation, different functions of the new application are used as functions from the old system are turned off. This approach allows an organization to move from one system to another slowly.
The choice among these implementation methodologies depends on the complexity and importance of the old and new systems.
Change Management
As new systems are brought online and old systems are phased out, it becomes important to manage how change is implemented. Change should never be introduced in a vacuum. The organization should be sure to communicate proposed changes before they happen and plan to minimize the impact of the change that will occur after implementation. Training and incorporating users’ feedback are critical to increasing users’ acceptance of the new system. Without gaining the users’ acceptance, the risk of failure is very high. Change management is a critical component of IT oversight.
Maintenance
Once a new system has been introduced, it enters the maintenance phase. In this phase, the system is in production and is being used by the organization. While the system is no longer actively being developed, changes need to be made when bugs are found or new features are requested. During the maintenance phase, IT management must ensure that the system continues to stay aligned with business priorities and that there are clear processes for accepting requests and problem reports and for deploying updates, ensuring users’ satisfaction through continuous improvement in the product's quality.
With the rise of privacy concerns, many companies now add policies about maintaining their customers’ data or data collected during the project. Policies governing when to dispose of data, how to dispose of it, and where to store it are just a few examples.
10.05: Summary
Developing an information system can be a costly and complex process, and managing a group of professionals to deliver a new system on time and on budget is challenging. Several development models, from the formal SDLC process to more informal processes such as agile programming or lean methodologies, provide a framework to manage all the phases from start to finish.
Software development is about so much more than programming. Programming languages have evolved from very low-level machine-specific languages to higher-level languages that allow a programmer to write software for a wide variety of machines. Most programmers work with software development tools that provide them with integrated components to make the software development process more efficient.
For some organizations, building their own software applications does not make the most sense; instead, they choose to purchase or rent software built by a third party to save development costs and speed implementation. In end-user computing, software development happens outside the information technology department. When implementing new software applications, organizations need to consider several different types of implementation methodologies.
An organization’s responsibilities for a software development effort do not end with the deployment of the software. They now include a clear and systematic process to maintain and protect customers’ and projects’ data to address security and privacy concerns.
10.06: Study Questions
Study Questions
1. What are the steps in the SDLC methodology?
2. What is RAD software development?
3. What is the Waterfall model?
4. What makes the lean methodology unique?
5. What is the difference between the Waterfall and Agile models?
6. What is a sprint?
7. What are three differences between second-generation and third-generation languages?
8. Why would an organization consider building its own software application if it is cheaper to buy one?
9. What is the difference between the pilot implementation methodology and the parallel implementation methodology?
10. What is change management?
11. What are the four different implementation methodologies?
Exercises
1. Which software-development methodology would be best if an organization needed to develop a software tool for a small group of users in the marketing department? Why? Which implementation methodology should they use? Why?
2. Doing your own research, find three programming languages and categorize them in these areas: generation, compiled vs. interpreted, procedural vs. object-oriented.
3. Some argue that HTML is not a programming language. Doing your own research, find three arguments for why it is not a programming language and three arguments for why it is.
4. Read more about responsive design using the link given in the text. Provide the links to three websites that use responsive design and explain how they demonstrate responsive-design behavior.
5. Research the criteria and cost to put a mobile app into Apple’s App Store. Write a report.
6. Research to find out what elements to use to estimate the cost to build an app. Write a report.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Explain the concept of globalization;
• Describe the role of information technology in globalization;
• Identify the issues experienced by firms as they face a global economy; and
• Define the digital divide and explain Nielsen’s three stages of the digital divide.
The rapid rise of the Internet has made it easier than ever to do business worldwide. This chapter will look at the impact that the Internet is having on the globalization of business and at the challenges firms must manage, and the opportunities they can leverage, as a result of globalization and digitalization. It will then discuss the concept of the digital divide, the steps that have been taken to date to alleviate it, and what more needs to be done.
11: Information Systems Beyond the Organization
In this chapter, we will look at how the Internet has opened the world to globalization: where it began, how it reached where we are today, and the human, machine, and technological influences that enable it. It is now just as simple to communicate with someone on the other side of the world as it is to talk to someone next door. We will also consider the implications of globalization and its impact on the world.
What Is Globalization?
Globalization is a concept from economics that refers to the integration of goods, services, and culture among the people and nations of the world. It has roots as far back as the exploration of the New World and has accelerated since the turn of the 18th century due to massive improvements in transportation and technology. Globalization creates world markets: places that were once limited to providing goods and services to their immediate area now have open access to other countries worldwide. This expansion of global markets has increased economic activity in the exchange of goods, services, and funds. Today, the ease with which people can connect has accelerated the speed of globalization; people no longer have to sail for a year to exchange goods or services.
The Internet has connected nations together. From its beginnings in the United States in the 1970s, to the development of the World Wide Web and home use on personal computers in the 1980s, to the e-commerce and social networks that emerged in the 1990s and after, the Internet has continuously increased the integration between countries, making globalization a fact of life for citizens worldwide. The Internet is truly a worldwide phenomenon: by Q3 of 2020, approximately 4.9 billion people, or more than half of the world’s population, used the Internet. For more details, please view the data at internetworldstats.com/stats.htm.
The Network Society
In 1996, social-sciences researcher Manuel Castells published The Rise of the Network Society. He identified new ways to organize economic activity around the networks that the new telecommunication technologies have provided. This new, global economic activity was different from the past because “it is an economy with the capacity to work as a unit in real-time on a planetary scale.” (Castells, 2000) We now live in this network society, in which we are all connected on a global scale.
The World Is Flat
In Thomas Friedman’s seminal book, The World Is Flat (Friedman, 2005), he unpacks the impacts that the personal computer, the Internet, and communication software have had on business, specifically its impact on globalization. He begins the book by defining the three eras of globalization:
• “Globalization 1.0” occurred from 1492 until about 1800. In this era, globalization was centered around countries. It was about how much horsepower, wind power, and steam power a country had and how creatively it was deployed. The world shrank from size “large” to size “medium.”
• “Globalization 2.0” occurred from about 1800 until 2000, interrupted only by the two World Wars. In this era, the dynamic force driving change was multinational companies. The world shrank from size “medium” to size “small.”
• “Globalization 3.0” is our current era, beginning in the year 2000. The convergence of the personal computer, fiber-optic Internet connections, and software has created a “flat-world platform” that allows small groups and even individuals to go global. The world has shrunk from size “small” to size “tiny.”
According to Friedman (2005), this third era of globalization was brought about, in many respects, by information technology. Some of the specific technologies he lists include:
• The graphical user interface for the personal computer, popularized in the late 1980s. Before the graphical user interface, using a computer was relatively difficult. By making the personal computer something that anyone could use, the graphical interface became commonplace very quickly. Friedman points out that the digital creation and storage of content this enabled made people much more productive and, as the Internet evolved, made it simpler to communicate content worldwide.
• The build-out of the Internet infrastructure during the dot-com boom of the late 1990s. During this period, telecommunications companies laid thousands of miles of fiber-optic cable worldwide, turning network communications into a commodity. At the same time, the Internet protocols, such as SMTP (e-mail), HTML (web pages), and TCP/IP (network communications), became standards that were available for free and used by everyone.
• The introduction of software to automate and integrate business processes. As the Internet continued to grow and become the dominant form of communication, it became essential to build on the standards developed earlier so that the websites and applications running on the Internet would work well together. Friedman calls this “workflow software,” by which he means software that allows people to work together more easily and allows different software packages and databases to integrate easily. Examples include payment-processing systems and shipping calculators.
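The point about open standards in the bullets above can be made concrete: because the email message format is an open standard (RFC 5322, the format carried by SMTP), any software on any machine can construct a message every mail server understands. A minimal sketch using Python's standard library (the addresses are placeholders):

```python
# Any program can build a standards-compliant email message, because
# the format is an open, freely available specification.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"    # placeholder address
msg["To"] = "receiver@example.com"    # placeholder address
msg["Subject"] = "Standards at work"
msg.set_content("This message follows the same open format everywhere.")

# The wire format is plain text with standardized headers:
print(str(msg))
```

That interoperability, with every vendor implementing the same free standard, is what turned the Internet into the commodity platform Friedman describes.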
These three technologies came together in the late 1990s to create a “platform for global collaboration.” Once these technologies were in place, they continued to evolve. Friedman also points out a couple more technologies that have contributed to the flat-world platform – the open-source movement (see chapter 10) and the advent of mobile technologies.
The World Is Flat was published in 2005. Since then, we have seen even more growth in information technologies that have contributed to global collaborations. We will discuss current and future trends in chapter 13.
The Global Firm
The new era of globalization allows any business to become international. By accessing this new platform of networked technologies, Castells’ (2000) vision of working as a unit in real time on a planetary scale can become a reality, and he believed this collective capacity could benefit society. Some of the advantages of this include the following:
• Access to expertise and labor around the world. Organizations are no longer limited to the viable candidates available locally and can now hire people from the global labor pool. This also allows organizations to pay lower labor costs for the same work, based on the prevailing wage in different countries.
• Operate 24 hours a day. With employees in different time zones worldwide, an organization can literally operate around the clock, handing off work on projects from one part of the world to another. Businesses can also keep their digital storefront (their website) open all the time.
• Access to a larger market for firm products. Once a product is being sold online, it is available for purchase from a worldwide consumer base. Even if a company’s products do not appeal beyond its own country’s borders, being online has also made the product more visible to consumers within that country.
• Achieve market diversity. Selling in multiple markets helps companies stabilize their overall revenue sources: a company could be experiencing a gain in revenue in one country while revenue is down on the other side of the world, which smooths out its total revenues.
• Gain more exposure to foreign investment opportunities. Globalization helps companies to become more familiar with opportunities in the new areas that they are expanding into.
To fully take advantage of these new capabilities, companies need to understand that there are also challenges in dealing with employees and customers from different cultures and with other countries’ economies. Some of these challenges include:
• Infrastructure differences. Each country has its own infrastructure, many of which are not of the same quality as the US infrastructure. Americans are currently getting around 135 Mbps of download speed and 52 Mbps of upload speed through their fixed broadband connections, good for eighth in the world and around double the global average. For every South Korea (16 MBps average speed), there is an Egypt (0.83 MBps) or an India (0.82 MBps). A business cannot depend on every country it deals with having the same Internet speeds. See the sidebar called “How Does My Internet Speed Compare?”
• Labor laws and regulations. Different countries (including the United States) have different laws and regulations. A company that wants to hire employees from other countries must understand the different regulations and concerns.
• Legal restrictions. Many countries have restrictions on what can be sold or how a product can be advertised. A business needs to understand what is allowed. For example, in Germany, it is illegal to sell anything Nazi-related; in China, it is illegal to put anything sexually suggestive online.
• Language, customs, and preferences. Every country has its own (or several) unique culture(s), which a business must consider when trying to market a product. Additionally, different countries have different preferences. For example, in some parts of the world, people prefer to eat their french fries with mayonnaise instead of ketchup; in other parts of the world, specific hand gestures (such as the thumbs-up) are offensive.
• International shipping. Shipping products between countries promptly can be challenging. Inconsistent address formats, dishonest customs agents, and prohibitive shipping costs are all factors that must be considered when trying to deliver products internationally.
• Volatility of currency. When buying or selling goods internationally, exchange rates between currencies such as the euro, yen, and dollar can fluctuate widely, affecting costs and revenues.
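The speed figures in the infrastructure bullet above mix Mbps (megabits per second) and MBps (megabytes per second, eight times larger), which is easy to misread. A quick sketch of what different link speeds mean in practice (the file size and speeds are illustrative, not current measurements):

```python
def transfer_seconds(file_mb, speed_mbps):
    """Seconds to move a file of file_mb megabytes over a link of
    speed_mbps megabits per second (1 byte = 8 bits)."""
    return file_mb * 8 / speed_mbps

# A 50 MB product catalog over three illustrative connection speeds:
for label, mbps in [("135 Mbps", 135), ("16 Mbps", 16), ("0.8 Mbps", 0.8)]:
    print(f"{label}: {transfer_seconds(50, mbps):.0f} s")
```

A page that loads in a few seconds on a fast US connection can take minutes over a slow link, which is why a global business cannot assume uniform infrastructure.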
Because of these challenges, many businesses choose not to expand globally, either for labor or for customers. Whether a business has its own website or relies on a third party, such as Amazon or eBay, the question of whether to globalize must be carefully considered.
Globalization has changed greatly in the last several decades. It has brought progress and modernization to various parts of the world and changed the fortunes of many organizations, but with associated costs as well as benefits, and those benefits are not necessarily evenly distributed across the world. With the global pandemic of 2020 (Covid-19), globalization is now viewed by many as a source of risk: to national supply chains of goods and services, of job losses, of a widening inequality gap, and of health risks. It is expected that globalization post-Covid will need to mitigate these risks by moving to a more balanced approach between independence and integration between countries (Kobrin, 2020).
Sidebar: How Does My Internet Speed Compare?
Internet speed varies by geography, across states and countries, as reported by Statista.com. For example, as of August 2020, Singapore’s average internet speed is ~218 Mbps, while Hungary’s is ~156 Mbps. Please visit Statista.com for more details.
Statista.com also reported that as of June 2020, over 42% of US households did not know the download speed of their household internet service. The download speed varies from 10 Mbps or less to over 100 Mbps. There are several free tools that you can use to test your household internet upload and download speed, such as the app Speedtest, a free download (as of this writing).
The Digital Divide
As the Internet continues to make inroads across the world, it also creates a separation between those who have access to this global network and those who do not. This separation is called the “digital divide” and is of great concern. Kilburn (2005) summarizes this concern in his article in Crossroads:
Adopted by the ACM Council in 1992, the ACM Code of Ethics and Professional Conduct focuses on issues involving the Digital Divide that could prevent certain categories of people - those from low-income households, senior citizens, single-parent children, the undereducated, minorities, and residents of rural areas — from receiving adequate access to the wide variety of resources offered by computer technology. This Code of Ethics positions the use of computers as a fundamental ethical consideration: “In a fair society, all individuals would have equal opportunity to participate in, or benefit from, the use of computer resources regardless of race, sex, religion, age, disability, national origin, or other similar factors.” The article discusses the digital divide in various forms and analyzes reasons for the growing inequality in people’s access to Internet services. It also describes how society can bridge the digital divide: the serious social gap between information “haves” and “have-nots.”
The digital divide is categorized into three stages: the economic divide, the usability divide, and the empowerment divide (Nielsen, 2006).
• The economic divide is usually called the digital divide: it means that some people can afford to have a computer and Internet access while others cannot. Because of Moore’s Law (see chapter 2), the price of hardware has continued to drop, and, at this point, we can now access digital technologies, such as smartphones, for very little. This fact, Nielsen asserts, means that the economic divide is a moot point for all intents and purposes, and we should not focus our resources on solving it.
• The usability divide is concerned with the fact that “technology remains so complicated that many people couldn’t use a computer even if they got one for free.” And even for those who can use a computer, accessing all the benefits of having one is beyond their understanding. Included in this group are those with low literacy and seniors. According to Nielsen, we know how to help these users, but we are not doing it because there is little profit.
• The empowerment divide is the most difficult to solve. It is concerned with how we use technology to empower ourselves. Very few users truly understand the power that digital technologies can give them. In his article, Nielsen explains that his (and others’) research has shown that very few users contribute content to the Internet, use the advanced search, or even distinguish paid search ads from organic search results. Many people will limit what they can do online by accepting the basic, default settings of their computer and not understanding how they can truly be empowered.
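The economic-divide argument above rests on Moore's Law steadily driving hardware prices down. A toy model of that price decline (the starting price and the two-year halving period are illustrative assumptions for this sketch, not figures from the text):

```python
def price_after(years, start_price, halving_years=2):
    """Price of a fixed amount of computing power after `years`, assuming
    cost halves every `halving_years` (a rough reading of Moore's Law;
    all numbers here are illustrative)."""
    return start_price * 0.5 ** (years / halving_years)

# Computing power that costs $1,000 today, projected forward:
for y in (0, 4, 10):
    print(f"year {y}: ${price_after(y, 1000):,.2f}")
```

Exponential decay like this is why Nielsen argues the economic divide largely solves itself over time, while the usability and empowerment divides do not.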
Understanding the digital divide using these three stages provides an approach to developing solutions and monitoring our progress in bridging the digital divide gap.
The digital divide can occur between countries, regions, or even neighborhoods. There are pockets with little or no Internet access in many US cities, while just a few miles away, high-speed broadband is common. For example, in 2020, the US Federal Communications Commission (FCC) reported that “In urban areas, 97% of Americans have access to high-speed fixed service. In rural areas, that number falls to 65%. And on Tribal lands, barely 60% have access. All told, nearly 30 million Americans cannot reap the benefits of the digital age.” Overall, Statista.com reported that as of August 2020, only ~85% of the US population had internet access.
The global pandemic (Covid-19) has made Internet access an essential requirement due to the social distance or lockdown mandates and has spotlighted this issue globally.
Challenges and efforts to bridge the Digital Divide gap
Solutions to the digital divide have had mixed success over the years. Initial efforts focused on providing Internet access and/or computing devices, with some degree of success. However, providing Internet access and/or computing devices alone is not enough to bring true Internet access to a country, region, or neighborhood.
The World Bank and the International Monetary Fund (IMF), at their 2020 annual meetings, brought together global leaders and private innovators to discuss how to bridge the digital gap globally. Three challenges were identified:
1. Lack of infrastructure remains a major barrier to connectivity.
2. Greater collaboration is needed between the public and private sectors.
3. Education and training are needed to help connect people in underserved communities.
In June 2020, the UN Secretary-General stated that the digital divide is now ‘a matter of life and death’ amid the COVID-19 crisis and called on global leaders to cooperate toward one goal: that every person have safe and affordable access to the Internet by 2030.
With this challenge made acute by the global pandemic of 2020 (Covid-19), many leaders have increased their investment to bridge this gap in their countries. For example, the IMF reported that countries such as Kenya, Ghana, Rwanda, and Tanzania have made great progress in using mobile technology to connect their citizens to financial systems (IMF, 2020). Many states in the United States have increased their funding through public or private partnerships, such as the California Closing the Divide initiative (California Department of Education, 2020).
Continued global investment to bridge this gap remains a critical need for the global world, both during and post-global pandemic.
Sidebar: Using Gaming to Bridge the Digital Divide
Paul Kim, the Assistant Dean and Chief Technology Officer of the Stanford Graduate School of Education, designed a project to address the digital divide for children in developing countries (Kim et al., 2011). In their project, the researchers wanted to understand if children can adopt and teach themselves mobile learning technology without help from teachers or other adults and the processes and factors involved in this phenomenon. The researchers developed a mobile device called TeacherMate, which contained a game designed to help children learn math. The unique part of this research was that the researchers interacted directly with the children; they did not channel the mobile devices through the teachers or the schools. Another important factor to consider: to understand the context of the children’s educational environment, the researchers began the project by working with parents and local nonprofits six months before their visit. While the results of this research are too detailed to go into here, it can be said that the researchers found that children can, indeed, adopt and teach themselves mobile learning technologies.
What makes this research so interesting when thinking about the digital divide is that the researchers found that, to be effective, they had to customize their technology and tailor their implementation to the specific group they were trying to reach. One of their conclusions stated the following:
Considering the rapid advancement of technology today, mobile learning options for future projects will only increase. Consequently, researchers must continue to investigate their impact; we believe there is a specific need for more in-depth studies on ICT [information and communication technology] design variations to meet different localities' challenges. To read more about Dr. Kim’s project, locate the paper referenced in the list of references.
11.04: Summary
Summary
Information technology has driven change on a global scale. As documented by Castells and Friedman, technology has given us the ability to integrate with people worldwide using digital tools. These tools have allowed businesses to broaden their labor pools, markets, and even operating hours. But they have also brought many new complications for businesses, which now must understand regulations, preferences, and cultures from many different nations. This new globalization has also exacerbated the digital divide, as described by Nielsen's three stages. The 2020 global pandemic has accentuated both the problems and the efforts in bridging the digital divide globally.
Learning Objectives
Upon successful completion of this chapter, you will be able to:
• Describe what the term information systems ethics means;
• Explain what a code of ethics is and describe the advantages and disadvantages;
• Define the term intellectual property and explain the protections provided by copyright, patent, and trademark
• Describe what Creative Commons is and be able to identify what the different licenses mean.
• Describe the challenges that information technology brings to individual privacy.
The rapid changes in all the components of information systems in the past few decades have brought a broad array of new capabilities and powers to governments, organizations, and individuals alike. This chapter will discuss the effects that these new capabilities have had, the legal and regulatory changes that have been put in place in response, and the ethical issues that organizations and IT communities need to consider when using or developing emerging solutions and services for which regulations are not yet fully developed.
• 12.1: Introduction
This chapter discusses the impact of information systems on how we behave (ethics) and the new legal structures being put in place to protect intellectual property and privacy.
• 12.2: Intellectual Property
This section discusses intellectual property, copyright, obtaining protection, first sale doctrine, and fair use.
• 12.3: The Digital Millennium Copyright Act
This section discusses the provisions of the anti-circumvention and the safe harbor of the Digital Millennium Copyright Act and the different types of licenses the authors can grant to others under Creative Commons.
• 12.4: Summary
• 12.5: Study Questions
12: The Ethical and Legal Implications of Information Systems
Introduction
Information systems have had an impact far beyond the world of business. In the past four decades, technology has fundamentally altered our lives: from the way we work and play to how we communicate and even how we fight wars. Mobile phones track us as we shop at stores and go to work. Algorithms based on consumer data allow firms to sell us products that they think we need or want. New technologies create new situations that we have never dealt with before. They can threaten individual autonomy, violate privacy rights, and be morally contentious. How do we handle the new capabilities that these devices empower us with? What new laws will be needed to protect us from ourselves and others? This chapter will begin with a discussion of the impact of information systems on how we behave (ethics), followed by the new legal structures being put in place, focusing on intellectual property and privacy.
Information Systems Ethics
The term ethics is defined as “a set of moral principles” or “the principles of conduct governing an individual or a group.” Since the dawn of civilization, the study of ethics and its impact has fascinated humankind. But what do ethics have to do with information systems?
The introduction of new technology can have a profound effect on human behavior. New technologies give us capabilities that we did not have before, which create environments and situations that have not been specifically addressed in ethical terms. Those who master new technologies gain new power; those who cannot master them may lose power. In 1913, Henry Ford implemented the first moving assembly line to create his Model T cars. While this was a great step forward technologically (and economically), the assembly line reduced human beings' value in the production process. The development of the atomic bomb concentrated unimaginable power in the hands of one government, which then had to wrestle with the decision to use it. Today’s digital technologies have created new categories of ethical dilemmas.
For example, the ability to anonymously make perfect copies of digital music has tempted many music fans to download copyrighted music for their own use without making payment to the music’s owner. Many of those who would never have walked into a music store and stolen a CD find themselves with dozens of illegally downloaded albums.
Digital technologies have given us the ability to aggregate information from multiple sources to create profiles of people. What would have taken weeks of work in the past can now be done in seconds, allowing private organizations and governments to know more about individuals than at any time in history. This information has value but also chips away at the privacy of consumers and citizens.
Communication technologies like social media (Facebook, Twitter, Instagram, LinkedIn, and blogs) give so many people access to so much information that it is getting harder and harder to tell what is real and what is fake. Their widespread use has blurred the line between the professional, the personal, and the private. Employers now have access to information that has traditionally been considered private and personal, giving rise to new legal and ethical ramifications.
Some technologies, such as self-driving vehicles and drones, artificial intelligence, the digital genome and genetically modified organisms (GMOs), and additive manufacturing, are transitioning into a new phase, becoming more widely used or incorporated into consumer goods and requiring new ethical and regulatory guidelines.
Code of Ethics
One method for navigating new ethical waters is a code of ethics. A code of ethics is a document that outlines a set of acceptable behaviors for a professional or social group; generally, it is agreed to by all members of the group. The document details different actions that are considered appropriate and inappropriate.
A good example of a code of ethics is the Code of Ethics and Professional Conduct of the Association for Computing Machinery, an organization of computing professionals that includes educators, researchers, and practitioners. Here is an excerpt from the preamble:
Computing professionals' actions change the world. To act responsibly, they should reflect upon the wider impacts of their work, consistently supporting the public good. The ACM Code of Ethics and Professional Conduct ("the Code") expresses the profession's conscience. Additionally, the Code serves as a basis for remediation when violations occur. The Code includes principles formulated as statements of responsibility based on the understanding that the public good is always the primary consideration. Each principle is supplemented by guidelines, which provide explanations to assist computing professionals in understanding and applying the principle.
Section 1 outlines fundamental ethical principles that form the basis for the remainder of the Code. Section 2 addresses additional, more specific considerations of professional responsibility. Section 3 guides individuals who have a leadership role, whether in the workplace or a volunteer professional capacity. Commitment to ethical conduct is required of every ACM member, and principles involving compliance with the Code are given in Section 4.
In the ACM’s code, you will find many straightforward ethical instructions, such as the admonition to be honest and trustworthy. But because this is also an organization of professionals that focuses on computing, there are more specific admonitions that relate directly to information technology:
• No one should enter or use another’s computer system, software, or data files without permission. One must always have appropriate approval before using system resources, including communication ports, file space, other system peripherals, and computer time.
• Designing or implementing systems that deliberately or inadvertently demean individuals or groups is ethically unacceptable.
• Organizational leaders are responsible for ensuring that computer systems enhance, not degrade, working life quality. When implementing a computer system, organizations must consider all workers' personal and professional development, physical safety, and human dignity. Appropriate human-computer ergonomic standards should be considered in system design and the workplace.
One of the major advantages of creating a code of ethics is clarifying the acceptable standards of behavior for a professional group. The varied backgrounds and experiences of the members of a group lead to various ideas regarding what is acceptable behavior. While to many the guidelines may seem obvious, having these items detailed provides clarity and consistency. Explicitly stating standards communicates the common guidelines to everyone in a clear manner.
Having a code of ethics can also have some drawbacks. First of all, a code of ethics does not have legal authority; in other words, breaking a code of ethics is not a crime in itself. So what happens if someone violates one of the guidelines? Many codes of ethics include a section that describes how such situations will be handled. In many cases, repeated violations of the code result in expulsion from the group.
In the case of ACM: “Adherence of professionals to a code of ethics is largely a voluntary matter. However, if a member does not follow this code by engaging in gross misconduct, membership in ACM may be terminated.” Expulsion from ACM may not impact many individuals since membership in ACM is usually not a requirement for employment. However, expulsion from other organizations, such as a state bar organization or medical board, could carry a huge impact.
Another possible disadvantage of a code of ethics is that there is always a chance that important issues will arise that are not specifically addressed in the code. Technology is changing exponentially, and advances in artificial intelligence raise new ethical issues related to machines. A code of ethics might not be updated often enough to keep up with all of these changes. However, a good code of ethics is written in a broad enough fashion that it can address the ethical issues of potential technology changes while the organization behind the code works on revisions.
Finally, a code of ethics could also be a disadvantage because it may not entirely reflect the ethics or morals of every member of the group. Organizations with a diverse membership may have internal conflicts as to what is acceptable behavior. For example, there may be a difference of opinion on the consumption of alcoholic beverages at company events. In such cases, the organization must choose the importance of addressing a specific behavior in the code.
Sidebar: Acceptable Use Policies (AUP)
Many organizations that provide technology services to a group of constituents or the public require an acceptable use policy (AUP) to be agreed to before those services can be accessed. Like a code of ethics, it is a set of rules applied by the organization that outlines what users may or may not do while using the organization’s services. Usually, the policy requires some acknowledgment that the rules are understood, including the consequences of violating them. An everyday example of this is the terms of service that must be agreed to before using the public Wi-Fi at Starbucks, McDonald’s, or even a university. An AUP is an important document, as it demonstrates the organization's due diligence regarding security and the protection of sensitive data, which protects the organization from legal actions. Here is an example of an acceptable use policy from Virginia Tech.
Just as with a code of ethics, these acceptable use policies specify what is allowed and what is not allowed. Again, while some of the items listed are obvious to most, others are not so obvious:
• “Borrowing” someone else’s login ID and password is prohibited.
• Using the provided access for commercial purposes, such as hosting your own business website, is not allowed.
• Sending out unsolicited emails to a large group of people is prohibited.
Also, as with codes of ethics, violations of these policies have various consequences. In most cases, such as with Wi-Fi, violating the acceptable use policy will mean that you will lose your access to the resource. While losing access to Wi-Fi at Starbucks may not have a lasting impact, a university student getting banned from the university’s Wi-Fi (or possibly all network resources) could have a much greater impact.
Intellectual Property
One of the domains that digital technologies have deeply impacted is the domain of intellectual property. Digital technologies have driven a rise in new intellectual property claims and made it much more difficult to defend intellectual property.
Merriam-Webster Dictionary defines intellectual property as “property (as an idea, invention, or process) that derives from the work of the mind or intellect.” This could include song lyrics, a computer program, a new type of toaster, or even a sculpture.
Practically speaking, it is challenging to protect an idea. Instead, intellectual property laws are written to protect the tangible results of an idea. In other words, just coming up with a song in your head is not protected, but if you write it down, it can be protected.
Protection of intellectual property is important because it gives people an incentive to be creative. Innovators with great ideas will be more likely to pursue those ideas if they clearly understand how they will benefit. In the US Constitution, Article I, Section 8, the authors saw fit to recognize the importance of protecting creative works:
Congress shall have the power... To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.
An important point to note here is the “limited time” qualification. While protecting intellectual property is important because of its incentives, it is also necessary to limit the amount of benefit that can be received and allow the results of ideas to become part of the public domain.
Outside of the US, intellectual property protections vary. You can find out more about a specific country’s intellectual property laws by visiting the World Intellectual Property Organization.
There are many types of intellectual property, such as copyrights, patents, trademarks, industrial design rights, plant variety rights, and trade secrets. In the following sections, we will review three of the best-known intellectual property protections: copyright, patent, and trademark.
Copyright
Copyright is the protection given to songs, movies, books, computer software, architecture, and other creative works, usually for a limited time. An artist can, for example, sue if his painting is copied and sold on T-shirts without permission. A coder can sue if another Web developer copies her code verbatim. Any work that has an “author” can be copyrighted. It covers both published and unpublished work. Under the terms of copyright, the author of the work controls what can be done with the work, including:
• who can make copies of the work;
• who can create derivative works from the original work;
• who can perform the work publicly;
• who can display the work publicly; and
• who can distribute the work.
Often, work is not owned by an individual but is instead owned by a publisher with whom the original author has an agreement. In return for the rights to the work, the publisher will market and distribute the work and then pay the original author a portion of the proceeds.
Copyright protection lasts for the life of the original author plus seventy years. In the case of a copyrighted work owned by a publisher or another third party, the protection lasts for ninety-five years from the original creation date. For works created before 1978, the protections vary slightly. You can see the full details on copyright protections by reviewing the Copyright Basics document available at the US Copyright Office’s website. See also the sidebar “History of Copyright Law.”
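The two duration rules above can be sketched as a small function. This is a simplification for illustration only (the function name is our own, and the actual statute has additional cases, such as anonymous works); it is not legal advice.

```python
def public_domain_year(creation_year, author_death_year=None):
    """Estimate when a post-1978 US work's copyright expires,
    using the two simplified rules described above."""
    if author_death_year is not None:
        # Individual author: life of the author plus seventy years.
        return author_death_year + 70
    # Work owned by a publisher or other third party:
    # ninety-five years from the original creation date.
    return creation_year + 95

# A song whose author died in 2000 is protected through 2070:
print(public_domain_year(1980, author_death_year=2000))  # 2070
# A work for hire created in 1980 is protected through 2075:
print(public_domain_year(1980))  # 2075
```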
Obtaining Copyright Protection
In the United States, copyright is obtained by the simple act of creating the original work. In other words, when an author writes down that song, makes that film, or designs that program, he or she automatically has the copyright. However, it is advisable to register for a copyright with the US Copyright Office for a work that will be used commercially. A registered copyright is needed to bring legal action against someone who has used a work without permission.
First Sale Doctrine
If an artist creates a painting and sells it to a collector who then, for whatever reason, proceeds to destroy it, does the original artist have any recourse? What if the collector, instead of destroying it, begins making copies of it and sells them? Is this allowed?
The protections that copyright law extends to creators have an important limitation. The first sale doctrine is a part of copyright law that addresses this, as shown below:
The first sale doctrine, codified at 17 U.S.C. § 109, provides that an individual who knowingly purchases a copy of a copyrighted work from the copyright holder receives the right to sell, display or otherwise dispose of that particular copy, notwithstanding the interests of the copyright owner.
So, in our examples, the copyright owner has no recourse if the collector destroys her artwork. But the collector does not have the right to make copies of the artwork.
Fair Use
Another important provision within copyright law is that of fair use. Fair use is a limitation on copyright law that allows the use of protected works without prior authorization in specific cases. For example, if a teacher wanted to discuss a current event in her class, she could pass out copies of a copyrighted news story to her students without first getting permission. Fair use also allows a student to quote a small portion of a copyrighted work in a research paper.
Unfortunately, the specific guidelines for what is considered fair use and what constitutes copyright violation are not well defined. Fair use is a well-known and respected concept and will only be challenged when copyright holders feel that the integrity or market value of their work is being threatened. The following four factors are considered when determining if something constitutes fair use:
• The purpose and character of the use, including whether such use is of commercial nature or is for nonprofit educational purposes;
• The nature of the copyrighted work;
• The amount and substantiality of the portion used in relation to the copyrighted work as a whole;
• The effect of the use upon the potential market for, or value of, the copyrighted work.
If you are ever considering using a copyrighted work as part of something you are creating, you may be able to do so under fair use. However, it is always best to check with the copyright owner to ensure you are staying within your rights and not infringing upon theirs.
Sidebar: The History of Copyright Law
As noted above, current copyright law grants copyright protection for seventy years after the author’s death or ninety-five years from the date of creation for a work created for hire. But it was not always this way.
The first US copyright law, which only protected books, maps, and charts, provided protection for only 14 years, with a renewable term of 14 years. Over time, copyright law was revised to grant protections to other forms of creative expression, such as photography and motion pictures. Congress also saw fit to extend the length of the protections, as shown in the chart below. Today, copyright has become big business, with many businesses relying on copyright-protected works for their income.
Many now think that the protections last too long. The Sonny Bono Copyright Term Extension Act of 1998 has been nicknamed the “Mickey Mouse Protection Act,” as it was enacted just in time to protect the copyright on the Walt Disney Company’s Mickey Mouse character. It extended copyright terms to the life of the author plus 70 years. Because of this term extension, many works from the 1920s and 1930s were still protected by copyright and could not enter the public domain until 2019 or later. Mickey Mouse will not be in the public domain until 2024.
The Digital Millennium Copyright Act
As digital technologies have changed what it means to create, copy, and distribute media, a policy vacuum has been created. In 1998, the US Congress passed the Digital Millennium Copyright Act (DMCA), which extended copyright law to take digital technologies into account. As an anti-piracy statute, it makes it illegal to duplicate digital copyrighted works and to sell or freely distribute them. Two of the best-known provisions of the DMCA are the anti-circumvention provision and the “safe harbor” provision.
• The anti-circumvention provision makes it illegal to create technology to circumvent technology that has been put in place to protect a copyrighted work. This provision includes the creation of the technology and the publishing of information that describes how to do it. While this provision does allow for some exceptions, it has become quite controversial and has led to a movement to have it modified.
• The “safe harbor” provision limits online service providers' liability when someone using their services commits copyright infringement. This provision allows YouTube, for example, not to be held liable when someone posts a clip from a copyrighted movie. The provision does require the online service provider to take action when they are notified of the violation (a “takedown” notice). For an example of how takedown works, here’s how YouTube handles these requests: YouTube Copyright Infringement Notification
Many think that the DMCA goes too far and ends up limiting our freedom of speech. The Electronic Frontier Foundation (EFF) is at the forefront of this battle. For example, in discussing the anti-circumvention provision, the EFF states:
Yet, the DMCA has become a serious threat that jeopardizes fair use, impedes competition and innovation, chills free expression and scientific research, and interferes with computer intrusion laws. If you circumvent DRM [digital rights management] locks for non-infringing fair uses or create the tools to do so, you might be on the receiving end of a lawsuit.
Creative Commons
In chapter 2, we learned about open-source software. Open-source software has few or no copyright restrictions; the software creators publish their code and make their software available for others to use and distribute for free. This is great for software, but what about other forms of copyrighted works? If an artist or writer wants to make their works available, how can they go about doing so while still protecting the integrity of their work? Creative Commons is the solution to this problem.
Creative Commons is an international nonprofit organization that provides legal tools for artists and authors around the world. The tools offered make it simple to license artistic or literary work for others to use or distribute in a manner consistent with the creator's intentions. Creative Commons licenses are indicated with the symbol CC. It is important to note that Creative Commons and the public domain are not the same. When something is in the public domain, it has absolutely no restrictions on its use or distribution. Works whose copyrights have expired, for example, are in the public domain.
By using a Creative Commons license, creators can control the use of their work while still making it widely accessible. By attaching a Creative Commons license to their work, a legally binding license is created. Creators can choose from the following six licenses with varying permissions from the least open to the most open license:
• CC-BY: This is the least restrictive license. It lets others distribute, remix, adapt, and build upon the original work, in any medium or format, even commercially, as long as they give the author credit (attribution) for the original work.
• CC-BY-SA: This license restricts the distribution of the work via the “share-alike” clause. This means that others can freely distribute, remix, adapt and build upon the work, but they must give credit to the original author, and they must share using the same Creative Commons license.
• CC-BY-NC: NC stands for “non-commercial.” This license is the same as CC-BY but adds that no one can make money with this work - non-commercial purposes only.
• CC-BY-NC-SA: This license allows others to distribute, remix, adapt, and build upon the original work for non-commercial purposes, but they must give credit to the original author and share using the same license.
• CC-BY-NC-ND: This license is the same as CC-BY-NC and adds the ND restriction, which means that no derivative works may be made from the original.
• CC0: This license allows creators to give up their copyright and put their works into the worldwide public domain. It allows others to distribute, remix, adapt, and build upon the work in any medium or format with no conditions.
This book has been written under the Creative Commons license CC-BY. More than half a billion licensed works exist on the Web, free for students and teachers to use, build upon, and share. To learn more about Creative Commons, visit the Creative Commons website.
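The six licenses above can be summarized as permission flags. The license names below are real, but the dictionary encoding is our own sketch based solely on the descriptions in this section; it is not legal advice.

```python
# Permission flags for the six Creative Commons licenses described above.
# "attribution" and "share_alike" are requirements; "commercial" and
# "derivatives" are permissions.
CC_LICENSES = {
    "CC-BY":       {"attribution": True,  "commercial": True,  "derivatives": True,  "share_alike": False},
    "CC-BY-SA":    {"attribution": True,  "commercial": True,  "derivatives": True,  "share_alike": True},
    "CC-BY-NC":    {"attribution": True,  "commercial": False, "derivatives": True,  "share_alike": False},
    "CC-BY-NC-SA": {"attribution": True,  "commercial": False, "derivatives": True,  "share_alike": True},
    "CC-BY-NC-ND": {"attribution": True,  "commercial": False, "derivatives": False, "share_alike": False},
    "CC0":         {"attribution": False, "commercial": True,  "derivatives": True,  "share_alike": False},
}

def allows(license_code, use):
    """Look up one flag for a license, e.g. whether commercial use is allowed."""
    return CC_LICENSES[license_code][use]

print(allows("CC-BY-NC", "commercial"))   # False
print(allows("CC-BY-SA", "share_alike"))  # True
```

A lookup table like this could drive, say, a content-sharing site's reuse warnings, though real license interpretation should always defer to the license text itself.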
Patent
Another important form of intellectual property protection is the patent. A patent creates protection for someone who invents a new product or process. The definition of invention is quite broad and covers many different fields. Here are some examples of items receiving patents:
• circuit designs in semiconductors;
• prescription drug formulas;
• firearms;
• locks;
• plumbing;
• engines;
• coating processes; and
• business processes.
Once a patent is granted, it provides the inventors with protection from others infringing on their patent. A patent holder has the right to “exclude others from making, using, offering for sale, or selling the invention throughout the United States or importing the invention into the United States” for a limited time, in exchange for public disclosure of the invention when the patent is granted.
As with copyright, patent protection lasts for a limited period of time before the invention or process enters the public domain. In the US, a patent lasts twenty years. This is why generic drugs are available to replace brand-name drugs after twenty years.
Obtaining Patent Protection
Unlike copyright, a patent is not automatically granted when someone has an interesting idea and writes it down. In most countries, a patent application must be submitted to a government patent office. A patent will only be granted if the invention or process being submitted meets certain conditions:
• It must be original. The invention being submitted must not have been submitted before.
• It must be non-obvious. You cannot patent something that anyone could think of. For example, you could not put a pencil on a chair and try to get a patent for a pencil-holding chair.
• It must be useful. The invention being submitted must serve some purpose or have some use that would be desired.
The United States Patent and Trademark Office (USPTO) is the federal agency that grants US patents and registers trademarks. It reviews patent applications to ensure that the item being submitted meets these requirements. This is not an easy job: the USPTO processes more than 600,000 patent applications and grants upwards of 300,000 patents each year. It took 75 years to issue the first million patents; the last million patents took only three years to issue, and digital technologies drive much of this innovation.
Sidebar: What Is a Patent Troll?
The advent of digital technologies has led to a large increase in patent filings and, therefore, many patents being granted. Once a patent is granted, it is up to the patent owner to enforce it; if someone is found to be using the invention without permission, the patent holder has the right to sue to force that person to stop and collect damages.
The rise in patents has led to a new form of profiteering called patent trolling. A patent troll is a person or organization who gains the rights to a patent but does not actually make the invention that the patent protects. Instead, the patent troll searches for those who are using the invention in some way and sues them. In many cases, the infringement being alleged is questionable at best. For example, companies have been sued for using Wi-Fi or for scanning documents, technologies that have been on the market for many years.
Recently, the US government has begun taking action against patent trolls. Several pieces of legislation are working their way through Congress that will, if enacted, limit the ability of patent trolls to threaten innovation. You can learn a lot more about patent trolls by listening to a detailed investigation titled When Patents Attack conducted by the radio program This American Life.
Trademark
A trademark is a word, phrase, logo, shape, or sound that identifies a source of goods or services. For example, the Nike “Swoosh,” the Facebook “f,” Apple’s apple (with a bite taken out of it), and the Kleenex brand name (for facial tissue) are all trademarked. The concept behind trademarks is to protect the consumer. Imagine going to the local shopping center to purchase a specific item from a specific store and finding that there are several stores all with the same name!
Two types of trademarks exist – a common-law trademark and a registered trademark. As with copyright, an organization will automatically receive a trademark if a word, phrase, or logo is being used in the normal course of business (subject to some restrictions, discussed below). A common-law trademark is designated by placing “TM” next to the trademark. A registered trademark has been examined, approved, and registered with the trademark office, such as the Patent and Trademark Office in the US. A registered trademark has the circle-R (®) placed next to the trademark.
While most any word, phrase, logo, shape, or sound can be trademarked, there are a few limitations.
A trademark will not hold up legally if it meets one or more of the following conditions:
1. The trademark is likely to cause confusion with a mark in an existing registration or prior application.
2. The trademark is merely descriptive for the goods/services. For example, trying to register the trademark “blue” for a blue product you sell will not pass muster.
3. The trademark is a geographic term.
4. The trademark is a surname. You will not be allowed to trademark “Smith’s Bookstore.”
5. The trademark is ornamental as applied to the goods. For example, a repeating flower pattern that is a design on a plate cannot be trademarked.
As long as an organization uses its trademark and defends it against infringement, the protection afforded by it does not expire. Thus, many organizations defend their trademarks against other companies whose branding even slightly copies theirs. For example, Chick-fil-A has trademarked the phrase “Eat Mor Chikin” and has vigorously defended it against a small business using the slogan “Eat More Kale.” Coca-Cola has trademarked the contour shape of its bottle and will bring legal action against any company using a similar bottle design. Examples of trademarks that have been diluted and have now lost their protection in the US include “aspirin” (originally trademarked by Bayer), “escalator” (originally trademarked by Otis), and “yo-yo” (originally trademarked by Duncan).
Information Systems and Intellectual Property
The rise of information systems has forced us to rethink how we deal with intellectual property. From the increase in patent applications swamping the government’s patent office to the new laws that must be put in place to enforce copyright protection, digital technologies have impacted our behavior.
Privacy
The term privacy has many definitions, but for our purposes, privacy will mean the ability to control information about oneself. Our ability to maintain our privacy has eroded substantially in the past decades due to information systems.
Personally Identifiable Information(PII)
Information about a person that can uniquely establish that person’s identity is called personally identifiable information, or PII. This is a broad category that includes information such as:
• name;
• social security number;
• date of birth;
• place of birth;
• mother’s maiden name;
• biometric records (fingerprint, face, etc.);
• medical records;
• educational records;
• financial information; and
• employment information.
Organizations that collect PII are responsible for protecting it. The Department of Commerce recommends that “organizations minimize the use, collection, and retention of PII to what is strictly necessary to accomplish their business purpose and mission.” They go on to state that “the likelihood of harm caused by a breach involving PII is greatly reduced if an organization minimizes the amount of PII it uses, collects, and stores.” Organizations that do not protect PII can face penalties, lawsuits, and loss of business. In the US, most states now have laws requiring organizations that have had security breaches related to PII to notify potential victims, as does the European Union.
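The minimization advice above can be put into practice directly in code. The sketch below masks US Social Security numbers before a record is written to a log; the pattern and masking format are illustrative only and are far from a complete PII-detection solution.

```python
import re

# Match SSNs in the common NNN-NN-NNNN format.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_ssn(text):
    """Replace all but the last four digits of any SSN in the text."""
    return SSN_PATTERN.sub(lambda m: "***-**-" + m.group()[-4:], text)

record = "Applicant: Jane Doe, SSN 123-45-6789"
print(mask_ssn(record))  # Applicant: Jane Doe, SSN ***-**-6789
```

Masking at the point of collection or logging means the sensitive digits never reach storage, which directly reduces the harm of a later breach.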
Just because companies are required to protect your information does not mean they are restricted from sharing it. In the US, companies can share your information without your explicit consent (see sidebar below), though not all do so. The FTC urges companies that collect PII to create a privacy policy and post it on their website. California requires a privacy policy for any website that does business with a resident of the state.
While the US's privacy laws seek to balance consumer protection with promoting commerce, in the European Union, privacy is considered a fundamental right that outweighs the interests of commerce. This has led to much stricter privacy protection in the EU and makes commerce more difficult between the US and the EU.
Non-Obvious Relationship Awareness
Digital technologies have given us many new capabilities that simplify and expedite the collection of personal information. Every time we come into contact with digital technologies, information about us is being made available. From our location to our web-surfing habits, our criminal record, to our credit report, we are constantly being monitored. This information can then be aggregated to create profiles of every one of us. While much of the information collected was available in the past, collecting it and combining it took time and effort. Today, detailed information about us is available for purchase from different companies. Even information not categorized as PII can be aggregated so that an individual can be identified.
First commercialized by big casinos looking to find cheaters, Non-Obvious Relationship Awareness (NORA) technology is used by both government agencies and private organizations, and it is big business. In some settings, NORA can bring many benefits, such as in law enforcement. By identifying potential criminals more quickly, crimes can be solved more quickly or even prevented before they happen. But these advantages come at a price: our privacy.
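At its core, NORA is record linkage: joining records from separate data sets on shared attributes. The toy sketch below, in which every name and number is fabricated, shows how a single shared phone number can connect two otherwise unrelated records.

```python
# Two separate data sets: ordinary customer records and a watchlist.
customers = [
    {"name": "A. Smith", "phone": "555-0101", "address": "12 Oak St"},
    {"name": "B. Jones", "phone": "555-0102", "address": "9 Elm Ave"},
]
watchlist = [
    {"alias": "Alex S.", "phone": "555-0101"},
]

def link_records(set_a, set_b, key):
    """Pair up records from two data sets that share a value for `key`."""
    index = {rec[key]: rec for rec in set_b}
    return [(a, index[a[key]]) for a in set_a if a[key] in index]

for cust, hit in link_records(customers, watchlist, "phone"):
    print(f"{cust['name']} shares a phone number with watchlist entry {hit['alias']}")
```

Real NORA systems apply fuzzy matching across many attributes (addresses, aliases, dates of birth) rather than an exact join on a single field, which is what makes even non-PII data identifying in aggregate.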
Restrictions on Data Collecting
Information privacy or data protection laws provide legal guidelines for obtaining, using, and storing data about citizens. The European Union has had the General Data Protection Regulation (GDPR) in force since 2018. The US does not have a comprehensive information privacy law but has instead adopted sector-specific laws.
Children’s Online Privacy Protection Act (COPPA)
Websites collecting information from children under the age of thirteen are required to comply with the Children’s Online Privacy Protection Act (COPPA), which is enforced by the Federal Trade Commission (FTC). To comply with COPPA, organizations must make a good-faith effort to determine the age of those accessing their websites. If users are under thirteen years old, they must obtain parental consent before collecting any information.
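The good-faith age check COPPA requires might look like the sketch below. The under-thirteen cutoff comes from the law itself; the function name and the `parental_consent` flag are our own placeholders for whatever consent workflow a site actually implements.

```python
from datetime import date

def may_collect_data(birthdate, parental_consent=False):
    """Return True if information may be collected under COPPA:
    the user is 13 or older, or a parent has consented."""
    today = date.today()
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= 13 or parental_consent

print(may_collect_data(date(1990, 5, 17)))                    # True: adult user
print(may_collect_data(date.today(), parental_consent=True))  # True: consent given
```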
Family Educational Rights and Privacy Act (FERPA)
The Family Educational Rights and Privacy Act (FERPA) is a US law that protects the privacy of student education records. In brief, this law specifies that parents have a right to their child’s educational information until the child reaches either the age of eighteen or begins attending school beyond the high school level. At that point, control of the information is given to the child. While this law is not specifically about the digital collection of information on the Internet, the educational institutions collecting student information are at a higher risk of disclosing it improperly because of digital technologies. This became especially apparent during the Covid-19 pandemic, when all face-to-face classes at educational institutions transitioned to online classes. Institutions need to have policies in place that protect student privacy during video meetings and recordings.
Health Insurance Portability and Accountability Act (HIPAA)
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is the law that specifically singles out records related to health care as a special class of personally identifiable information. This law gives patients specific rights to control their medical records, requires health care providers and others who maintain this information to get specific permission to share it, and imposes penalties on the institutions that breach this trust. Since much of this information is now shared via electronic medical records, the protection of those systems becomes paramount.
In the US, if you collect and key in data, you generally own the right to store and use it, even if the data was collected without permission, except where regulated by laws and rules such as those above. Very few states recognize an individual’s right to privacy; California is the exception. The California Online Privacy Protection Act of 2003 (OPPA) requires operators of commercial websites or online services that collect personal information on California residents through a website to conspicuously post a privacy policy on the site.
Sidebar: Do Not Track
When it comes to getting permission to share personal information, the US and the EU have different approaches. In the US, the “opt-out” model is prevalent; in this model, the default agreement is that you have agreed to share your information with the organization, and you must explicitly tell them if you do not want your information shared. No laws prohibit sharing your data (beyond some specific categories of data, such as medical records). In the European Union, the “opt-in” model is required to be the default. In this case, you must give your explicit permission before an organization can share your information.
To combat this sharing of information, the Do Not Track initiative was created. As its creators explain:
Do Not Track is a technology and policy proposal that enables users to opt-out of tracking by websites they do not visit, including analytics services, advertising networks, and social platforms. At present, few of these third parties offer a reliable tracking opt-out, and tools for blocking them are neither user-friendly nor comprehensive. Much like the popular Do Not Call registry, Do Not Track provides users with a single, simple, persistent choice to opt-out of third-party web tracking.
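Browsers signal the Do Not Track preference by sending the HTTP request header `DNT: 1`. Below is a server-side sketch of honoring that signal, with the default reflecting the opt-out (US) versus opt-in (EU) models described above; the function and parameter names are our own, and the headers dictionary stands in for whatever request object a web framework provides.

```python
def tracking_allowed(headers, default_opt_in=True):
    """Honor an explicit DNT: 1 header; otherwise fall back to the
    jurisdiction's default (opt-out in the US, opt-in in the EU)."""
    if headers.get("DNT") == "1":
        return False
    return default_opt_in

print(tracking_allowed({"DNT": "1"}))              # False: user opted out
print(tracking_allowed({}))                        # True: US-style opt-out default
print(tracking_allowed({}, default_opt_in=False))  # False: EU-style opt-in default
```

Note that, as the initiative's creators observe above, honoring the header is voluntary; the code only helps a site that has already chosen to respect it.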
Summary
The rapid changes in information technology in the past few decades have brought a broad array of new capabilities and powers to governments, organizations, and individuals alike. These new capabilities have required thoughtful analysis and the creation of new norms, regulations, and laws. This chapter has seen how intellectual property and privacy have been affected by these new capabilities and how the regulatory environment has been changed to address them.
Study Questions
1. What does the term information systems ethics mean?
2. What is a code of ethics? What is one advantage and one disadvantage of a code of ethics?
3. What does the term intellectual property mean? Give an example.
4. What protections are provided by a copyright? How do you obtain one?
5. What is fair use?
6. What protections are provided by a patent? How do you obtain one?
7. What does a trademark protect? How do you obtain one?
8. What does the term personally identifiable information mean?
9. What protections are provided by HIPAA, COPPA, and FERPA?
10. How would you explain the concept of NORA?
Exercises
1. Provide one example of how information technology has created an ethical dilemma that would not have existed before the advent of information technology.
2. Find an example of a code of ethics or acceptable use policy related to information technology and highlight five points that you think are important.
3. Find an example of work done under a CC license.
4. Do some original research on the effort to combat patent trolls. Write a two-page paper that discusses this legislation.
5. Give an example of how NORA could be used to identify an individual.
6. How are intellectual property protections different across the world? Pick two countries and do some original research, then compare the patent and copyright protections offered in those countries to those in the US. Write a two- to three-page paper describing the differences.