{ "pages": [ { "page_number": 1, "text": "Assurance\n User\nIssues\nContingency\n Planning\nI & A\nPersonnel\nTraining\nAccess\nControls\nAudit\nPlanning\nRisk\nManagement\nCrypto\nPhysical\nSecurity\n Support\n &\nOperations\nPolicy\n Program\nManagement\nThreats\nNational Institute of Standards and Technology\nTechnology Administration\nU.S. Department of Commerce\nAn Introduction to Computer Security:\nThe NIST Handbook\nSpecial Publication 800-12\n" }, { "page_number": 2, "text": "" }, { "page_number": 3, "text": "iii\nTable of Contents\nI. INTRODUCTION AND OVERVIEW\nChapter 1\nINTRODUCTION\n1.1\nPurpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3\n1.2\nIntended Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3\n1.3\nOrganization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4\n1.4\nImportant Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5\n1.5\nLegal Foundation for Federal Computer Security Programs . 7\nChapter 2\nELEMENTS OF COMPUTER SECURITY\n2.1\nComputer Security Supports the Mission of the Organization. 9\n2.2\nComputer Security is an Integral Element of Sound\nManagement. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10\n2.3\nComputer Security Should Be Cost-Effective. . . . . . . . . . . . . . . . 11\n2.4\nComputer Security Responsibilities and Accountability Should\nBe Made Explicit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12\n2.5\nSystems Owners Have Security Responsibilities Outside Their\nOwn Organizations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12\n2.6\nComputer Security Requires a Comprehensive and Integrated\nApproach. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13\n2.7\nComputer Security Should Be Periodically Reassessed. . . . . . . 13\n2.8\nComputer Security is Constrained by Societal Factors. . . . . . . 14\nChapter 3 \nROLES AND RESPONSIBILITIES\n" }, { "page_number": 4, "text": "iv\n3.1\nSenior Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16\n3.2\nComputer Security Management . . . . . . . . . . . . . . . . . . . . . . . . . . . 16\n3.3\nProgram and Functional Managers/Application Owners . . . . 16\n3.4\nTechnology Providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16\n3.5\nSupporting Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18\n3.6\nUsers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20\nChapter 4\nCOMMON THREATS: A BRIEF OVERVIEW\n4.1\nErrors and Omissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22\n4.2\nFraud and Theft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23\n4.3\nEmployee Sabotage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24\n4.4\nLoss of Physical and Infrastructure Support . . . . . . . . . . . . . . . . 24\n4.5\nMalicious Hackers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24\n4.6\nIndustrial Espionage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26\n4.7\nMalicious Code . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . 27\n4.8\nForeign Government Espionage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27\n4.9\nThreats to Personal Privacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28\nII. MANAGEMENT CONTROLS\nChapter 5\nCOMPUTER SECURITY POLICY\n5.1\nProgram Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35\n5.2\nIssue-Specific Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37\n5.3\nSystem-Specific Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40\n5.4\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42\n5.5\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43\nChapter 6\nCOMPUTER SECURITY PROGRAM MANAGEMENT\n" }, { "page_number": 5, "text": "v\n6.1\nStructure of a Computer Security Program . . . . . . . . . . . . . . . . 45\n6.2\nCentral Computer Security Programs . . . . . . . . . . . . . . . . . . . . . . 47\n6.3\nElements of an Effective Central Computer Security Program 51\n6.4\nSystem-Level Computer Security Programs . . . . . . . . . . . . . . . . 53\n6.5\nElements of Effective System-Level Programs . . . . . . . . . . . . . . 53\n6.6\nCentral and System-Level Program Interactions . . . . . . . . . . . . 56\n6.7\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56\n6.8\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56\nChapter 7\nCOMPUTER SECURITY RISK MANAGEMENT\n7.1\nRisk Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59\n7.2\nRisk Mitigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63\n7.3\nUncertainty Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67\n7.4\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68\n7.5\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68\nChapter 8\nSECURITY AND PLANNING \nIN THE COMPUTER SYSTEM LIFE CYCLE\n8.1\nComputer Security Act Issues for Federal Systems . . . . . . . . . . 71\n8.2\nBenefits of Integrating Security in the Computer System Life\nCycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72\n8.3\nOverview of the Computer System Life Cycle . . . . . . . . . . . . . . . 73\n" }, { "page_number": 6, "text": "vi\n8.4\nSecurity Activities in the Computer System Life Cycle . . . . . . 74\n8.5\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86\n8.6\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86\nChapter 9\nASSURANCE\n9.1\nAccreditation and Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90\n9.2\nPlanning and Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92\n9.3\nDesign and Implementation Assurance . . . . . . . . . . . . . . . . . . . . . 92\n9.4\nOperational Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96\n9.5\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101\n9.6\nCost Considerations . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . 101\nIII. OPERATIONAL CONTROLS\nChapter 10\nPERSONNEL/USER ISSUES\n10.1\nStaffing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107\n10.2\nUser Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110\n10.3\nContractor Access Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . 116\n10.4\nPublic Access Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116\n10.5\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117\n10.6\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117\nChapter 11\nPREPARING FOR CONTINGENCIES AND DISASTERS\n11.1\nStep 1: Identifying the Mission- or Business-Critical Functions 120\n" }, { "page_number": 7, "text": "vii\n11.2\nStep 2: Identifying the Resources That Support Critical Functions . . . . . . . . . . . . . . . . . . 120\n11.3\nStep 3: Anticipating Potential Contingencies or Disasters . . . . 122\n11.4\nStep 4: Selecting Contingency Planning Strategies . . . . . . . . . . 123\n11.5\nStep 5: Implementing the Contingency Strategies . . . . . . . . . . . 126\n11.6\nStep 6: Testing and Revising . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128\n11.7\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129\n11.8\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129\nChapter 12\nCOMPUTER SECURITY INCIDENT HANDLING\n12.1\nBenefits of an Incident Handling Capability . . . . . . . . . . . . . . . . 134\n12.2\nCharacteristics of a Successful Incident Handling Capability 137\n12.3\nTechnical Support for Incident Handling . . . . . . . . . . . . . . . . . . . 139\n12.4\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140\n12.5\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141\nChapter 13\nAWARENESS, TRAINING, AND EDUCATION\n13.1\nBehavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143\n13.2\nAccountability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144\n13.3\nAwareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144\n13.4\nTraining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146\n13.5\nEducation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147\n13.6\nImplementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148\n13.7\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152\n13.8\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152\n" }, { "page_number": 8, "text": "viii\nChapter 14\nSECURITY CONSIDERATIONS IN COMPUTER SUPPORT AND OPERATIONS\n14.1\nUser Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156\n14.2\nSoftware Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
157\n14.3\nConfiguration Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157\n14.4\nBackups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158\n14.5\nMedia Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158\n14.6\nDocumentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161\n14.7\nMaintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161\n14.8\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162\n14.9\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163\nChapter 15\nPHYSICAL AND ENVIRONMENTAL SECURITY\n15.1\nPhysical Access Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166\n15.2\nFire Safety Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168\n15.3\nFailure of Supporting Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170\n15.4\nStructural Collapse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170\n15.5\nPlumbing Leaks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171\n15.6\nInterception of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171\n15.7\nMobile and Portable Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172\n15.8\nApproach to Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172\n15.9\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174\n15.10 Cost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174\n" }, { "page_number": 9, "text": "ix\nIV. TECHNICAL CONTROLS\nChapter 16\nIDENTIFICATION AND AUTHENTICATION\n16.1\nI&A Based on Something the User Knows . . . . . . . . . . . . . . . . . . 180\n16.2\nI&A Based on Something the User Possesses . . . . . . . . . . . . . . . . 182\n16.3\nI&A Based on Something the User Is . . . . . . . . . . . . . . . . . . . . . . . 186\n16.4\nImplementing I&A Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187\n16.5\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189\n16.6\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189\nChapter 17\nLOGICAL ACCESS CONTROL\n17.1\nAccess Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194\n17.2\nPolicy: The Impetus for Access Controls . . . . . . . . . . . . . . . . . . . . 197\n17.3\nTechnical Implementation Mechanisms . . . . . . . . . . . . . . . . . . . . . 198\n17.4\nAdministration of Access Controls . . . . . . . . . . . . . . . . . . . . . . . . . . 204\n17.5\nCoordinating Access Controls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206\n17.6\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206\n17.7\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207\nChapter 18\nAUDIT TRAILS\n18.1\nBenefits and Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211\n18.2\nAudit Trails and Logs . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . 214\n18.3\nImplementation Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217\n18.4\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220\n18.5\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221\n" }, { "page_number": 10, "text": "x\nChapter 19\nCRYPTOGRAPHY\n19.1\nBasic Cryptographic Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . 223\n19.2\nUses of Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226\n19.3\nImplementation Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230\n19.4\nInterdependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233\n19.5\nCost Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234\nV. EXAMPLE\nChapter 20\nASSESSING AND MITIGATING THE RISKS TO A HYPOTHETICAL COMPUTER SYSTEM\n20.1\nInitiating the Risk Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241\n20.2\nHGA's Computer System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242\n20.3\nThreats to HGA's Assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245\n20.4\nCurrent Security Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248\n20.5\nVulnerabilities Reported by the Risk Assessment Team . . . . . 257\n20.6\nRecommendations for Mitigating the Identified Vulnerabilities 261\n20.7\nSummary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266\nCross Reference and General Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269\n" }, { "page_number": 11, "text": "xi\nAcknowledgments\nNIST would like to thank the many people who assisted with the development of this handbook. For their initial recommendation that NIST produce a handbook, we thank the members of the Computer System Security and Privacy Advisory Board, in particular, Robert Courtney, Jr. NIST management officials who supported this effort include: James Burrows, F. Lynn McNulty, Stuart Katzke, Irene Gilbert, and Dennis Steinauer.\nIn addition, special thanks is due those contractors who helped craft the handbook, prepare drafts, teach classes, and review material:\nDaniel F. Sterne of Trusted Information Systems (TIS, Glenwood, Maryland) served as Project Manager for Trusted Information Systems on this project. In addition, many TIS employees contributed to the handbook, including: David M. Balenson, Martha A. Branstad, Lisa M. Jaworski, Theodore M.P. Lee, Charles P. Pfleeger, Sharon P. Osuna, Diann K. Vechery, Kenneth M. Walker, and Thomas J. Winkler-Parenty.\nAdditional drafters of handbook chapters include:\nLawrence Bassham III (NIST), Robert V. Jacobson, International Security Technology, Inc. (New York, NY) and John Wack (NIST).\nSignificant assistance was also received from:\nLisa Carnahan (NIST), James Dray (NIST), Donna Dodson (NIST), the Department of Energy, Irene Gilbert (NIST), Elizabeth Greer (NIST), Lawrence Keys (NIST), Elizabeth Lennon (NIST), Joan O'Callaghan (Bethesda, Maryland), Dennis Steinauer (NIST), Kibbie Streetman (Oak Ridge National Laboratory), and the Tennessee Valley Authority.\nMoreover, thanks is extended to the reviewers of draft chapters. 
While many people assisted, the following two individuals were especially tireless:\nRobert Courtney, Jr. (RCI) and Steve Lipner (MITRE and TIS).\nOther important contributions and comments were received from:\nMembers of the Computer System Security and Privacy Advisory Board, and the Steering Committee of the Federal Computer Security Program Managers' Forum.\nFinally, although space does not allow specific acknowledgement of all the individuals who contributed to this effort, their assistance was critical to the preparation of this document.\nDisclaimer:\nNote that references to specific products or brands are for explanatory purposes only; no endorsement, explicit or implicit, is intended or implied.\n" }, { "page_number": 12, "text": "xii\n" }, { "page_number": 13, "text": "1\n \nI. INTRODUCTION AND OVERVIEW\n" }, { "page_number": 14, "text": "2\n" }, { "page_number": 15, "text": "1 It is recognized that the computer security field continues to evolve. To address changes and new issues, NIST's Computer Systems Laboratory publishes the CSL Bulletin series. Those bulletins which deal with security issues can be thought of as supplements to this publication.\n2 Note that these requirements do not arise from this handbook, but from other sources, such as the Computer Security Act of 1987.\n3 In the Computer Security Act of 1987, Congress assigned responsibility to NIST for the preparation of standards and guidelines for the security of sensitive federal systems, excluding classified and \"Warner Amendment\" systems (unclassified intelligence-related), as specified in 10 USC 2315 and 44 USC 3502(2).\n3\nChapter 1\nINTRODUCTION\n1.1 Purpose\nThis handbook provides assistance in securing computer-based resources (including hardware, software, and information) by explaining important concepts, cost considerations, and interrelationships of security controls. It illustrates the benefits of security controls, the major techniques or approaches for each control, and important related considerations.1\nThe handbook provides a broad overview of computer security to help readers understand their computer security needs and develop a sound approach to the selection of appropriate security controls. It does not describe detailed steps necessary to implement a computer security program, provide detailed implementation procedures for security controls, or give guidance for auditing the security of specific systems. General references are provided at the end of this chapter, and references of \"how-to\" books and articles are provided at the end of each chapter in Parts II, III and IV.\nThe purpose of this handbook is not to specify requirements but, rather, to discuss the benefits of various computer security controls and situations in which their application may be appropriate. Some requirements for federal systems are noted in the text.2 This document provides advice and guidance; no penalties are stipulated.\n1.2 Intended Audience\nThe handbook was written primarily for those who have computer security responsibilities and need assistance understanding basic concepts and techniques. Within the federal government,3 this includes those who have computer security responsibilities for sensitive systems. \n" }, { "page_number": 16, "text": "I. 
Introduction and Overview\n4 As necessary, issues that are specific to the federal environment are noted as such.\n5 The term management controls is used in a broad sense and encompasses areas that do not fit neatly into operational or technical controls.\n4\nDefinition of Sensitive Information\nMany people think that sensitive information only requires protection from unauthorized disclosure. However, the Computer Security Act provides a much broader definition of the term \"sensitive\" information:\nany information, the loss, misuse, or unauthorized access to or modification of which could adversely affect the national interest or the conduct of federal programs, or the privacy to which individuals are entitled under section 552a of title 5, United States Code (the Privacy Act), but which has not been specifically authorized under criteria established by an Executive Order or an Act of Congress to be kept secret in the interest of national defense or foreign policy.\nThe above definition can be contrasted with the long-standing confidentiality-based information classification system for national security information (i.e., CONFIDENTIAL, SECRET, and TOP SECRET). This system is based only upon the need to protect classified information from unauthorized disclosure; the U.S. Government does not have a similar system for unclassified information. No governmentwide schemes (for either classified or unclassified information) exist which are based on the need to protect the integrity or availability of information.\nFor the most part, the concepts presented in the handbook are also applicable to the private sector.4 While there are differences between federal and private-sector computing, especially in terms of priorities and legal constraints, the underlying principles of computer security and the available safeguards (managerial, operational, and technical) are the same. The handbook is therefore useful to anyone who needs to learn the basics of computer security or wants a broad overview of the subject. However, it is probably too detailed to be employed as a user awareness guide, and is not intended to be used as an audit guide.\n1.3 Organization\nThe first section of the handbook contains background and overview material, briefly discusses threats, and explains the roles and responsibilities of individuals and organizations involved in computer security. It explains the executive principles of computer security that are used throughout the handbook. For example, one important principle that is repeatedly stressed is that only security measures that are cost-effective should be implemented. A familiarity with the principles is fundamental to understanding the handbook's philosophical approach to the issue of security.\nThe next three major sections deal with security controls: Management Controls (II),5 Operational Controls (III), and Technical Controls (IV). Most controls cross the boundaries between management, operational, and technical. Each chapter in the three sections provides a basic explanation of the control; approaches to implementing the control; some cost\n" }, { "page_number": 17, "text": "1. Introduction\n5\nconsiderations in selecting, implementing, and using the control; and selected interdependencies that may exist with other controls. 
Each chapter in this portion of the handbook also provides references that may be useful in actual implementation.\nThe Management Controls section addresses security topics that can be characterized as managerial. They are techniques and concerns that are normally addressed by management in the organization's computer security program. In general, they focus on the management of the computer security program and the management of risk within the organization.\nThe Operational Controls section addresses security controls that are, broadly speaking, implemented and executed by people (as opposed to systems). These controls are put in place to improve the security of a particular system (or group of systems). They often require technical or specialized expertise and often rely upon management activities as well as technical controls.\nThe Technical Controls section focuses on security controls that the computer system executes. These controls are dependent upon the proper functioning of the system for their effectiveness. The implementation of technical controls, however, always requires significant operational considerations and should be consistent with the management of security within the organization.\nFinally, an example is presented to aid the reader in correlating some of the major topics discussed in the handbook. It describes a hypothetical system and discusses some of the controls that have been implemented to protect it. This section helps the reader better understand the decisions that must be made in securing a system, and illustrates the interrelationships among controls.\n1.4 Important Terminology\nTo understand the rest of the handbook, the reader must be familiar with the following key terms and definitions as used in this handbook. In the handbook, the terms computers and computer systems are used to refer to the entire spectrum of information technology, including application and support systems. Other key terms include:\nComputer Security: The protection afforded to an automated information system in order to attain the applicable objectives of preserving the integrity, availability and confidentiality of information system resources (includes hardware, software, firmware, information/data, and telecommunications).\nIntegrity: In lay usage, information has integrity when it is timely, accurate, complete, and consistent. However, computers are unable to provide or protect all of these qualities. \n" }, { "page_number": 18, "text": "I. Introduction and Overview\n6 National Research Council, Computers at Risk, (Washington, DC: National Academy Press, 1991), p. 54.\n6\nLocation of Selected Security Topics\nBecause this handbook is structured to focus on computer security controls, there may be several security topics that the reader may have trouble locating. For example, no separate section is devoted to mainframe or personal computer security, since the controls discussed in the handbook can be applied (albeit in different ways) to various processing platforms and systems. The following may help the reader locate areas of interest not readily found in the table of contents:\nTopic: Chapter\nAccreditation: 8. Life Cycle; 9. Assurance\nFirewalls: 17. Logical Access Controls\nSecurity Plans: 8. Life Cycle\nTrusted Systems: 9. Assurance. Security features, including those incorporated into trusted systems, are discussed throughout.\nViruses & Other Malicious Code: 9. Assurance (Operational Assurance section); 12. Incident Handling\nNetwork Security: Network security uses the same basic set of controls as mainframe security or PC security. In many of the handbook chapters, considerations for using the control in a networked environment are addressed, as appropriate. For example, secure gateways are discussed as a part of Access Control; transmitting authentication data over insecure networks is discussed in the Identification and Authentication chapter; and the Contingency Planning chapter talks about data communications contracts. For the same reason, there is not a separate chapter for PC, LAN, minicomputer, or mainframe security.\nTherefore, in the computer security field, integrity is often discussed more narrowly as having two facets: data integrity and system integrity. \"Data integrity is a requirement that information and programs are changed only in a specified and authorized manner.\"6 System integrity is a requirement that a system \"performs its intended function in an unimpaired manner, free from\n" }, { "page_number": 19, "text": "1. Introduction\n7 National Computer Security Center, Pub. NCSC-TG-004-88.\n8 Computers at Risk, p. 54.\n9 Although not listed, readers should be aware that laws also exist that may affect nongovernment organizations.\n7\ndeliberate or inadvertent unauthorized manipulation of the system.\"7 The definition of integrity has been, and continues to be, the subject of much debate among computer security experts. 
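\nTo make the data integrity facet concrete, consider a minimal sketch of one common detection technique, comparing cryptographic digests (the sketch is illustrative only and written in the Python language; the file name payroll.dat is hypothetical, and this is not guidance from the handbook itself):\nimport hashlib\n\ndef file_digest(path):\n    # Hash the file's contents; any change to the file changes the digest.\n    with open(path, \"rb\") as f:\n        return hashlib.sha256(f.read()).hexdigest()\n\n# Record a baseline digest while the file is in an authorized, known-good state.\nbaseline = file_digest(\"payroll.dat\")  # hypothetical file name\n\n# Later, before relying on the file, compare the current digest to the baseline.\nif file_digest(\"payroll.dat\") != baseline:\n    print(\"Possible integrity violation: file changed outside authorized channels.\")\nA check of this kind detects unauthorized change after the fact; preventing such change still depends on controls of the kinds discussed in Parts II, III, and IV.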
\nAvailability: A \"requirement intended to assure that systems work promptly and service is not denied to authorized users.\"8\nConfidentiality: A requirement that private or confidential information not be disclosed to unauthorized individuals.\n1.5 Legal Foundation for Federal Computer Security Programs\nThe executive principles discussed in the next chapter explain the need for computer security. In addition, within the federal government, a number of laws and regulations mandate that agencies protect their computers, the information they process, and related technology resources (e.g., telecommunications). The most important are listed below.9\nThe Computer Security Act of 1987 requires agencies to identify sensitive systems, conduct computer security training, and develop computer security plans.\nThe Federal Information Resources Management Regulation (FIRMR) is the primary regulation for the use, management, and acquisition of computer resources in the federal government.\nOMB Circular A-130 (specifically Appendix III) requires that federal agencies establish security programs containing specified elements.\nNote that many more specific requirements, many of which are agency specific, also exist.\nFederal managers are responsible for familiarity and compliance with applicable legal requirements. However, laws and regulations do not normally provide detailed instructions for protecting computer-related assets. Instead, they specify requirements such as restricting the availability of personal data to authorized users. This handbook aids the reader in developing an effective, overall security approach and in selecting cost-effective controls to meet such requirements.\n" }, { "page_number": 20, "text": "I. Introduction and Overview\n8\nReferences\nAuerbach Publishers (a division of Warren Gorham & Lamont). Data Security Management. Boston, MA, 1995.\nBritish Standards Institute. A Code of Practice for Information Security Management, 1993.\nCaelli, William, Dennis Longley, and Michael Shain. Information Security Handbook. New York, NY: Stockton Press, 1991.\nFites, P., and M. Kratz. Information Systems Security: A Practitioner's Reference. New York, NY: Van Nostrand Reinhold, 1993.\nGarfinkel, S., and G. Spafford. Practical UNIX Security. Sebastopol, CA: O'Reilly & Associates, Inc., 1991.\nInstitute of Internal Auditors Research Foundation. System Auditability and Control Report. Altamonte Springs, FL: The Institute of Internal Auditors, 1991.\nNational Research Council. Computers at Risk: Safe Computing in the Information Age. Washington, DC: National Academy Press, 1991.\nPfleeger, Charles P. Security in Computing. Englewood Cliffs, NJ: Prentice Hall, 1989.\nRussell, Deborah, and G.T. Gangemi, Sr. Computer Security Basics. Sebastopol, CA: O'Reilly & Associates, Inc., 1991.\nRuthberg, Z., and Tipton, H., eds. Handbook of Information Security Management. Boston, MA: Auerbach Press, 1993.\n" }, { "page_number": 21, "text": "9\nChapter 2\nELEMENTS OF COMPUTER SECURITY\nThis handbook's general approach to computer security is based on eight major elements:\n1. Computer security should support the mission of the organization.\n2. Computer security is an integral element of sound management.\n3. Computer security should be cost-effective.\n4. Computer security responsibilities and accountability should be made explicit.\n5. System owners have computer security responsibilities outside their own organizations.\n6. Computer security requires a comprehensive and integrated approach.\n7. Computer security should be periodically reassessed.\n8. Computer security is constrained by societal factors.\nFamiliarity with these elements will aid the reader in better understanding how the security controls (discussed in later sections) support the overall computer security program goals.\n2.1 Computer Security Supports the Mission of the Organization.\nThe purpose of computer security is to protect an organization's valuable resources, such as information, hardware, and software. Through the selection and application of appropriate safeguards, security helps the organization's mission by protecting its physical and financial resources, reputation, legal position, employees, and other tangible and intangible assets.\nUnfortunately, security is sometimes viewed as thwarting the mission of the organization by imposing poorly selected, bothersome rules and procedures on users, managers, and systems. On the contrary, well-chosen security rules and procedures do not exist for their own sake; they are put in place to protect important assets and thereby support the overall organizational mission. Security, therefore, is a means to an end and not an end in itself. For example, in a private-sector business, having good security is usually secondary to the need to make a profit. Security, then, ought to increase the firm's ability to make a profit. In a public-sector agency, security is usually secondary to the service the agency provides to citizens. Security, then, ought to help improve the service provided to the citizen.\n" }, { "page_number": 22, "text": "I. 
Introduction and Overview\n10\nThis chapter draws upon the OECD's Guidelines for the Security of Information Systems, which was endorsed by the United States. It provides for:\nAccountability - The responsibilities and accountability of owners, providers and users of information systems and other parties...should be explicit.\nAwareness - Owners, providers, users and other parties should readily be able, consistent with maintaining security, to gain appropriate knowledge of and be informed about the existence and general extent of measures...for the security of information systems.\nEthics - Information systems and the security of information systems should be provided and used in such a manner that the rights and legitimate interests of others are respected.\nMultidisciplinary - Measures, practices and procedures for the security of information systems should take account of and address all relevant considerations and viewpoints....\nProportionality - Security levels, costs, measures, practices and procedures should be appropriate and proportionate to the value of and degree of reliance on the information systems and to the severity, probability and extent of potential harm....\nIntegration - Measures, practices and procedures for the security of information systems should be coordinated and integrated with each other and other measures, practices and procedures of the organization so as to create a coherent system of security.\nTimeliness - Public and private parties, at both national and international levels, should act in a timely coordinated manner to prevent and to respond to breaches of security of information systems.\nReassessment - The security of information systems should be reassessed periodically, as information systems and the requirements for their security vary over time.\nDemocracy - The security of information systems should be compatible with the legitimate use and flow of data and information in a democratic society.\nTo act on this, managers need to understand both their organizational mission and how each information system supports that mission. After a system's role has been defined, the security requirements implicit in that role can be defined. Security can then be explicitly stated in terms of the organization's mission.\nThe roles and functions of a system may not be constrained to a single organization. In an interorganizational system, each organization benefits from securing the system. For example, for electronic commerce to be successful, each of the participants requires security controls to protect their resources. However, good security on the buyer's system also benefits the seller; the buyer's system is less likely to be used for fraud, to be unavailable, or to otherwise negatively affect the seller. (The reverse is also true.)\n2.2 Computer Security is an Integral Element of Sound Management.\nInformation and computer systems are often critical assets that support the mission of an organization. Protecting them can be as critical as protecting other organizational resources, such as money, physical assets, or employees. However, including security considerations in the management of information and computers does not completely eliminate the possibility that these assets will be harmed. Ultimately,\n" }, { "page_number": 23, "text": "2. 
Elements of Computer Security\n11\norganization managers have to decide what level of risk they are willing to accept, taking into account the cost of security controls.\nAs with many other resources, the management of information and computers may transcend organizational boundaries. When an organization's information and computer systems are linked with external systems, management's responsibilities also extend beyond the organization. This may require that management (1) know what general level or type of security is employed on the external system(s) or (2) seek assurance that the external system provides adequate security for the using organization's needs.\n2.3 Computer Security Should Be Cost-Effective.\nThe costs and benefits of security should be carefully examined in both monetary and non-monetary terms to ensure that the cost of controls does not exceed expected benefits. Security should be appropriate and proportionate to the value of and degree of reliance on the computer systems and to the severity, probability and extent of potential harm. Requirements for security vary, depending upon the particular computer system.\nIn general, security is a smart business practice. By investing in security measures, an organization can reduce the frequency and severity of computer security-related losses. For example, an organization may estimate that it is experiencing significant losses per year in inventory through fraudulent manipulation of its computer system. Security measures, such as an improved access control system, may significantly reduce the loss.\nMoreover, a sound security program can thwart hackers and can reduce the frequency of viruses. Elimination of these kinds of threats can reduce unfavorable publicity as well as increase morale and productivity.\nSecurity benefits, however, do have both direct and indirect costs. Direct costs include purchasing, installing, and administering security measures, such as access control software or fire-suppression systems. Additionally, security measures can sometimes affect system performance, employee morale, or retraining requirements. All of these have to be considered in addition to the basic cost of the control itself. In many cases, these additional costs may well exceed the initial cost of the control (as is often seen, for example, in the costs of administering an access control package). Solutions to security problems should not be chosen if they cost more, directly or indirectly, than simply tolerating the problem.
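\nAs a worked illustration (the figures are hypothetical, not drawn from this handbook), suppose fraudulent manipulation of an inventory system is estimated to cause $100,000 in losses per year, and an improved access control package costing $30,000 per year to purchase and administer is expected to cut those losses in half. The expected net benefit is $50,000 - $30,000 = $20,000 per year, and the control passes the cost-benefit test; a control costing $60,000 per year to achieve the same $50,000 reduction would fail it and should not be selected.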
\n" }, { "page_number": 24, "text": "I. Introduction and Overview\n10 The difference between responsibility and accountability is not always clear. In general, responsibility is a broader term, defining obligations and expected behavior. The term implies a proactive stance on the part of the responsible party and a causal relationship between the responsible party and a given outcome. The term accountability generally refers to the ability to hold people responsible for their actions. Therefore, people could be responsible for their actions but not held accountable. For example, an anonymous user on a system is responsible for not compromising security but cannot be held accountable if a compromise occurs since the action cannot be traced to an individual.\n11 The term other parties may include but is not limited to: executive management; programmers; maintenance providers; information system managers (software managers, operations managers, and network managers); software development managers; managers charged with security of information systems; and internal and external information system auditors.\n12 Implicit is the recognition that people or other entities (such as corporations or governments) have responsibilities and accountability related to computer systems. These responsibilities and accountabilities are often shared among many entities. (Assignment of responsibilities is usually accomplished through the issuance of policy. See Chapter 5.)\n12\n2.4 Computer Security Responsibilities and Accountability Should Be Made Explicit.\nThe responsibilities and accountability10 of owners, providers, and users of computer systems and other parties11 concerned with the security of computer systems should be explicit.12 The assignment of responsibilities may be internal to an organization or may extend across organizational boundaries.\nDepending on the size of the organization, the program may be large or small, even a collateral duty of another management official. However, even small organizations can prepare a document that states organization policy and makes explicit computer security responsibilities. This element does not specify that individual accountability must be provided for on all systems. For example, many information dissemination systems do not require user identification and, therefore, cannot hold users accountable.\n2.5 Systems Owners Have Security Responsibilities Outside Their Own Organizations.\nIf a system has external users, its owners have a responsibility to share appropriate knowledge about the existence and general extent of security measures so that other users can be confident that the system is adequately secure. (This does not imply that all systems must meet any minimum level of security, but does imply that system owners should inform their clients or users about the nature of the security.)\nIn addition to sharing information about security, organization managers \"should act in a timely,\n" }, { "page_number": 25, "text": "2. Elements of Computer Security\n13 Organisation for Economic Co-operation and Development, Guidelines for the Security of Information Systems, Paris, 1992.\n13\ncoordinated manner to prevent and to respond to breaches of security\"13 to help prevent damage to others. However, taking such action should not jeopardize the security of systems.\n2.6 Computer Security Requires a Comprehensive and Integrated Approach.\nProviding effective computer security requires a comprehensive approach that considers a variety of areas both within and outside of the computer security field. This comprehensive approach extends throughout the entire information life cycle.\n2.6.1 Interdependencies of Security Controls\nTo work effectively, security controls often depend upon the proper functioning of other controls. In fact, many such interdependencies exist. If appropriately chosen, managerial, operational, and technical controls can work together synergistically. 
On the other hand, without a firm understanding of the interdependencies of security controls, they can actually undermine one another. For example, without proper training on how and when to use a virus-detection package, the user may apply the package incorrectly and, therefore, ineffectively. As a result, the user may mistakenly believe that their system will always be virus-free and may inadvertently spread a virus. In reality, these interdependencies are usually more complicated and difficult to ascertain.\n2.6.2 Other Interdependencies\nThe effectiveness of security controls also depends on such factors as system management, legal issues, quality assurance, and internal and management controls. Computer security needs to work with traditional security disciplines including physical and personnel security. Many other important interdependencies exist that are often unique to the organization or system environment. Managers should recognize how computer security relates to other areas of systems and organizational management.\n2.7 Computer Security Should Be Periodically Reassessed.\nComputers and the environments they operate in are dynamic. System technology and users, data and information in the systems, risks associated with the system and, therefore, security requirements are ever-changing. Many types of changes affect system security: technological developments (whether adopted by the system owner or available for use by others); connecting to external networks; a change in the value or use of information; or the emergence of a new\n" }, { "page_number": 26, "text": "I. Introduction and Overview\n14\nthreat.\nIn addition, security is never perfect when a system is implemented. System users and operators discover new ways to intentionally or unintentionally bypass or subvert security. Changes in the system or the environment can create new vulnerabilities. Strict adherence to procedures is rare, and procedures become outdated over time. All of these issues make it necessary to reassess the security of computer systems.\n2.8 Computer Security is Constrained by Societal Factors.\nThe ability of security to support the mission of the organization(s) may be limited by various factors, such as social issues. For example, security and workplace privacy can conflict. Commonly, security is implemented on a computer system by identifying users and tracking their actions. However, expectations of privacy vary and can be violated by some security measures. (In some cases, privacy may be mandated by law.)\nAlthough privacy is an extremely important societal issue, it is not the only one. The flow of information, especially between a government and its citizens, is another situation where security may need to be modified to support a societal goal. In addition, some authentication measures, such as retinal scanning, may be considered invasive in some environments and cultures.\nThe underlying idea is that security measures should be selected and implemented with a recognition of the rights and legitimate interests of others. This may involve balancing the security needs of information owners and users with societal goals. However, rules and expectations change with regard to the appropriate use of security controls. These changes may either increase or decrease security.\nThe relationship between security and societal norms is not necessarily antagonistic. 
Security can enhance the access and flow of data and information by providing more accurate and reliable information and greater availability of systems. Security can also increase the privacy afforded to an individual or help achieve other goals set by society.\nReferences\nOrganisation for Economic Co-operation and Development. Guidelines for the Security of Information Systems. Paris, 1992.\n" }, { "page_number": 27, "text": "14 Note that this includes groups within the organization; outside organizations (e.g., NIST and OMB) are not included in this chapter.\n15 These categories are generalizations used to aid the reader; if they are not applicable to the reader's particular environment, they can be safely ignored. While all these categories may not exist in a particular organization, the functionality implied by them will often still be present. Also, some organizations may fall into more than one category. For example, the personnel office both supports the computer security program (e.g., by keeping track of employee departures) and is also a user of computer services.\n15\nChapter 3\nROLES AND RESPONSIBILITIES\nOne fundamental issue that arises in discussions of computer security is: \"Whose responsibility is it?\" Of course, on a basic level the answer is simple: computer security is the responsibility of everyone who can affect the security of a computer system. However, the specific duties and responsibilities of various individuals and organizational entities vary considerably.\nThis chapter presents a brief overview of roles and responsibilities of the various officials and organizational offices typically involved with computer security.14 They include the following groups:15\nsenior management,\nprogram/functional managers/application owners,\ncomputer security management,\ntechnology providers,\nsupporting organizations, and\nusers.\nThis chapter is intended to give the reader a basic familiarity with the major organizational elements that play a role in computer security. It does not describe all responsibilities of each in detail, nor will this chapter apply uniformly to all organizations. Organizations, like individuals, have unique characteristics, and no single template can apply to all. Smaller organizations, in particular, are not likely to have separate individuals performing many of the functions described in this chapter. Even at some larger organizations, some of the duties described in this chapter may not be staffed with full-time personnel. What is important is that these functions be handled in a manner appropriate for the organization.\nAs with the rest of the handbook, this chapter is not intended to be used as an audit guide.\n" }, { "page_number": 28, "text": "I. Introduction and Overview\n16 The functional manager/application owner may or may not be the data owner. Particularly within the government, the concept of the data owner may not be the most appropriate, since citizens ultimately own the data.\n16\nSenior management has ultimate responsibility for the security of an organization's computer systems.\n3.1 Senior Management\nUltimately, responsibility for the success of an organization lies with its senior managers. They establish the organization's computer security program and its overall program goals, objectives, and priorities in order to support the mission of the organization. 
Ultimately, the head of the organization is responsible for ensuring that adequate resources are applied to the program and that it is successful. Senior managers are also responsible for setting a good example for their employees by following all applicable security practices.\n3.2 Computer Security Management\nThe Computer Security Program Manager (and support staff) directs the organization's day-to-day management of its computer security program. This individual is also responsible for coordinating all security-related interactions among organizational elements involved in the computer security program as well as those external to the organization.\n3.3 Program and Functional Managers/Application Owners\nProgram or Functional Managers/Application Owners are responsible for a program or function (e.g., procurement or payroll) including the supporting computer system.16 Their responsibilities include providing for appropriate security, including management, operational, and technical controls. These officials are usually assisted by a technical staff that oversees the actual workings of the system. This kind of support is no different for other staff members who work on other program implementation issues.\nAlso, the program or functional manager/application owner is often aided by a Security Officer (frequently dedicated to that system, particularly if it is large or critical to the organization) in developing and implementing security requirements.\n3.4 Technology Providers\nSystem Management/System Administrators. These personnel are the managers and technicians who design and operate computer systems. They are responsible for implementing technical security on computer systems and for being familiar with security technology that relates to their system. They also need to ensure the continuity of their services to meet the needs of functional\n" }, { "page_number": 29, "text": "3. Roles and Responsibilities\n17\nWhat is a Program/Functional Manager?\nThe term program/functional manager or application owner may not be familiar or immediately apparent to all readers. The examples provided below should help the reader better understand this important concept. In reviewing these examples, note that computer systems often serve more than one group or function.\nExample 1. A personnel system serves an entire organization. However, the Personnel Manager would normally be the application owner. This applies even if the application is distributed so that supervisors and clerks throughout the organization use and update the system.\nExample 2. A federal benefits system provides monthly benefit checks to 500,000 citizens. The processing is done at a mainframe data center. The Benefits Program Manager is the application owner.\nExample 3. A mainframe data processing organization supports several large applications. The mainframe director is not the Functional Manager for any of the applications.\nExample 4. A 100-person division has a diverse collection of personal computers, work stations, and minicomputers used for general office support, Internet connectivity, and computer-oriented research. The division director would normally be the Functional Manager responsible for the system.\nmanagers as well as analyzing technical vulnerabilities in their systems (and their security implications). They are often a part of a larger Information Resources Management (IRM) organization.\nCommunications/Telecommunications Staff. 
This office is normally responsible for providing communications services, including voice, data, video, and fax service. Their responsibilities for communication systems are similar to those that systems management officials have for their systems. The staff may not be separate from other technology service providers or the IRM office.\nSystem Security Manager/Officers. Often assisting system management officials in this effort is a system security manager/officer responsible for day-to-day security implementation/administration duties. Although not normally part of the computer security program management office, this officer is responsible for coordinating the security efforts of a particular system(s). This person works closely with system management personnel, the computer security program manager, and the program or functional manager's security officer. In fact, depending upon the organization, this may be the same individual as the program or functional manager's security officer. This person may or may not be a part of the organization's overall security office.\nHelp Desk. Whether or not a Help Desk is tasked with incident handling, it needs to be able to recognize security incidents and refer the caller to the appropriate person or organization for a response.\n" }, { "page_number": 30, "text": "I. Introduction and Overview\n17 Categorization of functions and organizations in this section as supporting is in no way meant to imply any degree of lessened importance. Also, note that this list is not all-inclusive. Additional supporting functions that can be provided may include configuration management, independent verification and validation, and independent penetration testing teams.\n18 The term outside auditors includes both auditors external to the organization as a whole and the organization's internal audit staff. For purposes of this discussion, both are outside the management chain responsible for the operation of the system.\n18\nWho Should Be the Accrediting Official?\nThe Accrediting Officials are agency officials who have authority to accept an application's security safeguards and approve a system for operation. The Accrediting Officials must also be authorized to allocate resources to achieve acceptable security and to remedy security deficiencies. Without this authority, they cannot realistically take responsibility for the accreditation decision. In general, Accreditors are senior officials, who may be the Program or Function Manager/Application Owner. For some very sensitive applications, the Senior Executive Officer is appropriate as an Accrediting Official. In general, the more sensitive the application, the higher the Accrediting Officials are in the organization.\nWhere privacy is a concern, federal managers can be held personally liable for security inadequacies. The issuing of the accreditation statement fixes security responsibility, thus making explicit a responsibility that might otherwise be implicit. Accreditors should consult the agency general counsel to determine their personal security liabilities.\nNote that accreditation is a formality unique to the government.\nSource: NIST FIPS 102\n3.5 Supporting Functions17\nThe security responsibilities of managers, technology providers and security officers are supported by functions normally assigned to others. Some of the more important of these are described below.\nAudit. 
Auditors are responsible for examining systems to see whether the system is meeting stated security requirements, including system and organization policies, and whether security controls are appropriate. Informal audits can be performed by those operating the system under review or, if impartiality is important, by outside auditors. [The term outside auditors includes both auditors external to the organization as a whole and the organization's internal audit staff. For purposes of this discussion, both are outside the management chain responsible for the operation of the system.]

Physical Security. The physical security office is usually responsible for developing and enforcing appropriate physical security controls, in consultation with computer security management, program and functional managers, and others, as appropriate. Physical security should address not only central computer installations, but also backup facilities and office environments. In the government, this office is often responsible for the processing of personnel background checks and security clearances.

Disaster Recovery/Contingency Planning Staff. Some organizations have a separate disaster recovery/contingency planning staff. In this case, they are normally responsible for contingency planning for the organization as a whole, and normally work with program and functional managers/application owners, the computer security staff, and others to obtain additional contingency planning support, as needed.

Quality Assurance. Many organizations have established a quality assurance program to improve the products and services they provide to their customers. The quality officer should have a working knowledge of computer security and how it can be used to improve the quality of the program, for example, by improving the integrity of computer-based information, the availability of services, and the confidentiality of customer information, as appropriate.

Procurement. The procurement office is responsible for ensuring that organizational procurements have been reviewed by appropriate officials. The procurement office cannot be responsible for ensuring that goods and services meet computer security expectations, because it lacks the technical expertise. Nevertheless, this office should be knowledgeable about computer security standards and should bring them to the attention of those requesting such technology.

Training Office. An organization has to decide whether the primary responsibility for training users, operators, and managers in computer security rests with the training office or the computer security program office. In either case, the two organizations should work together to develop an effective training program.

Personnel. The personnel office is normally the first point of contact in helping managers determine if a security background investigation is necessary for a particular position. The personnel and security offices normally work closely on issues involving background investigations. The personnel office may also be responsible for providing security-related exit procedures when employees leave an organization.

Risk Management/Planning Staff. Some organizations have a full-time staff devoted to studying all types of risks to which the organization may be exposed. This function should include computer security-related risks, although this office normally focuses on "macro" issues. Specific risk analyses for specific computer systems are normally not performed by this office.

Physical Plant.
This office is responsible for ensuring the provision of such services as electrical power and environmental controls, necessary for the safe and secure operation of an organization's systems. Often it is augmented by separate medical, fire, hazardous waste, or life safety personnel.

3.6 Users

Users also have responsibilities for computer security. Two kinds of users, and their associated responsibilities, are described below.

Users of Information. Individuals who use information provided by the computer can be considered the "consumers" of the applications. Sometimes they directly interact with the system (e.g., to generate a report on screen), in which case they are also users of the system (as discussed below). Other times, they may only read computer-prepared reports or only be briefed on such material. Some users of information may be very far removed from the computer system. Users of information are responsible for letting the functional managers/application owners (or their representatives) know what their needs are for the protection of information, especially for its integrity and availability.

Users of Systems. Individuals who directly use computer systems (typically via a keyboard) are responsible for following security procedures, for reporting security problems, and for attending required computer security and functional training.

References

Wood, Charles Cresson. "How to Achieve a Clear Definition of Responsibilities for Information Security." DATAPRO Information Security Service, IS115-200-101, 7 pp. April 1993.

Chapter 4
COMMON THREATS: A BRIEF OVERVIEW

Computer systems are vulnerable to many threats that can inflict various types of damage resulting in significant losses. This damage can range from errors harming database integrity to fires destroying entire computer centers. Losses can stem, for example, from the actions of supposedly trusted employees defrauding a system, from outside hackers, or from careless data entry clerks. Precision in estimating computer security-related losses is not possible because many losses are never discovered, and others are "swept under the carpet" to avoid unfavorable publicity.
The effects of various threats vary considerably: some affect the confidentiality or integrity of data, while others affect the availability of a system.

This chapter presents a broad view of the risky environment in which systems operate today. The threats and associated losses presented in this chapter were selected based on their prevalence and significance in the current computing environment and their expected growth. This list is not exhaustive, and some threats may combine elements from more than one area. [As is true for this publication as a whole, this chapter does not address threats to national security systems, which fall outside of NIST's purview. The term "national security systems" is defined in National Security Directive 42 (7/5/90) as being "those telecommunications and information systems operated by the U.S. Government, its contractors, or agents, that contain classified information or, as set forth in 10 U.S.C. 2315, that involves intelligence activities, involves cryptologic activities related to national security, involves command and control of military forces, involves equipment that is an integral part of a weapon or weapon system, or involves equipment that is critical to the direct fulfillment of military or intelligence missions."] This overview of many of today's common threats may prove useful to organizations studying their own threat environments; however, the perspective of this chapter is very broad. Thus, threats against particular systems could be quite different from those discussed here. [A discussion of how threats, vulnerabilities, safeguard selection, and risk mitigation are related is contained in Chapter 7, Risk Management.]

To control the risks of operating an information system, managers and users need to know the vulnerabilities of the system and the threats that may exploit them. [Note that one protects against threats that can exploit a vulnerability. If a vulnerability exists but no threat exists to take advantage of it, little or nothing is gained by protecting against the vulnerability. See Chapter 7, Risk Management.] Knowledge of the threat environment allows the system manager to implement the most cost-effective security measures. In some cases, managers may find it more cost-effective to simply tolerate the expected losses. Such decisions should be based on the results of a risk analysis. (See Chapter 7.)

4.1 Errors and Omissions

Errors and omissions are an important threat to data and system integrity. These errors are caused not only by data entry clerks processing hundreds of transactions per day, but also by all types of users who create and edit data. Many programs, especially those designed by users for personal computers, lack quality control measures. However, even the most sophisticated programs cannot detect all types of input errors or omissions. A sound awareness and training program can help an organization reduce the number and severity of errors and omissions.

Users, data entry clerks, system operators, and programmers frequently make errors that contribute directly or indirectly to security problems. In some cases, the error is the threat, such as a data entry error or a programming error that crashes a system. In other cases, the errors create vulnerabilities. Errors can occur during all phases of the systems life cycle. A long-term survey of computer-related economic losses conducted by Robert Courtney, a computer security consultant and former member of the Computer System Security and Privacy Advisory Board, found that 65 percent of losses to organizations were the result of errors and omissions. [Computer System Security and Privacy Advisory Board, 1991 Annual Report (Gaithersburg, MD), March 1992, p. 18. The categories into which the problems were placed and the percentages of economic loss attributed to each were: 65%, errors and omissions; 13%, dishonest employees; 6%, disgruntled employees; 8%, loss of supporting infrastructure, including power, communications, water, sewer, transportation, fire, flood, civil unrest, and strikes; 5%, water, not related to fires and floods; less than 3%, outsiders, including viruses, espionage, dissidents, and malcontents of various kinds, and former employees who have been away for more than six weeks.]
This figure was relatively consistent between both private and public sector organizations.

Programming and development errors, often called "bugs," can range in severity from benign to catastrophic. In a 1989 study for the House Committee on Science, Space and Technology, entitled Bugs in the Program, the staff of the Subcommittee on Investigations and Oversight summarized the scope and severity of this problem in terms of government systems as follows:

    As expenditures grow, so do concerns about the reliability, cost and accuracy of ever-larger and more complex software systems. These concerns are heightened as computers perform more critical tasks, where mistakes can cause financial turmoil, accidents, or in extreme cases, death. [House Committee on Science, Space and Technology, Subcommittee on Investigations and Oversight, Bugs in the Program: Problems in Federal Government Computer Software Development and Regulation, 101st Cong., 1st sess., 3 August 1989, p. 2.]

Since the study's publication, the software industry has changed considerably, with measurable improvements in software quality. Yet software "horror stories" still abound, and the basic principles and problems analyzed in the report remain the same. While there have been great improvements in program quality, as reflected in decreasing errors per 1,000 lines of code, the concurrent growth in program size often seriously diminishes the beneficial effects of these program quality enhancements.

Installation and maintenance errors are another source of security problems. For example, an audit by the President's Council for Integrity and Efficiency (PCIE) in 1988 found that every one of the ten mainframe computer sites studied had installation and maintenance errors that introduced significant security vulnerabilities. [President's Council on Integrity and Efficiency, Review of General Controls in Federal Computer Systems, October 1988.]

4.2 Fraud and Theft

Computer systems can be exploited for both fraud and theft, both by "automating" traditional methods of fraud and by using new methods. For example, individuals may use a computer to skim small amounts of money from a large number of financial accounts, assuming that small discrepancies may not be investigated. Financial systems are not the only ones at risk. Systems that control access to any resource are targets (e.g., time and attendance systems, inventory systems, school grading systems, and long-distance telephone systems).

Computer fraud and theft can be committed by insiders or outsiders. Insiders (i.e., authorized users of a system) are responsible for the majority of fraud. A 1993 InformationWeek/Ernst and Young study found that 90 percent of Chief Information Officers viewed employees "who do not need to know" information as threats. [Bob Violino and Joseph C. Panettieri, "Tempting Fate," InformationWeek, October 4, 1993: p. 42.] The U.S. Department of Justice's Computer Crime Unit contends that "insiders constitute the greatest threat to computer systems." [Letter from Scott Charney, Chief, Computer Crime Unit, U.S. Department of Justice, to Barbara Guttman, NIST, July 29, 1993.] Since insiders have both access to and familiarity with the victim computer system (including what resources it controls and its flaws), authorized system users are in a better position to commit crimes. Insiders can be either general users (such as clerks) or technical staff members.
An organization's former employees, with their knowledge of an organization's operations, may also pose a threat, particularly if their access is not terminated promptly.

In addition to the use of technology to commit fraud and theft, computer hardware and software may be vulnerable to theft. For example, one study conducted by Safeware Insurance found that $882 million worth of personal computers was lost due to theft in 1992. ["Theft, Power Surges Cause Most PC Losses," Infosecurity News, September/October 1993, p. 13.]

4.3 Employee Sabotage

Employees are most familiar with their employer's computers and applications, including knowing what actions might cause the most damage, mischief, or sabotage. The downsizing of organizations in both the public and private sectors has created a group of individuals with organizational knowledge, who may retain potential system access (e.g., if system accounts are not deleted in a timely manner). [Charney.] The number of incidents of employee sabotage is believed to be much smaller than the instances of theft, but the cost of such incidents can be quite high.

Common examples of computer-related employee sabotage include:

destroying hardware or facilities,
planting logic bombs that destroy programs or data,
entering data incorrectly,
"crashing" systems,
deleting data,
holding data hostage, and
changing data.

Martin Sprouse, author of Sabotage in the American Workplace, reported that the motivation for sabotage can range from altruism to revenge:

    As long as people feel cheated, bored, harassed, endangered, or betrayed at work, sabotage will be used as a direct method of achieving job satisfaction -- the kind that never has to get the bosses' approval. [Martin Sprouse, ed., Sabotage in the American Workplace: Anecdotes of Dissatisfaction, Mischief and Revenge (San Francisco, CA: Pressure Drop Press, 1992), p. 7.]

4.4 Loss of Physical and Infrastructure Support

The loss of supporting infrastructure includes power failures (outages, spikes, and brownouts), loss of communications, water outages and leaks, sewer problems, lack of transportation services, fire, flood, civil unrest, and strikes. These losses include such dramatic events as the explosion at the World Trade Center and the Chicago tunnel flood, as well as more common events, such as broken water pipes. Many of these issues are covered in Chapter 15. A loss of infrastructure often results in system downtime, sometimes in unexpected ways. For example, employees may not be able to get to work during a winter storm, although the computer system may be functional.

4.5 Malicious Hackers

The term malicious hackers, sometimes called crackers, refers to those who break into computers without authorization. They can include both outsiders and insiders. Much of the rise of hacker activity is often attributed to increases in connectivity in both government and industry. One 1992 study of a particular Internet site (i.e., one computer system) found that hackers attempted to break in at least once every other day. [Steven M. Bellovin, "There Be Dragons," Proceedings of the Third Usenix UNIX Security Symposium.]
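In practice, break-in attempts like these are usually noticed by routinely reviewing system logs. The following Python fragment is a minimal illustrative sketch, not drawn from this handbook: it counts failed login attempts per source host in a log file whose format, path, and alert threshold are all hypothetical.

    import re
    from collections import Counter

    # Hypothetical log line format:
    # "1992-08-01 03:14:00 FAILED LOGIN user=alice from=192.0.2.7"
    FAILED_LOGIN = re.compile(r"FAILED LOGIN user=(\S+) from=(\S+)")

    def suspicious_hosts(log_path, threshold=10):
        """Return hosts with more failed login attempts than an arbitrary threshold."""
        failures = Counter()
        with open(log_path) as log:
            for line in log:
                match = FAILED_LOGIN.search(line)
                if match:
                    failures[match.group(2)] += 1
        return {host: count for host, count in failures.items() if count > threshold}

    if __name__ == "__main__":
        for host, count in suspicious_hosts("auth.log").items():
            print(f"{host}: {count} failed logins -- review for possible break-in attempts")

A real installation would tie such a report into its incident-handling procedures rather than rely on ad hoc inspection.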
The hacker threat should be considered in terms of past and potential future damage. Although current losses due to hacker attacks are significantly smaller than losses due to insider theft and sabotage, the hacker problem is widespread and serious. One example of malicious hacker activity is that directed against the public telephone system.

Studies by the National Research Council and the National Security Telecommunications Advisory Committee show that hacker activity is not limited to toll fraud. It also includes the ability to break into telecommunications systems (such as switches), resulting in the degradation or disruption of system availability. While unable to reach a conclusion about the degree of threat or risk, these studies underscore the ability of hackers to cause serious damage. [National Research Council, Growing Vulnerability of the Public Switched Networks: Implication for National Security Emergency Preparedness (Washington, DC: National Academy Press), 1989; Report of the National Security Task Force, November 1990.]

The hacker threat often receives more attention than more common and dangerous threats. The U.S. Department of Justice's Computer Crime Unit suggests three reasons for this.

First, the hacker threat is a more recently encountered threat. Organizations have always had to worry about the actions of their own employees and could use disciplinary measures to reduce that threat. However, these measures are ineffective against outsiders who are not subject to the rules and regulations of the employer.

Second, organizations do not know the purposes of a hacker -- some hackers browse, some steal, some damage. This inability to identify purposes can suggest that hacker attacks have no limitations.

Third, hacker attacks make people feel vulnerable, particularly because their identity is unknown. For example, suppose a painter is hired to paint a house and, once inside, steals a piece of jewelry. Other homeowners in the neighborhood may not feel threatened by this crime and will protect themselves by not doing business with that painter. But if a burglar breaks into the same house and steals the same piece of jewelry, the entire neighborhood may feel victimized and vulnerable. [Charney.]

4.6 Industrial Espionage

Industrial espionage is the act of gathering proprietary data from private companies or the government [the government is included here because it often is the custodian for proprietary data (e.g., patent applications)] for the purpose of aiding another company(ies). Industrial espionage can be perpetrated either by companies seeking to improve their competitive advantage or by governments seeking to aid their domestic industries. Foreign industrial espionage carried out by a government is often referred to as economic espionage. Since information is processed and stored on computer systems, computer security can help protect against such threats; it can do little, however, to reduce the threat of authorized employees selling that information.

Industrial espionage is on the rise.
A 1992 study sponsored by the American Society for Industrial Security (ASIS) found that proprietary business information theft had increased 260 percent since 1985. The data indicated 30 percent of the reported losses in 1991 and 1992 had foreign involvement. The study also found that 58 percent of thefts were perpetrated by current or former employees. [The figures of 30 and 58 percent are not mutually exclusive.] The three most damaging types of stolen information were pricing information, manufacturing process information, and product development and specification information. Other types of information stolen included customer lists, basic research, sales data, personnel data, compensation data, cost data, proposals, and strategic plans. [Richard J. Heffernan and Dan T. Swartwood, "Trends in Competitive Intelligence," Security Management 37, no. 1 (January 1993), pp. 70-73.]

Within the area of economic espionage, the Central Intelligence Agency has stated that the main objective is obtaining information related to technology, but that information on U.S. Government policy deliberations concerning foreign affairs and information on commodities, interest rates, and other economic factors is also a target. [Robert M. Gates, testimony before the House Subcommittee on Economic and Commercial Law, Committee on the Judiciary, 29 April 1992.] The Federal Bureau of Investigation concurs that technology-related information is the main target, but also lists corporate proprietary information, such as negotiating positions and other contracting data, as a target. [William S. Sessions, testimony before the House Subcommittee on Economic and Commercial Law, Committee on the Judiciary, 29 April 1992.]

4.7 Malicious Code

Malicious code refers to viruses, worms, Trojan horses, logic bombs, and other "uninvited" software. Sometimes mistakenly associated only with personal computers, malicious code can attack other platforms.

Malicious Software: A Few Key Terms

Virus: A code segment that replicates by attaching copies of itself to existing executables. The new copy of the virus is executed when a user executes the new host program. The virus may include an additional "payload" that triggers when specific conditions are met. For example, some viruses display a text string on a particular date. There are many types of viruses, including variants, overwriting, resident, stealth, and polymorphic.

Trojan Horse: A program that performs a desired task, but that also includes unexpected (and undesirable) functions. Consider as an example an editing program for a multiuser system. This program could be modified to randomly delete one of the users' files each time they perform a useful function (editing), but the deletions are unexpected and definitely undesired!

Worm: A self-replicating program that is self-contained and does not require a host program. The program creates a copy of itself and causes it to execute; no user intervention is required. Worms commonly use network services to propagate to other host systems.

(Source: NIST Special Publication 800-5.)

A 1993 study of viruses found that while the number of known viruses is increasing exponentially, the number of virus incidents is not. [Jeffrey O. Kephart and Steve R. White, "Measuring and Modeling Computer Virus Prevalence," Proceedings, 1993 IEEE Computer Society Symposium on Research in Security and Privacy (May 1993): 14.] The study concluded that viruses are becoming more prevalent, but only "gradually":

    The rate of PC-DOS virus incidents in medium to large North American businesses appears to be approximately 1 per 1000 PCs per quarter; the number of infected machines is perhaps 3 or 4 times this figure if we assume that most such businesses are at least weakly protected against viruses. [Ibid. Estimates of virus occurrences may not consider the strength of an organization's antivirus program.]
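To make the quoted rate concrete, a back-of-the-envelope calculation (ours, not the study's) for a hypothetical organization:

    # Illustrative arithmetic only; the 5,000-PC fleet size is hypothetical,
    # and the rates come from the Kephart and White estimate quoted above.
    pcs = 5000
    incident_rate = 1 / 1000                          # incidents per PC per quarter
    incidents_per_quarter = pcs * incident_rate       # 5 incidents per quarter
    infected_machines = incidents_per_quarter * 3.5   # "3 or 4 times" the incident figure
    print(f"Expected incidents per quarter: {incidents_per_quarter:.0f}")
    print(f"Rough number of infected machines: {infected_machines:.0f}")  # about 18

Even at this "gradual" rate, a large organization can expect a steady trickle of infections every quarter.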
Actual costs attributed to the presence of malicious code have resulted primarily from system outages and staff time involved in repairing the systems. Nonetheless, these costs can be significant.

4.8 Foreign Government Espionage

In some instances, threats posed by foreign government intelligence services may be present. In addition to possible economic espionage, foreign intelligence services may target unclassified systems to further their intelligence missions. Some unclassified information that may be of interest includes travel plans of senior officials, civil defense and emergency preparedness, manufacturing technologies, satellite data, personnel and payroll data, and law enforcement, investigative, and security files. Guidance should be sought from the cognizant security office regarding such threats.

4.9 Threats to Personal Privacy

The accumulation of vast amounts of electronic information about individuals by governments, credit bureaus, and private companies, combined with the ability of computers to monitor, process, and aggregate large amounts of information about individuals, has created a threat to individual privacy. The possibility that all of this information and technology may be able to be linked together has arisen as a specter of the modern information age. This is often referred to as "Big Brother." To guard against such intrusion, Congress has enacted legislation, over the years, such as the Privacy Act of 1974 and the Computer Matching and Privacy Protection Act of 1988, which defines the boundaries of the legitimate uses of personal information collected by the government.

The threat to personal privacy arises from many sources. In several cases federal and state employees have sold personal information to private investigators or other "information brokers." One such case was uncovered in 1992 when the Justice Department announced the arrest of over two dozen individuals engaged in buying and selling information from Social Security Administration (SSA) computer files. [House Committee on Ways and Means, Subcommittee on Social Security, Illegal Disclosure of Social Security Earnings Information by Employees of the Social Security Administration and the Department of Health and Human Services' Office of Inspector General: Hearing, 102nd Cong., 2nd sess., 24 September 1992, Serial 102-131.] During the investigation, auditors learned that SSA employees had unrestricted access to over 130 million employment records. Another investigation found that 5 percent of the employees in one region of the IRS had browsed through tax records of friends, relatives, and celebrities. [Stephen Barr, "Probe Finds IRS Workers Were 'Browsing' in Files," The Washington Post, 3 August 1993, p. A1.] Some of the employees used the information to create fraudulent tax refunds, but many were acting simply out of curiosity.

As more of these cases come to light, many individuals are becoming increasingly concerned about threats to their personal privacy. A July 1993 special report in MacWorld cited polling data taken by Louis Harris and Associates showing that in 1970 only 33 percent of respondents were
A July 1993 special report in MacWorld cited polling data\ntaken by Louis Harris and Associates showing that in 1970 only 33 percent of respondents were\n" }, { "page_number": 41, "text": "4. Threats: A Brief Overview\n Charles Piller, \"Special Report: Workplace and Consumer Privacy Under Siege,\" MacWorld, July 1993, pp.\n44\n1-14.\n29\nconcerned about personal privacy. By 1990, that number had jumped to 79 percent.44\nWhile the magnitude and cost to society of the personal privacy threat are difficult to gauge, it is\napparent that information technology is becoming powerful enough to warrant fears of both\ngovernment and corporate \"Big Brothers.\" Increased awareness of the problem is needed. \nReferences\nHouse Committee on Science, Space and Technology, Subcommittee on Investigations and\nOversight. Bugs in the Program: Problems in Federal Government Computer Software\nDevelopment and Regulation. 101st Congress, 1st session, August 3, 1989.\nNational Research Council. Computers at Risk: Safe Computing in the Information Age.\nWashington, DC: National Academy Press, 1991.\nNational Research Council. Growing Vulnerability of the Public Switched Networks: Implication\nfor National Security Emergency Preparedness. Washington, DC: National Academy Press,\n1989.\nNeumann, Peter G. Computer-Related Risks. Reading, MA: Addison-Wesley, 1994.\nSchwartau, W. Information Warfare. New York, NY: Thunders Mouth Press, 1994 (Rev.\n1995). \nSprouse, Martin, ed. Sabotage in the American Workplace: Anecdotes of Dissatisfaction,\nMischief, and Revenge. San Francisco, CA: Pressure Drop Press, 1992.\n" }, { "page_number": 42, "text": "30\n" }, { "page_number": 43, "text": "31\nII. MANAGEMENT CONTROLS\n" }, { "page_number": 44, "text": "32\n" }, { "page_number": 45, "text": " There are variations in the use of the term policy, as noted in a 1994 Office of Technology Assessment\n45\nreport, Information Security and Privacy in Network Environments: \"Security Policy refers here to the statements\nmade by organizations, corporations, and agencies to establish overall policy on information access and\nsafeguards. Another meaning comes from the Defense community and refers to the rules relating clearances of\nusers to classification of information. In another usage, security policies are used to refine and implement the\nbroader, organizational security policy....\"\n These are the kind of policies that computer security experts refer to as being enforced by the system's\n46\ntechnical controls as well as its management and operational controls. \n In general, policy is set by a manager. However, in some cases, it may be set by a group (e.g., an\n47\nintraorganizational policy board).\n33\nPolicy means different things to different people. \nThe term \"policy\" is used in this chapter in a broad\nmanner to refer to important computer security-\nrelated decisions.\nChapter 5\nCOMPUTER SECURITY POLICY\nIn discussions of computer security, the term policy has more than one meaning. Policy is\n45\nsenior management's directives to create a computer security program, establish its goals, and\nassign responsibilities. The term policy is also used to refer to the specific security rules for\nparticular systems. 
Additionally, policy may refer to entirely different matters, such as the specific managerial decisions setting an organization's e-mail privacy policy or fax security policy. [These are the kind of policies that computer security experts refer to as being enforced by the system's technical controls as well as its management and operational controls.]

In this chapter the term computer security policy is defined as the "documentation of computer security decisions," which covers all the types of policy described above. [In general, policy is set by a manager. However, in some cases, it may be set by a group (e.g., an intraorganizational policy board).] In making these decisions, managers face hard choices involving resource allocation, competing objectives, and organizational strategy related to protecting both technical and information resources as well as guiding employee behavior. Managers at all levels make choices that can result in policy, with the scope of the policy's applicability varying according to the scope of the manager's authority. In this chapter we use the term policy in a broad manner to encompass all of the types of policy described above, regardless of the level of manager who sets the particular policy.

Managerial decisions on computer security issues vary greatly. To differentiate among various kinds of policy, this chapter categorizes them into three basic types:

Program policy is used to create an organization's computer security program.

Issue-specific policies address specific issues of concern to the organization.
\nThey are detailed steps to be followed by users, system operations personnel, or others to accomplish a\nparticular task (e.g., preparing new user accounts and assigning the appropriate privileges). \nSome organizations issue overall computer security manuals, regulations, handbooks, or similar\ndocuments. These may mix policy, guidelines, standards, and procedures, since they are closely linked. \nWhile manuals and regulations can serve as important tools, it is often useful if they clearly distinguish\nbetween policy and its implementation. This can help in promoting flexibility and cost-effectiveness by\noffering alternative implementation approaches to achieving policy goals.\nSystem-specific policies focus on decisions taken by management to protect a\nparticular system. \n48\nProcedures, standards, and guidelines are used to describe how these policies will be implemented\nwithin an organization. (See following box.)\n" }, { "page_number": 47, "text": "5. Computer Security Policy\n No standard terms exist for various types of policies. These terms are used to aid the reader's understanding\n49\nof this topic; no implication of their widespread usage is intended.\n35\nFamiliarity with various types and components of policy will aid managers in addressing computer\nsecurity issues important to the organization. Effective policies ultimately result in the\ndevelopment and implementation of a better computer security program and better protectio n of\nsystems and information. \nThese types of policy are described to aid the reader's understanding. It is not important that\n49\none categorizes specific organizational policies into these three categories; it is more important to\nfocus on the functions of each. \n5.1 Program Policy\nA management official, normally the head of the organization or the senior administration official,\nissues program policy to establish (or restructure) the organization's computer security program\nand its basic structure. This high-level policy defines the purpose of the program and its scope\nwithin the organization; assigns responsibilities (to the computer security organization) for direct\nprogram implementation, as well as other responsibilities to related offices (such as the\nInformation Resources Management [IRM] organization); and addresses compliance issues. \nProgram policy sets organizational strategic directions for security and assigns resources for its\nimplementation. \n5.1.1 Basic Components of Program Policy \nComponents of program policy should address:\n \nPurpose. Program policy normally includes a statement describing why the program is being\nestablished. This may include defining the goals of the program. Security-related needs, such as\nintegrity, availability, and confidentiality, can form the basis of organizational goals established in\npolicy. For instance, in an organization responsible for maintaining large mission-critical\ndatabases, reduction in errors, data loss, data corruption, and recovery might be specifically\nstressed. In an organization responsible for maintaining confidential personal data, however,\ngoals might emphasize stronger protection against unauthorized disclosure. \nScope. Program policy should be clear as to which resources -- including facilities, hardware, and\nsoftware, information, and personnel -- the computer security program covers. In many cases, the\nprogram will encompass all systems and organizational personnel, but this is not always true. 
In some instances, it may be appropriate for an organization's computer security program to be more limited in scope.

Program policy establishes the security program and assigns program management and supporting responsibilities.

Responsibilities. Once the computer security program is established, its management is normally assigned to either a newly created or existing office. [The program management structure should be organized to best address the goals of the program and respond to the particular operating and risk environment of the organization. Important issues for the structure of the computer security program include management and coordination of security-related resources, interaction with diverse communities, and the ability to relay issues of concern, trade-offs, and recommended actions to upper management. (See Chapter 6, Computer Security Program Management.)]

The responsibilities of officials and offices throughout the organization also need to be addressed, including line managers, applications owners, users, and the data processing or IRM organizations. This section of the policy statement, for example, would distinguish between the responsibilities of computer services providers and those of the managers of applications using the provided services. The policy could also establish operational security offices for major systems, particularly those at high risk or most critical to organizational operations. It also can serve as the basis for establishing employee accountability.

At the program level, responsibilities should be specifically assigned to those organizational elements and officials responsible for the implementation and continuity of the computer security policy. [In assigning responsibilities, it is necessary to be specific; such assignments as "computer security is everyone's responsibility," in reality, mean no one has specific responsibility.]

Compliance. Program policy typically will address two compliance issues:

1. General compliance to ensure meeting the requirements to establish a program and the responsibilities assigned therein to various organizational components. Often an oversight office (e.g., the Inspector General) is assigned responsibility for monitoring compliance, including how well the organization is implementing management's priorities for the program.

2. The use of specified penalties and disciplinary actions. Since the security policy is a high-level document, specific penalties for various infractions are normally not detailed here; instead, the policy may authorize the creation of compliance structures that include violations and specific disciplinary action(s). [The need to obtain guidance from appropriate legal counsel is critical when addressing issues involving penalties and disciplinary action for individuals. The policy does not need to restate penalties already provided
\n Examples presented in this section are not all-inclusive nor meant to imply that policies in each of these\n53\nareas are required by all organizations.\n37\nBoth new technologies and the appearance of new\nthreats often require the creation of issue-specific\npolicies.\n \nThose developing compliance policy should remember that violations of policy can be\nunintentional on the part of employees. For example, nonconformance can often be due to a lack\nof knowledge or training. \n5.2 Issue-Specific Policy \n \nWhereas program policy is intended to address the broad organizationwide computer security\nprogram, issue-specific policies are developed to focus on areas of current relevance and concern\n(and sometimes controversy) to an organization. Management may find it appropriate, for\nexample, to issue a policy on how the organization will approach contingency planning\n(centralized vs. decentralized) or the use of a particular methodology for managing risk to\nsystems. A policy could also be issued, for example, on the appropriate use of a cutting-edge\ntechnology (whose security vulnerabilities are still largely unknown) within the organization. \nIssue-specific policies may also be appropriate when new issues arise, such as when implementing\na recently passed law requiring additional protection of particular information. Program policy is\nusually broad enough that it does not require much modification over time, whereas issue-specific\npolicies are likely to require more frequent revision as changes in technology and related factors\ntake place. \nIn general, for issue-specific and system-specific policy, the issuer is a senior official; the more\nglobal, controversial, or resource-intensive, the more senior the issuer.\n5.2.1 Example Topics for Issue-Specific\nPolicy53\nThere are many areas for which issue-specific\npolicy may be appropriate. Two examples are\nexplained below. \nInternet Access. Many organizations are looking at the Internet as a means for expanding their\nresearch opportunities and communications. Unquestionably, connecting to the Internet yields\nmany benefits and some disadvantages. Some issues an Internet access policy may address\ninclude who will have access, which types of systems may be connected to the network, what\ntypes of information may be transmitted via the network, requirements for user authentication for\nInternet-connected systems, and the use of firewalls and secure gateways.\n" }, { "page_number": 50, "text": "II. Management Controls\n38\nOther potential candidates for issue-specific\npolicies include: approach to risk management and\ncontingency planning, protection of\nconfidential/proprietary information, unauthorized\nsoftware, acquisition of software, doing computer\nwork at home, bringing in disks from outside the\nworkplace, access to other employees' files,\nencryption of files and e-mail, rights of privacy,\nresponsibility for correctness of data, suspected\nmalicious code, and physical emergencies. \nE-Mail Privacy. Users of computer e-mail\nsystems have come to rely upon that service\nfor informal communication with colleagues\nand others. However, since the system is\ntypically owned by the employing\norganization, from time-to-time, management\nmay wish to monitor the employee's e-mail for\nvarious reasons (e.g., to be sure that it is used\nfor business purposes only or if they are\nsuspected of distributing viruses, sending\noffensive e-mail, or disclosing organizational\nsecrets.) 
On the other hand, users may have an expectation of privacy, similar to that accorded U.S. mail. Policy in this area addresses what level of privacy will be accorded e-mail and the circumstances under which it may or may not be read.

5.2.2 Basic Components of Issue-Specific Policy

As suggested for program policy, a useful structure for issue-specific policy is to break the policy into its basic components.

Issue Statement. To formulate a policy on an issue, managers first must define the issue with any relevant terms, distinctions, and conditions included. It is also often useful to specify the goal or justification for the policy, which can be helpful in gaining compliance with the policy. For example, an organization might want to develop an issue-specific policy on the use of "unofficial software," which might be defined to mean any software not approved, purchased, screened, managed, and owned by the organization. Additionally, the applicable distinctions and conditions might then need to be included, for instance, for software privately owned by employees but approved for use at work, and for software owned and used by other businesses under contract to the organization.

Statement of the Organization's Position. Once the issue is stated and related terms and conditions are discussed, this section is used to clearly state the organization's position (i.e., management's decision) on the issue. To continue the previous example, this would mean stating whether use of unofficial software as defined is prohibited in all or some cases, whether there are further guidelines for approval and use, or whether case-by-case exceptions will be granted, by whom, and on what basis.

Applicability. Issue-specific policies also need to include statements of applicability. This means clarifying where, how, when, to whom, and to what a particular policy applies. For example, it could be that the hypothetical policy on unofficial software is intended to apply only to the organization's own on-site resources and employees and not to contractors with offices at other locations. Additionally, the policy's applicability to employees travelling among different sites and/or working at home who need to transport and use disks at multiple sites might need to be clarified.
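Such a policy is easier to monitor when it can be checked mechanically. As a purely illustrative sketch (not part of the handbook's guidance), the following Python fragment compares the software found on a machine against an approved list; the inventory function, package names, and approved list are all hypothetical.

    # Illustrative sketch: flag software not on the organization's approved list.
    # The approved list and the inventory below are hypothetical examples.
    APPROVED_SOFTWARE = {"wordproc 2.1", "spreadsheet 3.0", "mail 1.4"}

    def installed_software():
        """Stand-in for a real inventory of the packages installed on this machine."""
        return ["wordproc 2.1", "game 0.9", "mail 1.4"]

    def unofficial_software():
        """Return installed packages that are not on the approved list."""
        return [pkg for pkg in installed_software() if pkg not in APPROVED_SOFTWARE]

    if __name__ == "__main__":
        for pkg in unofficial_software():
            print(f"Unapproved software found: {pkg} -- refer to the issue-specific policy")

Who runs such a check, and what happens when it finds something, are exactly the roles, responsibilities, and compliance questions discussed next.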
\nOne way to help ensure this is to coordinate\npolicies during development with other\norganizational offices.\nlocations. Additionally, the policy's applicability to employees travelling among different sites\nand/or working at home who need to transport and use disks at multiple sites might need to be\nclarified. \n \nRoles and Responsibilities. The assignment of roles and responsibilities is also usually included in\nissue-specific policies. For example, if the policy\npermits unofficial software privately owned by\nemployees to be used at work with the appropriate\napprovals, then the approval authority granting\nsuch permission would need to be stated. (Policy\nwould stipulate, who, by position, has such\nauthority.) Likewise, it would need to be clarified\nwho would be responsible for ensuring that only\napproved software is used on organizational\ncomputer resources and, perhaps, for monitoring\nusers in regard to unofficial software. \n \nCompliance. For some types of policy, it may be\nappropriate to describe, in some detail, the\ninfractions that are unacceptable, and the\nconsequences of such behavior. Penalties may be\nexplicitly stated and should be consistent with\norganizational personnel policies and practices. \nWhen used, they should be coordinated with\nappropriate officials and offices and, perhaps,\nemployee bargaining units. It may also be\ndesirable to task a specific office within the\norganization to monitor compliance. \nPoints of Contact and Supplementary\nInformation. For any issue-specific policy, the\nappropriate individuals in the organization to\ncontact for further information, guidance, and\ncompliance should be indicated. Since positions\ntend to change less often than the people\noccupying them, specific positions may be\npreferable as the point of contact. For example,\nfor some issues the point of contact might be a\nline manager; for other issues it might be a facility\nmanager, technical support person, system administrator, or security program representative. \nUsing the above example once more, employees would need to know whether the point of contact\nfor questions and procedural information would be their immediate superior, a system\n" }, { "page_number": 52, "text": "II. Management Controls\n It is important to remember that policy is not created in a vacuum. For example, it is critical to understand\n54\nthe system mission and how the system is intended to be used. Also, users may play an important role in setting\npolicy.\n40\nSystem-specific security policy includes two\ncomponents: security objectives and operational\nsecurity rules. It is often accompanied by\nimplementing procedures and guidelines. \nSample Security Objective\nOnly individuals in the accounting and personnel\ndepartments are authorized to provide or modify\ninformation used in payroll processing. \nadministrator, or a computer security official. \nGuidelines and procedures often accompany policy. The issue-specific policy on unofficial\nsoftware, for example, might include procedural guidelines for checking disks brought to work\nthat had been used by employees at other locations.\n5.3 System-Specific Policy\nProgram policy and issue-specific policy both address policy from a broad level, usually\nencompassing the entire organization. However, they do not provide sufficient information or\ndirection, for example, to be used in establishing an access control list or in training users on what\nactions are permitted. System-specific policy fills this need. 
Many security policy decisions may apply only at the system level and may vary from system to system within the same organization. While these decisions may appear to be too detailed to be policy, they can be extremely important, with significant impacts on system usage and security. These types of decisions can be made by a management official, not by a technical system administrator. (The impacts of these decisions, however, are often analyzed by technical system administrators.) [It is important to remember that policy is not created in a vacuum. For example, it is critical to understand the system mission and how the system is intended to be used. Also, users may play an important role in setting policy.]

To develop a cohesive and comprehensive set of security policies, officials may use a management process that derives security rules from security goals. It is helpful to consider a two-level model for system security policy: security objectives and operational security rules, which together comprise the system-specific policy. Closely linked and often difficult to distinguish, however, is the implementation of the policy in technology.

System-specific security policy includes two components: security objectives and operational security rules. It is often accompanied by implementing procedures and guidelines.

5.3.1 Security Objectives

The first step in the management process is to define security objectives for the specific system. Although this process may start with an analysis of the need for integrity, availability, and confidentiality, it should not stop there. A security objective needs to be more specific; it should be concrete and well defined. It also should be stated so that it is clear that the objective is achievable. This process will also draw upon other applicable organization policies.

Sample Security Objective: Only individuals in the accounting and personnel departments are authorized to provide or modify information used in payroll processing.

Security objectives consist of a series of statements that describe meaningful actions about explicit resources. These objectives should be based on system functional or mission requirements, but should state the security actions that support the requirements.

Development of system-specific policy will require management to make trade-offs, since it is unlikely that all desired security objectives will be able to be fully met. Management will face cost, operational, technical, and other constraints.

5.3.2 Operational Security Rules

After management determines the security objectives, the rules for operating a system can be laid out, for example, to define authorized and unauthorized modification: who (by job category, organization placement, or name) can do what (e.g., modify, delete) to which specific classes and records of data, and under what conditions.

Sample Operational Security Rule: Personnel clerks may update fields for weekly attendance, charges to annual leave, employee addresses, and telephone numbers. Personnel specialists may update salary information. No employees may update their own records.

The degree of specificity needed for operational security rules varies greatly. The more detailed the rules are, up to a point, the easier it is to know when one has been violated. It is also, up to a point, easier to automate policy enforcement. However, overly detailed rules may make the job of instructing a computer to implement them difficult or computationally complex.

In addition to deciding the level of detail, management should decide the degree of formality in documenting the system-specific policy. Once again, the more formal the documentation, the easier it is to enforce and to follow policy. On the other hand, policy at the system level that is too detailed and formal can also be an administrative burden. In general, good practice suggests a reasonably detailed formal statement of the access privileges for a system. Documenting access controls policy will make it substantially easier to follow and to enforce. (See Chapters 10 and 17, Personnel/User Issues and Logical Access Control.)
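To illustrate, the sample operational security rule above could be written as a small, machine-checkable access table. The following Python sketch is purely illustrative and is not part of the handbook's guidance; the job categories and field names come from the sample rule, while the function and data structure are hypothetical.

    # A minimal sketch of the sample operational security rule as an access-control table.
    # Job categories and field names follow the sample rule; everything else is hypothetical.
    UPDATE_RULES = {
        "personnel_clerk": {"weekly_attendance", "annual_leave", "address", "telephone"},
        "personnel_specialist": {"salary"},
    }

    def may_update(job_category, field, actor_id, record_owner_id):
        """Return True if this job category may update this field on this record."""
        if actor_id == record_owner_id:
            return False  # no employees may update their own records
        return field in UPDATE_RULES.get(job_category, set())

    # Example checks against the sample rule:
    assert may_update("personnel_clerk", "address", "emp1", "emp2")
    assert not may_update("personnel_clerk", "salary", "emp1", "emp2")
    assert not may_update("personnel_specialist", "salary", "emp1", "emp1")

Writing the rule this explicitly shows the trade-off discussed above: the table is easy to enforce automatically, but every new field or job category requires a policy decision and a corresponding update.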
Another area that normally requires a detailed and formal statement is the assignment of security responsibilities. Other areas that should be addressed are the rules for system usage and the consequences of noncompliance.

Policy decisions in other areas of computer security, such as those described in this handbook, are often documented in the risk analysis, accreditation statements, or procedural manuals. However, any controversial, atypical, or uncommon policies will also need formal statements. Atypical policies would include any areas where the system policy is different from organizational policy or from normal practice within the organization, either more or less stringent. The documentation for an atypical policy contains a statement explaining the reason for deviation from the organization's standard policy.

5.3.3 System-Specific Policy Implementation

Technology plays an important but not sole role in enforcing system-specific policies. When technology is used to enforce policy, it is important not to neglect nontechnology-based methods. For example, technical system-based controls could be used to limit the printing of confidential reports to a particular printer. However, corresponding physical security measures would also have to be in place to limit access to the printer output, or the desired security objective would not be achieved.

Technical methods frequently used to implement system-security policy are likely to include the use of logical access controls. However, there are other automated means of enforcing or supporting security policy that typically supplement logical access controls. For example, technology can be used to block telephone users from calling certain numbers. Intrusion-detection software can alert system administrators to suspicious activity or can take action to stop the activity. Personal computers can be configured to prevent booting from a floppy disk.

Technology-based enforcement of system-security policy has both advantages and disadvantages. A computer system, properly designed, programmed, installed, configured, and maintained, consistently enforces policy within the computer system, although no computer can force users to follow all procedures. [Doing all of these things properly is, unfortunately, the exception rather than the rule. Confidence in the system's ability to enforce system-specific policy is closely tied to assurance. (See Chapter 9, Assurance.)] Management controls also play an important role and should not be neglected. In addition, deviations from the policy may sometimes be necessary and appropriate; such deviations may be difficult to implement easily with some technical controls. This situation occurs frequently if implementation of the security policy is too rigid (which can occur when the system analysts fail to anticipate contingencies and prepare for them).
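As a small, hypothetical sketch of the printing example above (not an implementation from the handbook), a print-spooler hook might route jobs marked confidential only to a designated printer; the printer names and job structure are invented for illustration.

    # Hypothetical sketch: confidential reports may print only on one designated printer.
    SECURE_PRINTER = "room-101-secure"

    def route_print_job(job):
        """Return the printer a job may use, enforcing the confidentiality rule."""
        if job.get("confidential"):
            return SECURE_PRINTER  # physical access to this printer is controlled separately
        return job.get("requested_printer", "default")

    print(route_print_job({"confidential": True, "requested_printer": "lobby"}))
    # -> room-101-secure

As the text notes, the technical control only helps if the corresponding physical control -- limiting who can collect output from the secure printer -- also exists.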
Both program and system-specific policy may be established in any of the areas covered in this handbook. For example, an organization may wish to have a consistent approach to incident handling for all its systems and would issue appropriate program policy to do so. On the other hand, it may decide that its applications are sufficiently independent of each other that application managers should deal with incidents on an individual basis.

Access Controls. System-specific policy is often implemented through the use of access controls. For example, it may be a policy decision that only two individuals in an organization are authorized to run a check-printing program. Access controls are used by the system to implement (or enforce) this policy.

Links to Broader Organizational Policies. This chapter has focused on the types and components of computer security policy. However, it is important to realize that computer security policies are often extensions of an organization's information security policies for handling information in other forms (e.g., paper documents). For example, an organization's e-mail policy would probably be tied to its broader policy on privacy. Computer security policies may also be extensions of other policies, such as those about appropriate use of equipment and facilities.

5.5 Cost Considerations

A number of potential costs are associated with developing and implementing computer security policies. Overall, the major cost of policy is the cost of implementing the policy and its impacts upon the organization. For example, establishing a computer security program, accomplished through policy, does not come at negligible cost.

Other costs may be those incurred through the policy development process. Numerous administrative and management activities may be required for drafting, reviewing, coordinating, clearing, disseminating, and publicizing policies. In many organizations, successful policy implementation may require additional staffing and training and can take time. In general, the costs to an organization for computer security policy development and implementation will depend upon how extensive a change is needed to achieve a level of risk acceptable to management.

References

Howe, D. "Information System Security Engineering: Cornerstone to the Future." Proceedings of the 15th National Computer Security Conference. Baltimore, MD, Vol. 1, October 15, 1992. pp. 244-251.

Fites, P., and M. Kratz. "Policy Development." Information Systems Security: A Practitioner's Reference. New York, NY: Van Nostrand Reinhold, 1993. pp. 411-427.

Lobel, J. "Establishing a System Security Policy." Foiling the System Breakers. New York, NY: McGraw-Hill, 1986. pp. 57-95.

Menkus, B. "Concerns in Computer Security." Computers and Security. 11(3), 1992. pp. 211-215.

Office of Technology Assessment. "Federal Policy Issues and Options." Defending Secrets, Sharing Data: New Locks for Electronic Information. Washington, DC: U.S. Congress, Office of Technology Assessment, 1987. pp. 151-160.

Office of Technology Assessment. "Major Trends in Policy Development." Defending Secrets, Sharing Data: New Locks and Keys for Electronic Information. Washington, DC: U.S. Congress, Office of Technology Assessment, 1987. pp. 131-148.

O'Neill, M., and F. Henninge, Jr. "Understanding ADP System and Network Security Considerations and Risk Analysis." ISSA Access. 5(4), 1992. pp. 14-17.

Peltier, Thomas. "Designing Information Security Policies That Get Results." Infosecurity News. 4(2), 1993. pp. 30-31.

President's Council on Management Improvement and the President's Council on Integrity and Efficiency. Model Framework for Management Control Over Automated Information System. Washington, DC: President's Council on Management Improvement, January 1988.

Smith, J. "Privacy Policies and Practices: Inside the Organizational Maze." Communications of the ACM. 36(12), 1993. pp. 104-120.

Sterne, D. F. "On the Buzzword 'Computer Security Policy.'" In Proceedings of the 1991 IEEE Symposium on Security and Privacy, Oakland, CA: May 1991. pp. 219-230.

Wood, Charles Cresson. "Designing Corporate Information Security Policies." DATAPRO Reports on Information Security, April 1992.

Chapter 6
COMPUTER SECURITY PROGRAM MANAGEMENT

OMB Circular A-130, "Management of Federal Information Resources," requires that federal agencies establish computer security programs.

Computers and the information they process are critical to many organizations' ability to perform their mission and business functions. (This chapter is primarily directed at federal agencies, which are generally very large and complex organizations; it discusses programs suited to managing security in such environments, which may be wholly inappropriate for smaller organizations or private sector firms.) It therefore makes sense that executives view computer security as a management issue and seek to protect their organization's computer resources as they would any other valuable asset. To do this effectively requires the development of a comprehensive management approach.

This chapter presents an organizationwide approach to computer security and discusses its important management function. (It addresses the management of security programs, not the various activities, such as risk analysis or contingency planning, that make up an effective security program.) Because organizations differ vastly in size, complexity, management styles, and culture, it is not possible to describe one ideal computer security program. However, this chapter does describe some of the features and issues common to many federal organizations.

6.1 Structure of a Computer Security Program

Many computer security programs that are distributed throughout the organization have different elements performing various functions. While this approach has benefits, the distribution of the computer security function in many organizations is haphazard, usually based upon history (i.e., who was available in the organization to do what when the need arose). Ideally, the distribution of computer security functions should result from a planned and integrated management philosophy.

Managing computer security at multiple levels brings many benefits. Each level contributes to the overall computer security program with different types of expertise, authority, and resources. In general, higher-level officials (such as those at the headquarters or unit levels in the agency described above) better understand the organization as a whole and have more authority.
On the other hand, lower-level officials (at the computer facility and applications levels) are more familiar with the specific requirements, both technical and procedural, and problems of the systems and the users. The levels of computer security program management should be complementary; each can help the other be more effective.

Figure 6.1

Since many organizations have at least two levels of computer security management, this chapter divides computer security program management into two levels: the central level and the system level. (Each organization, though, may have its own unique structure.) The central computer security program can be used to address the overall management of computer security within an organization or a major component of an organization. The system-level computer security program addresses the management of computer security for a particular system.

Figure 6.2

6.2 Central Computer Security Programs

The purpose of a central computer security program is to address the overall management of computer security within an organization. In the federal government, the organization could consist of a department, agency, or other major operating unit.

As with the management of all resources, central computer security management can be performed in many practical and cost-effective ways. The importance of sound management cannot be overemphasized. There is also a downside to centrally managed computer security programs. Specifically, they present greater risk that errors in judgement will be more widely propagated throughout the organization. As they strive to meet their objectives, managers need to consider the full impact of available options when establishing their computer security programs.

6.2.1 Benefits of Central Computer Security Programs

A central security program should provide two quite distinct types of benefits:

increased efficiency and economy of security throughout the organization, and
the ability to provide centralized enforcement and oversight.

Both of these benefits are in keeping with the purpose of the Paperwork Reduction Act, as implemented in OMB Circular A-130 (Section 5; Appendix III, Section 3): "The Paperwork Reduction Act establishes a broad mandate for agencies to perform their information management activities in an efficient, effective, and economical manner... . Agencies shall assure an adequate level of security for all agency automated information systems, whether maintained in-house or commercially."

6.2.2 Efficient, Economic Coordination of Information

A central computer security program helps to coordinate and manage effective use of security-related resources throughout the organization. The most important of these resources are normally information and financial resources.

Sound and timely information is necessary for managers to accomplish their tasks effectively. However, most organizations have trouble collecting information from myriad sources and effectively processing and distributing it within the organization. This section discusses some of the sources and efficient uses of computer security information.

Within the federal government, many organizations, such as the Office of Management and Budget, the General Services Administration, the National Institute of Standards and Technology, and the National Telecommunications and Information Administration, provide information on computer, telecommunications, or information resources. This information includes security-related policy, regulations, standards, and guidance. A portion of the information is channelled through the senior designated official for each agency (see Federal Information Resources Management Regulation [FIRMR] Part 201-2). Agencies are expected to have mechanisms in place to distribute the information the senior designated official receives.

Computer security-related information is also available from private and federal professional societies and groups. These groups will often provide the information as a public service, although some private groups charge a fee for it. However, even for information that is free or inexpensive, the costs associated with personnel gathering the information can be high.

Internal security-related information, such as which procedures were effective, virus infections, security problems, and solutions, needs to be shared within an organization. Often this information is specific to the operating environment and culture of the organization.

A computer security program administered at the organization level can provide a way to collect the internal security-related information and distribute it as needed throughout the organization. Sometimes an organization can also share this information with external groups. (See Figure 6.3.)

Another use of an effective conduit of information is to increase the central computer security program's ability to influence external and internal policy decisions. If the central computer security program office can represent the entire organization, then its advice is more likely to be heeded by upper management and external organizations. However, to be effective, there should be excellent communication between the system-level computer security programs and the organization level. For example, if an organization were considering consolidating its mainframes into one site (or considering distributing the processing currently done at one site), personnel at the central program could provide initial opinions about the security implications. However, to speak authoritatively, central program personnel would have to actually know the security impacts of the proposed change -- information that would have to be obtained from the system-level computer security program.

An organization's components may develop specialized expertise, which can be shared among components. For example, one operating unit may primarily use UNIX and have developed skills in UNIX security. A second operating unit (with only one UNIX machine) may concentrate on MVS security and rely on the first unit's knowledge and skills for its UNIX machine.

Besides being able to help an organization use information more cost-effectively, a computer security program can also help an organization better spend its scarce security dollars. Organizations can develop expertise and then share it, reducing the need to contract out repeatedly for similar services. The central computer security program can help facilitate information sharing.

Figure 6.3

Personnel at the central computer security program level can also develop their own areas of expertise. For example, they could sharpen their skills in contingency planning and risk analysis to help the entire organization perform these vital security functions.

Besides allowing an organization to share expertise and, therefore, save money, a central computer security program can use its position to consolidate requirements so the organization can negotiate discounts based on volume purchasing of security hardware and software. It also facilitates such activities as strategic planning and organizationwide incident handling and security trend analysis.

6.2.3 Central Enforcement and Oversight

Besides helping an organization improve the economy and efficiency of its computer security program, a centralized program can include an independent evaluation or enforcement function to ensure that organizational subunits are cost-effectively securing resources and following applicable policy. While the Office of the Inspector General (OIG) and external organizations, such as the General Accounting Office (GAO), also perform a valuable evaluation role, they operate outside the regular management channels. Chapters 8 and 9 further discuss the role of independent audit.

There are several reasons for having an oversight function within the regular management channel. First, computer security is an important component in the management of organizational resources. This is a responsibility that cannot be transferred or abandoned. Second, maintaining an internal oversight function allows an organization to find and correct problems without the potential embarrassment of an IG or GAO audit or investigation. Third, the organization may find different problems from those that an outside organization may find. The organization understands its assets, threats, systems, and procedures better than an external organization; additionally, people may have a tendency to be more candid with insiders.

6.3 Elements of an Effective Central Computer Security Program

For a central computer security program to be effective, it should be an established part of organization management. If system managers and applications owners do not need to consistently interact with the security program, then it can become an empty token of upper management's "commitment to security."

Stable Program Management Function. A well-established program will have a program manager recognized within the organization as the central computer security program manager. In addition, the program will be staffed with able personnel, and links will be established between the program management function and computer security personnel in other parts of the organization. A computer security program is a complex function that needs a stable base from which to direct the management of such security resources as information and money. The benefits of an oversight function cannot be achieved if the computer security program is not recognized within an organization as having expertise and authority.

Stable Resource Base. A well-established program will have a stable resource base in terms of personnel, funds, and other support. Without a stable resource base, it is impossible to plan and execute programs and projects effectively.

Existence of Policy. Policy provides the foundation for the central computer security program and is the means for documenting and promulgating important decisions about computer security. A central computer security program should also publish standards, regulations, and guidelines that implement and expand on policy. (See Chapter 5.)

Published Mission and Functions Statement. A published mission statement grounds the central computer security program into the unique operating environment of the organization. The statement clearly establishes the function of the computer security program and defines responsibilities for both the computer security program and other related programs and entities. Without such a statement, it is impossible to develop criteria for evaluating the effectiveness of the program.

Long-Term Computer Security Strategy. A well-established program explores and develops long-term strategies to incorporate computer security into the next generation of information technology. Since the computer and telecommunications field moves rapidly, it is essential to plan for future operating environments.

Example: Agency IRM offices engage in strategic and tactical planning for both information and information technology, in accordance with the Paperwork Reduction Act and OMB Circular A-130. Security should be an important component of these plans. The security needs of the agency should be reflected in the information technology choices, and the information needs of the agency should be reflected in the security program.

Compliance Program. A central computer security program needs to address compliance with national policies and requirements, as well as organization-specific requirements. National requirements include those prescribed under the Computer Security Act of 1987, OMB Circular A-130, the FIRMR, and Federal Information Processing Standards.

Intraorganizational Liaison. Many offices within an organization can affect computer security. The Information Resources Management organization and physical security office are two obvious examples. However, computer security often overlaps with other offices, such as safety, reliability and quality assurance, internal control, or the Office of the Inspector General. An effective program should have established relationships with these groups in order to integrate computer security into the organization's management. The relationships should encompass more than just the sharing of information; the offices should influence each other.

Liaison with External Groups. There are many sources of computer security information, such as NIST's Computer Security Program Managers' Forum and computer security clearinghouse, and the Forum of Incident Response and Security Teams (FIRST). An established program will be knowledgeable of and will take advantage of external sources of information.
It will also be a provider of information.

6.4 System-Level Computer Security Programs

While the central program addresses the entire spectrum of computer security for an organization, system-level programs ensure appropriate and cost-effective security for each system. (As is implied by the name, an organization will typically have several system-level computer security programs. In setting up these programs, the organization should carefully examine the scope of each one; system-level computer security programs may address, for example, the computing resources within an operational element, a major application, or a group of similar systems, either technologically or functionally.) This includes influencing decisions about what controls to implement, purchasing and installing technical controls, day-to-day computer security administration, evaluating system vulnerabilities, and responding to security problems. It encompasses all the areas discussed in the handbook.

System-level computer security program personnel are the local advocates for computer security. The system security manager/officer raises the issue of security with the cognizant system manager and helps develop solutions for security problems. For example, has the application owner made clear the system's security requirements? Will bringing a new function online affect security, and if so, how? Is the system vulnerable to hackers and viruses? Has the contingency plan been tested? Raising these kinds of questions will force system managers and application owners to identify and address their security requirements.

6.5 Elements of Effective System-Level Programs

As with the central computer security program, many factors influence the success of a system-level computer security program. Many of these are similar to those of the central program. This section addresses some additional considerations.

Security Plans. The Computer Security Act mandates that agencies develop computer security and privacy plans for sensitive systems. These plans ensure that each federal and federal interest system has appropriate and cost-effective security. System-level security personnel should be in a position to develop and implement security plans. Chapter 8 discusses the plans in more detail.

System-Specific Security Policy. Many computer security policy issues need to be addressed on a system-specific basis. The issues can vary for each system, although access control and the designation of personnel with security responsibility are likely to be needed for all systems. A cohesive and comprehensive set of security policies can be developed by using a process that derives security rules from security goals, as discussed in Chapter 5.

Life Cycle Management. As discussed in Chapter 8, security must be managed throughout a system's life cycle. This specifically includes ensuring that changes to the system are made with attention to security and that accreditation is accomplished.

Integration With System Operations. The system-level computer security program should consist of people who understand the system, its mission, its technology, and its operating environment. Effective security management usually needs to be integrated into the management of the system. Effective integration will ensure that system managers and application owners consider security in the planning and operation of the system. The system security manager/officer should be able to participate in the selection and implementation of appropriate technical controls and security procedures and should understand system vulnerabilities.
Also, the system-level computer security program should be capable of responding to security problems in a timely manner.

For large systems, such as a mainframe data center, the security program will often include a manager and several staff positions in such areas as access control, user administration, and contingency and disaster planning. For small systems, such as an officewide local area network (LAN), the LAN administrator may have adjunct security responsibilities.

Separation From Operations. A natural tension often exists between computer security and operational elements. In many instances, operational components -- which tend to be far larger and therefore more influential -- seek to resolve this tension by embedding the computer security program in computer operations. The typical result of this organizational strategy is a computer security program that lacks independence, has minimal authority, receives little management attention, and has few resources. As early as 1978, GAO identified this organizational mode as one of the principal basic weaknesses in federal agency computer security programs. (General Accounting Office, "Automated System Security -- Federal Agencies Should Strengthen Safeguards Over Personal and Other Sensitive Data," GAO Report LCD 78-123, Washington, DC, 1978.) System-level programs face this problem most often.

This conflict between the need to be a part of system management and the need for independence has several solutions. The basis of many of the solutions is a link between the computer security program and upper management, often through the central computer security program. A key requirement of this setup is the existence of a reporting structure that does not include system management. Another possibility is for the computer security program to be completely independent of system management and to report directly to higher management. There are many hybrids and permutations, such as co-location of computer security and systems management staff but separate reporting (and supervisory) structures. Figure 6.4 presents one example of placement of the computer security program within a typical federal agency. (No implication that this structure is ideal is intended.)

Figure 6.4

6.6 Central and System-Level Program Interactions

A system-level program that is not integrated into the organizational program may have difficulty influencing significant areas affecting security. The system-level computer security program implements the policies, guidance, and regulations of the central computer security program. The system-level office also learns from the information disseminated by the central program and uses the experience and expertise of the entire organization. The system-level computer security program further distributes information to systems management as appropriate.

Communications, however, should not be just one way. System-level computer security programs inform the central office about their needs, problems, incidents, and solutions. Analyzing this information allows the central computer security program to represent the various systems to the organization's management and to external agencies and advocate programs and policies beneficial to the security of all the systems.

6.7 Interdependencies

The general purpose of the computer security program, to improve security, causes it to overlap with other organizational operations as well as the other security controls discussed in the handbook.
The central or system computer security program will address most controls at the policy, procedural, or operational level.

Policy. Policy is issued to establish the computer security program. The central computer security program(s) normally produces policy (and supporting procedures and guidelines) concerning general and organizational security issues and often issue-specific policy. However, the system-level computer security program normally produces policy for that system. Chapter 5 provides additional guidance.

Life Cycle Management. The process of securing a system over its life cycle is the role of the system-level computer security program. Chapter 8 addresses these issues.

Independent Audit. The independent audit function described in Chapters 8 and 9 should complement a central computer security program's compliance functions.

6.8 Cost Considerations

This chapter discussed how an organizationwide computer security program can manage security resources, including financial resources, more effectively. The cost considerations for a system-level computer security program are more closely aligned with the overall cost savings in having security.

The most significant direct cost of a computer security program is personnel. In addition, many programs make frequent and effective use of consultants and contractors. A program also needs funds for training and for travel, oversight, information collection and dissemination, and meetings with personnel at other levels of computer security management.

References

Federal Information Resources Management Regulations, especially 201-2. General Services Administration. Washington, DC.

General Accounting Office. Automated Systems Security -- Federal Agencies Should Strengthen Safeguards Over Personal and Other Sensitive Data. GAO Report LCD 78-123. Washington, DC. 1978.

General Services Administration. Information Resources Security: What Every Federal Manager Should Know. Washington, DC.

Helsing, C., M. Swanson, and M. Todd. Executive Guide to the Protection of Information Resources. Special Publication 500-169. Gaithersburg, MD: National Institute of Standards and Technology, 1989.

Helsing, C., M. Swanson, and M. Todd. Management Guide for the Protection of Information Resources. Special Publication 500-170. Gaithersburg, MD: National Institute of Standards and Technology, 1989.

"Managing an Organization Wide Security Program." Computer Security Institute, San Francisco, CA. (course)

Office of Management and Budget. "Guidance for Preparation of Security Plans for Federal Computer Systems That Contain Sensitive Information." OMB Bulletin 90-08. Washington, DC, 1990.

Office of Management and Budget. Management of Federal Information Resources. OMB Circular A-130.

Owen, R., Jr. "Security Management: Using the Quality Approach." Proceedings of the 15th National Computer Security Conference. Baltimore, MD: Vol. 2, 1992. pp. 584-592.

Spiegel, L. "Good LAN Security Requires Analysis of Corporate Data." Infoworld. 15(52), 1993. p. 49.

U.S. Congress. Computer Security Act of 1987. Public Law 100-235. 1988.

Chapter 7
COMPUTER SECURITY RISK MANAGEMENT

Management is concerned with many types of risk.
\nComputer security risk management addresses\nrisks which arise from an organization's use of\ninformation technology.\nRisk assessment often produces an important side\nbenefit -- indepth knowledge about a system and\nan organization as risk analysts try to figure out\nhow systems and functions are interrelated.\nChapter 7\nCOMPUTER SECURITY RISK MANAGEMENT\nRisk is the possibility of something adverse happening. Risk management is the process of\nassessing risk, taking steps to reduce risk to an acceptable level and maintaining that level of risk. \nThough perhaps not always aware of it, individuals manage risks every day. Actions as routine as\nbuckling a car safety belt, carrying an umbrella when rain is forecast, or writing down a list of\nthings to do rather than trusting to memory fall into the purview of risk management. People\nrecognize various threats to their best interests and take precautions to guard against them or to\nminimize their effects.\nBoth government and industry routinely\nmanage a myriad of risks. For example, to\nmaximize the return on their investments,\nbusinesses must often decide between\naggressive (but high-risk) and slow-growth\n(but more secure) investment plans. These\ndecisions require analysis of risk, relative to\npotential benefits, consideration of alternatives, and, finally, implementation of what management\ndetermines to be the best course of action.\nWhile there are many models and methods for\nrisk management, there are several basic\nactivities and processes that should be\nperformed. In discussing risk management, it\nis important to recognize its basic, most\nfundamental assumption: computers cannot\never be fully secured. There is always risk,\nwhether it is from a trusted employee who defrauds the system or a fire that destroys critical\nresources. Risk management is made up of two primary and one underlying activities; risk\nassessment and risk mitigation are the primary activities and uncertainty analysis is the underlying\none.\n7.1 Risk Assessment\nRisk assessment, the process of analyzing and interpreting risk, is comprised of three basic\nactivities: (1) determining the assessment's scope and methodology; (2) collecting and analyzing\n" }, { "page_number": 72, "text": "II. Management Controls\n Many different terms are used to describe risk management and its elements. The definitions used in this\n62\npaper are based on the NIST Risk Management Framework.\n60\nA risk assessment can focus on many different\nareas such as: technical and operational controls to\nbe designed into a new application, the use of\ntelecommunications, a data center, or an entire\norganization.\nGood documentation of risk assessments will make\nlater risk assessments less time consuming and, if a\nquestion arises, will help explain why particular\nsecurity decisions were made.\ndata; and 3) interpreting the risk analysis results.62\n7.1.1 Determining the Assessment's Scope and Methodology\nThe first step in assessing risk is to identify the system under consideration, the part of the system\nthat will be analyzed, and the analytical method including its level of detail and formality. \nThe assessment may be focused on certain\nareas where either the degree of risk is\nunknown or is known to be high. Different\nparts of a system may be analyzed in greater\nor lesser detail. Defining the scope and\nboundary can help ensure a cost-effective\nassessment. 

Factors that influence scope include what phase of the life cycle a system is in: more detail might be appropriate for a new system being developed than for an existing system undergoing an upgrade. Another factor is the relative importance of the system under examination: the more essential the system, the more thorough the risk analysis should be. A third factor may be the magnitude and types of changes the system has undergone since the last risk analysis. The addition of new interfaces would warrant a different scope than would installing a new operating system.

Methodologies can be formal or informal, detailed or simplified, high or low level, quantitative (computationally based) or qualitative (based on descriptions or rankings), or a combination of these. No single method is best for all users and all environments.

How the boundary, scope, and methodology are defined will have major consequences in terms of (1) the total amount of effort spent on risk management and (2) the type and usefulness of the assessment's results. The boundary and scope should be selected in a way that will produce an outcome that is clear, specific, and useful to the system and environment under scrutiny.

7.1.2 Collecting and Analyzing Data

Risk has many different components: assets, threats, vulnerabilities, safeguards, consequences, and likelihood. Examining these components normally includes gathering data about the threatened area and synthesizing and analyzing the information to make it useful.

Because it is possible to collect much more information than can be analyzed, steps need to be taken to limit information gathering and analysis. This process is called screening. A risk management effort should focus on those areas that result in the greatest consequence to the organization (i.e., can cause the most harm). This can be done by ranking threats and assets; a small illustration of such a ranking is sketched below, following the discussion of threat identification.

A risk management methodology does not necessarily need to analyze each of the components of risk separately. For example, assets/consequences or threats/likelihoods may be analyzed together.

Asset Valuation. Assets include the information, software, personnel, hardware, and physical assets (such as the computer facility). The value of an asset consists of its intrinsic value and the near-term impacts and long-term consequences of its compromise.

Consequence Assessment. The consequence assessment estimates the degree of harm or loss that could occur. Consequences refers to the overall, aggregate harm that occurs, not just the near-term or immediate impacts. While such impacts often result in disclosure, modification, destruction, or denial of service, consequences are the more significant long-term effects, such as lost business, failure to perform the system's mission, loss of reputation, violation of privacy, injury, or loss of life. The more severe the consequences of a threat, the greater the risk to the system (and, therefore, the organization).

Threat Identification. A threat is an entity or event with the potential to harm the system. Typical threats are errors, fraud, disgruntled employees, fires, water damage, hackers, and viruses. Threats should be identified and analyzed to determine the likelihood of their occurrence and their potential to harm assets.

In addition to looking at "big-ticket" threats, the risk analysis should investigate areas that are poorly understood, new, or undocumented.
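
The ranking-based screening described above can be made concrete with a minimal sketch. The threat/asset pairs and the 1-to-5 ratings below are invented for illustration; a real assessment would use whatever qualitative or quantitative scale its chosen methodology prescribes.

# Illustrative screening: rank threat/asset pairs so that detailed
# analysis concentrates on the areas of greatest potential harm.
# All scores are invented ratings on a 1 (low) to 5 (high) scale.

candidate_areas = [
    # (threat, asset, likelihood, consequence)
    ("fire",              "data center",       2, 5),
    ("data entry errors", "personnel records", 5, 2),
    ("hacker break-in",   "payroll system",    3, 4),
    ("virus infection",   "office LAN",        4, 2),
    ("water damage",      "tape library",      1, 3),
]

def score(area):
    """Rough screening score: likelihood times consequence."""
    _, _, likelihood, consequence = area
    return likelihood * consequence

# Rank all candidate areas, then screen out all but the top three.
ranked = sorted(candidate_areas, key=score, reverse=True)
for threat, asset, likelihood, consequence in ranked[:3]:
    print(f"analyze in detail: {threat} vs. {asset} "
          f"(score {likelihood * consequence})")

The point is not the particular scores but that even a rough likelihood-times-consequence ranking lets the analysis effort concentrate where the potential harm is greatest.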

If a facility has a well-tested physical access control system, less effort to identify threats may be warranted for it than for unclear, untested software backup procedures.

The risk analysis should concentrate on those threats most likely to occur and affect important assets. In some cases, determining which threats are realistic is not possible until after the threat analysis is begun. Chapter 4 provides additional discussion of today's most prevalent threats.

Safeguard Analysis. A safeguard is any action, device, procedure, technique, or other measure that reduces a system's vulnerability to a threat. Safeguard analysis should include an examination of the effectiveness of the existing security measures. It can also identify new safeguards that could be implemented in the system; however, this is normally performed later in the risk management process.

Figure 7.1 (Threats, Vulnerabilities, Safeguards, and Assets): Safeguards prevent threats from harming assets. However, if an appropriate safeguard is not present, a vulnerability exists which can be exploited by a threat, thereby putting assets at risk.

Vulnerability Analysis. A vulnerability is a condition or weakness in (or absence of) security procedures, technical controls, physical controls, or other controls that could be exploited by a threat. Vulnerabilities are often analyzed in terms of missing safeguards. Vulnerabilities contribute to risk because they may "allow" a threat to harm the system.

The interrelationship of vulnerabilities, threats, and assets is critical to the analysis of risk. Some of these interrelationships are pictured in Figure 7.1. However, there are other interrelationships, such as the presence of a vulnerability inducing a threat. (For example, a normally honest employee might be tempted to alter data when the employee sees that a terminal has been left logged on.)

Likelihood Assessment. Likelihood is an estimation of the frequency or chance of a threat happening. A likelihood assessment considers the presence, tenacity, and strengths of threats as well as the effectiveness of safeguards (or the presence of vulnerabilities). In general, historical information about many threats is weak, particularly with regard to human threats; thus, experience in this area is important. Some threat data -- especially on physical threats such as fires or floods -- is stronger. Care needs to be taken in using any statistical threat data; the source of the data or the analysis may be inaccurate or incomplete. In general, the greater the likelihood of a threat occurring, the greater the risk.

7.1.3 Interpreting Risk Analysis Results

The risk assessment is used to support two related functions: the acceptance of risk and the selection of cost-effective controls. (The NIST Risk Management Framework refers to risk interpretation as risk measurement; the term "interpretation" was chosen to emphasize the wide variety of possible outputs from a risk assessment.) To accomplish these functions, the risk assessment must produce a meaningful output that reflects what is truly important to the organization. Limiting the risk interpretation activity to the most significant risks is another way that the risk management process can be focused to reduce the overall effort while still yielding useful results. If risks are interpreted consistently across an organization, the results can be used to prioritize systems to be secured.

Risk Analysis Results: Risk analysis results are typically represented quantitatively and/or qualitatively. Quantitative measures may be expressed in terms of reduced expected monetary losses, such as annualized loss expectancies or single occurrences of loss. Qualitative measures are descriptive, expressed in terms such as high, medium, or low, or rankings on a scale of 1 to 10.

Risk management can help a manager select the most appropriate controls; however, it is not a magic wand that instantly eliminates all difficult issues. The quality of the output depends on the quality of the input and the type of analytical methodology used. In some cases, the amount of work required to achieve high-quality input will be too costly. In other cases, achieving high-quality input may be impossible, especially for such variables as the prevalence of a particular threat or the anticipated effectiveness of a proposed safeguard. For all practical purposes, complete information is never available; uncertainty is always present. Despite these drawbacks, risk management provides a very powerful tool for analyzing the risk associated with computer systems.

7.2 Risk Mitigation

Risk mitigation involves the selection and implementation of security controls to reduce risk to a level acceptable to management, within applicable constraints. Although there is flexibility in how risk assessment is conducted, the sequence of identifying boundaries, analyzing input, and producing an output is quite natural. The process of risk mitigation has greater flexibility, and the sequence will differ more, depending on organizational culture and the purpose of the risk management activity. Although these activities are discussed here in a specific sequence, they need not be performed in that sequence. In particular, the selection of safeguards and risk acceptance testing are likely to be performed simultaneously. (This is often viewed as a circular, iterative process.)

Figure 7.2

What Is a What If Analysis?

A what if analysis looks at the costs and benefits of various combinations of controls to determine the optimal combination for a particular circumstance. In this simple example (which addresses only one control), suppose that hacker break-ins alert agency computer security personnel to the security risks of using passwords. They may wish to consider replacing the password system with stronger identification and authentication mechanisms, or just strengthening their password procedures. First, the status quo is examined. The system in place puts minimal demands upon users and system administrators, but the agency has had three hacker break-ins in the last six months.

What if passwords are strengthened? Personnel may be required to change passwords more frequently or may be required to use a numeral or other nonalphabetic character in their password.
There are no direct monetary expenditures, but staff and administrative overhead (e.g., training and replacing forgotten passwords) is increased. Estimates, however, are that this will reduce the number of successful hacker break-ins to three or four per year.

What if stronger identification and authentication technology is used? The agency may wish to implement stronger safeguards in the form of one-time cryptographic-based passwords so that, even if a password were obtained, it would be useless. Direct costs may be estimated at $45,000, and yearly recurring costs at $8,000. An initial training program would be required, at a cost of $17,500. The agency estimates, however, that this would prevent virtually all break-ins.

Computer security personnel use the results of this analysis to make a recommendation to their management officer, who then weighs the costs and benefits, takes into account other constraints (e.g., budget), and selects a solution.

7.2.1 Selecting Safeguards

A primary function of computer security risk management is the identification of appropriate controls. In designing (or reviewing) the security of a system, it may be obvious that some controls should be added (e.g., because they are required by law or because they are clearly cost-effective). It may also be just as obvious that other controls may be too expensive (considering both monetary and nonmonetary factors). For example, it may be immediately apparent to a manager that closing and locking the door to a particular room that contains local area network equipment is a needed control, while posting a guard at the door would be too expensive and not user-friendly.

In every assessment of risk, there will be many areas for which it will not be obvious what kind of controls are appropriate. Even considering only monetary issues, such as whether a control would cost more than the loss it is supposed to prevent, the selection of controls is not simple. However, in selecting appropriate controls, managers need to consider many factors, including:

organizational policy, legislation, and regulation;
safety, reliability, and quality requirements;
system performance requirements;
timeliness, accuracy, and completeness requirements;
the life cycle costs of security measures;
technical requirements; and
cultural constraints.

One method of selecting safeguards uses a "what if" analysis, such as the one in the sidebar above; a worked version of the sidebar's numbers appears at the end of this subsection. With this method, the effect of adding various safeguards (and, therefore, reducing vulnerabilities) is tested to see what difference each makes with regard to cost, effectiveness, and other relevant factors, such as those listed above. Trade-offs among the factors can be seen. The analysis of trade-offs also supports the acceptance of residual risk, discussed below. This method typically involves multiple iterations of the risk analysis to see how the proposed changes affect the risk analysis result.

Another method is to categorize types of safeguards and recommend implementing them for various levels of risk. For example, stronger controls would be implemented on high-risk systems than on low-risk systems. This method normally does not require multiple iterations of the risk analysis.

As with other aspects of risk management, screening can be used to concentrate on the highest-risk areas.
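
The password example in the "What Is a What If Analysis?" sidebar can be reduced to a small computation. The following sketch uses the dollar figures and break-in rates from the sidebar; the assumed loss per break-in, the yearly overhead for strengthened password procedures, and the three-year horizon are invented parameters, so the numbers illustrate the mechanics of the comparison rather than any real recommendation.

# What-if comparison of the sidebar's three options over an assumed
# three-year horizon. LOSS_PER_BREAK_IN and the $5,000/yr password
# overhead are hypothetical figures, not from the sidebar.

LOSS_PER_BREAK_IN = 20_000   # assumed average loss per incident ($)
YEARS = 3                    # assumed planning horizon

options = {
    # name: (initial cost, recurring cost per year, break-ins per year)
    "status quo":         (0,               0,     6.0),  # 3 per 6 months
    "stronger passwords": (0,               5_000, 3.5),  # overhead assumed
    "one-time passwords": (45_000 + 17_500, 8_000, 0.0),  # direct + training
}

for name, (initial, recurring, break_ins) in options.items():
    expected_loss = break_ins * LOSS_PER_BREAK_IN * YEARS
    total = initial + recurring * YEARS + expected_loss
    print(f"{name:20s} expected total cost over {YEARS} years: ${total:,.0f}")

Under these assumed figures the stronger technology wins despite its up-front cost; with a lower loss per break-in or a shorter horizon the ranking can reverse, which is exactly the kind of trade-off a what if analysis is meant to expose.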

Screening can, for example, focus on risks with very severe consequences, such as a very high dollar loss or loss of life, or on the threats that are most likely to occur.

7.2.2 Accepting Residual Risk

At some point, management needs to decide if the operation of the computer system is acceptable, given the kind and severity of remaining risks. Many managers do not fully understand computer-based risk for several reasons: (1) the type of risk may be different from risks previously associated with the organization or function; (2) the risk may be technical and difficult for a lay person to understand; or (3) the proliferation and decentralization of computing power can make it difficult to identify key assets that may be at risk.

Risk acceptance, like the selection of safeguards, should take into account various factors besides those addressed in the risk assessment. In addition, risk acceptance should take into account the limitations of the risk assessment. (See the section below on uncertainty.) Risk acceptance is linked to the selection of safeguards since, in some cases, risk may have to be accepted because safeguards are too expensive (in either monetary or nonmonetary terms).

Within the federal government, the acceptance of risk is closely linked with the authorization to use a computer system, often called accreditation, discussed in Chapters 8 and 9. Accreditation is the acceptance of risk by management resulting in a formal approval for the system to become operational or remain so. As discussed earlier in this chapter, one of the two primary functions of risk management is the interpretation of risk for the purpose of risk acceptance.

7.2.3 Implementing Controls and Monitoring Effectiveness

Merely selecting appropriate safeguards does not reduce risk; those safeguards need to be effectively implemented. Moreover, to continue to be effective, risk management needs to be an ongoing process. This requires a periodic assessment and improvement of safeguards and re-analysis of risks. Chapter 8 discusses how periodic risk assessment is an integral part of the overall management of a system. (See especially the diagram on page 83.)

The risk management process normally produces security requirements that are used to design, purchase, build, or otherwise obtain safeguards or implement system changes. The integration of risk management into the life cycle process is discussed in Chapter 8.

7.3 Uncertainty Analysis

Risk management often must rely on speculation, best guesses, incomplete data, and many unproven assumptions. The uncertainty analysis attempts to document this so that the risk management results can be used knowledgeably. There are two primary sources of uncertainty in the risk management process: (1) a lack of confidence or precision in the risk management model or methodology and (2) a lack of sufficient information to determine the exact value of the elements of the risk model, such as threat frequency, safeguard effectiveness, or consequences.

While uncertainty is always present, it should not invalidate a risk assessment. Data and models, while imperfect, can be good enough for a given purpose.

The risk management framework presented in this chapter is a generic description of risk management elements and their basic relationships. For a methodology to be useful, it should further refine the relationships and offer some means of screening information.
In this process, assumptions may be made that do not accurately reflect the user's environment. This is especially evident in the case of safeguard selection, where the number of relationships among assets, threats, and vulnerabilities can become unwieldy.

The data are another source of uncertainty. Data for the risk analysis normally come from two sources: statistical data and expert analysis. Statistics and expert analysis can sound more authoritative than they really are. There are many potential problems with statistics. For example, the sample may be too small, other parameters affecting the data may not be properly accounted for, or the results may be stated in a misleading manner. In many cases, there may be insufficient data. When expert analysis is used to make projections about future events, it should be recognized that the projection is subjective and is based on assumptions made (but not always explicitly articulated) by the expert.

7.4 Interdependencies

Risk management touches on every control and every chapter in this handbook. It is, however, most closely related to life cycle management and the security planning process. The requirement to perform risk management is often discussed in organizational policy and is an issue for organizational oversight. These issues are discussed in Chapters 5 and 6.

7.5 Cost Considerations

The building blocks of risk management presented in this chapter can be used creatively to develop methodologies that concentrate expensive analysis work where it is most needed. Risk management can become expensive very quickly if an expansive boundary and detailed scope are selected. It is very important to use screening techniques, as discussed in this chapter, to limit the overall effort. The goals of risk management should be kept in mind as a methodology is selected or developed. The methodology should concentrate on areas where identification of risk and the selection of cost-effective safeguards are needed.

The cost of different methodologies can be significant. A "back-of-the-envelope" analysis or high-medium-low ranking can often provide all the information needed. However, especially for the selection of expensive safeguards or the analysis of systems with unknown consequences, more in-depth analysis may be warranted.

References

Caelli, William, Dennis Longley, and Michael Shain. Information Security Handbook. New York, NY: Stockton Press, 1991.

Carroll, J.M. Managing Risk: A Computer-Aided Strategy. Boston, MA: Butterworths, 1984.

Gilbert, Irene. Guide for Selecting Automated Risk Analysis Tools. Special Publication 500-174. Gaithersburg, MD: National Institute of Standards and Technology, October 1989.

Jaworski, Lisa. "Tandem Threat Scenarios: A Risk Assessment Approach." Proceedings of the 16th National Computer Security Conference. Baltimore, MD: Vol. 1, 1993. pp. 155-164.

Katzke, Stuart. "A Framework for Computer Security Risk Management." 8th Asia Pacific Information Systems Control Conference Proceedings. EDP Auditors Association, Inc., Singapore, October 12-14, 1992.

Levine, M. "Audit Serve Security Evaluation Criteria." Audit Vision. 2(2), 1992. pp. 29-40.

National Bureau of Standards. Guideline for Automatic Data Processing Risk Analysis. Federal Information Processing Standard Publication 65. August 1979.

National Institute of Standards and Technology. Guideline for the Analysis of Local Area Network Security. Federal Information Processing Standard Publication 191. November 1994.

O'Neill, M., and F. Henninge, Jr. "Understanding ADP System and Network Security Considerations and Risk Analysis." ISSA Access. 5(4), 1992. pp. 14-17.

Proceedings, 4th International Computer Security Risk Management Model Builders Workshop. University of Maryland, National Institute of Standards and Technology, College Park, MD, August 6-8, 1991.

Proceedings, 3rd International Computer Security Risk Management Model Builders Workshop. Los Alamos National Laboratory, National Institute of Standards and Technology, National Computer Security Center, Santa Fe, New Mexico, August 21-23, 1990.

Proceedings, 1989 Computer Security Risk Management Model Builders Workshop. AIT Corporation, Communications Security Establishment, National Computer Security Center, National Institute of Standards and Technology, Ottawa, Canada, June 20-22, 1989.

Proceedings, 1988 Computer Security Risk Management Model Builders Workshop. Martin Marietta, National Bureau of Standards, National Computer Security Center, Denver, Colorado, May 24-26, 1988.

Spiegel, L. "Good LAN Security Requires Analysis of Corporate Data." Infoworld. 15(52), 1993. p. 49.

Wood, C. "Building Security Into Your System Reduces the Risk of a Breach." LAN Times. 10(3), 1993. p. 47.

Wood, C., et al. Computer Security: A Comprehensive Controls Checklist. New York, NY: John Wiley & Sons, 1987.

Chapter 8
SECURITY AND PLANNING IN THE COMPUTER SYSTEM LIFE CYCLE

Like other aspects of information processing systems, security is most effective and efficient if planned and managed throughout a computer system's life cycle, from initial planning, through design, implementation, and operation, to disposal. (A computer system here refers to a collection of processes, hardware, and software that perform a function; this includes applications, networks, and support systems.) Many security-relevant events and analyses occur during a system's life. This chapter explains the relationship among them and how they fit together. (Although this chapter addresses a life cycle process that starts with system initiation, the process can be initiated at any point in the life cycle.) It also discusses the important role of security planning in helping to ensure that security issues are addressed comprehensively.

This chapter examines:

system security plans,
the components of the computer system life cycle,
the benefits of integrating security into the computer system life cycle, and
techniques for addressing security in the life cycle.

8.1 Computer Security Act Issues for Federal Systems

Planning is used to help ensure that security is addressed in a comprehensive manner throughout a system's life cycle. For federal systems, the Computer Security Act of 1987 sets forth a statutory requirement for the preparation of computer security plans for all sensitive systems. (An organization will typically have many computer security plans; it is not necessary that a separate and distinct plan exist for every physical system, such as each PC. Plans may address, for example, the computing resources within an operational element, a major application, or a group of similar systems, either technologically or functionally.)
The intent and spirit of the Act are to improve computer security in the federal government, not to create paperwork. In keeping with this intent, the Office of Management and Budget (OMB) and NIST have guided agencies toward a planning process that emphasizes good planning and management of computer security within an agency and for each computer system. As emphasized in this chapter, computer security management should be a part of computer systems management. The benefit of having a distinct computer security plan is to ensure that computer security is not overlooked.

The Act required the submission of plans to NIST and the National Security Agency (NSA) for review and comment, a process which has been completed. Current guidance on implementing the Act requires agencies to obtain independent review of computer security plans. This review may be internal or external, as deemed appropriate by the agency.

"The purpose of the system security plan is to provide a basic overview of the security and privacy requirements of the subject system and the agency's plan for meeting those requirements. The system security plan may also be viewed as documentation of the structured process of planning adequate, cost-effective security protection for a system."
- OMB Bulletin 90-08

A "typical" plan briefly describes the important security considerations for the system and provides references to more detailed documents, such as system security plans, contingency plans, training programs, accreditation statements, incident handling plans, or audit results. This enables the plan to be used as a management tool without requiring repetition of existing documents. For smaller systems, the plan may include all security documentation. As with other security documents, if a plan addresses specific vulnerabilities or other information that could compromise the system, it should be kept private. It also has to be kept up-to-date.

8.2 Benefits of Integrating Security in the Computer System Life Cycle

Although a computer security plan can be developed for a system at any point in the life cycle, the recommended approach is to draw up the plan at the beginning of the computer system life cycle. Security, like other aspects of a computer system, is best managed if planned for throughout the computer system life cycle. It has long been a tenet of the computer community that it costs ten times more to add a feature to a system after it has been designed than to include the feature in the system at the initial design phase. The principal reason for implementing security during a system's development is that it is more difficult to implement it later (as is usually reflected in the higher costs of doing so). Late implementation also tends to disrupt ongoing operations.

Different people can provide security input throughout the life cycle of a system, including the accrediting official, data users, systems users, and system technical staff.

Security also needs to be incorporated into the later phases of the computer system life cycle to help ensure that security keeps up with changes in the system's environment, technology, procedures, and personnel. It also ensures that security is considered in system upgrades, including the purchase of new components or the design of new modules.
Adding new security controls to a system after a security breach, mishap, or audit can lead to haphazard security that can be more expensive and less effective than security that is already integrated into the system. It can also significantly degrade system performance. Of course, it is virtually impossible to anticipate the whole array of problems that may arise during a system's lifetime. Therefore, it is generally useful to update the computer security plan at least at the end of each phase in the life cycle and after each re-accreditation. For many systems, it may be useful to update the plan more often.

Life cycle management also helps document security-relevant decisions, in addition to helping assure management that security is fully considered in all phases. This documentation benefits system management officials as well as oversight and independent audit groups. System management personnel use documentation as a self-check and reminder of why decisions were made so that the impact of changes in the environment can be more easily assessed. Oversight and independent audit groups use the documentation in their reviews to verify that system management has done an adequate job and to highlight areas where security may have been overlooked. This includes examining whether the documentation accurately reflects how the system is actually being operated.

Within the federal government, the Computer Security Act of 1987 and its implementing instructions provide specific requirements for computer security plans. These plans are a form of documentation that helps ensure that security is considered not only during system design and development but also throughout the rest of the life cycle. Plans can also be used to be sure that the requirements of Appendix III to OMB Circular A-130, as well as other applicable requirements, have been addressed.

8.3 Overview of the Computer System Life Cycle

There are many models for the computer system life cycle, but most contain five basic phases, as pictured in Figure 8.1.

Initiation. During the initiation phase, the need for a system is expressed and the purpose of the system is documented.

Development/Acquisition. During this phase the system is designed, purchased, programmed, developed, or otherwise constructed. This phase often consists of other defined cycles, such as the system development cycle or the acquisition cycle.

Implementation. After initial system testing, the system is installed or fielded.

Operation/Maintenance. During this phase the system performs its work. The system is almost always modified by the addition of hardware and software and by numerous other events.

Disposal. The computer system is disposed of once the transition to a new computer system is completed.

Each phase can apply to an entire system, a new component or module, or a system upgrade. As with other aspects of systems management, the level of detail and analysis for each activity described here is determined by many factors including size, complexity, system cost, and sensitivity.
Many different "life cycles" are associated with computer systems, including the system development, acquisition, and information life cycles.

Many people find the concept of a computer system life cycle confusing because many cycles occur within the broad framework of the entire computer system life cycle. For example, an organization could develop a system, using a system development life cycle. During the system's life, the organization might purchase new components, using the acquisition life cycle.

Moreover, the computer system life cycle itself is merely one component of other life cycles. For example, consider the information life cycle. Normally information, such as personnel data, is used much longer than the life of one computer system. If an employee works for an organization for thirty years and collects retirement for another twenty, the employee's automated personnel record will probably pass through many different organizational computer systems owned by the company. In addition, parts of the information will also be used in other computer systems, such as those of the Internal Revenue Service and the Social Security Administration.

8.4 Security Activities in the Computer System Life Cycle

This section reviews the security activities that arise in each stage of the computer system life cycle. (See Figure 8.1.) [Footnote: For brevity and because of the uniqueness of each system, none of these discussions can include the details of all possible security activities at any particular life cycle phase.]

8.4.1 Initiation

The conceptual and early design process of a system involves the discovery of a need for a new system or enhancements to an existing system; early ideas as to system characteristics and proposed functionality; brainstorming sessions on architectural, performance, or functional system aspects; and environmental, financial, political, or other constraints. At the same time, the basic security aspects of a system should be developed along with the early system design. This can be done through a sensitivity assessment.

[Figure 8.1 depicts the five basic phases of the computer system life cycle: initiation, development/acquisition, implementation, operation/maintenance, and disposal.]

8.4.1.1 Conducting a Sensitivity Assessment

The definition of sensitive is often misconstrued. Sensitive is synonymous with important or valuable. Some data is sensitive because it must be kept confidential. Much more data, however, is sensitive because its integrity or availability must be assured. The Computer Security Act and OMB Circular A-130 clearly state that information is sensitive if its unauthorized disclosure, modification (i.e., loss of integrity), or unavailability would harm the agency. In general, the more important a system is to the mission of the agency, the more sensitive it is.

A sensitivity assessment looks at the sensitivity of both the information to be processed and the system itself. The assessment should consider legal implications, organization policy (including federal and agency policy if a federal system), and the functional needs of the system. Sensitivity is normally expressed in terms of integrity, availability, and confidentiality. Such factors as the importance of the system to the organization's mission and the consequences of unauthorized modification, unauthorized disclosure, or unavailability of the system or data need to be examined when assessing sensitivity.
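As one hedged illustration of expressing sensitivity along these three dimensions, an assessment's ratings might be recorded as follows. The rating scale and the "most sensitive aspect" summary rule are assumptions made for this sketch, not requirements of the Act or of this handbook.

    # Illustrative record of a sensitivity assessment expressed in terms
    # of confidentiality, integrity, and availability. The scale and the
    # "highest rating wins" summary rule are assumptions.
    from dataclasses import dataclass

    RATINGS = ("low", "medium", "high")

    @dataclass
    class SensitivityAssessment:
        system: str
        confidentiality: str
        integrity: str
        availability: str

        def overall(self) -> str:
            # Treat the system as being as sensitive as its most
            # sensitive aspect.
            return max((self.confidentiality, self.integrity,
                        self.availability), key=RATINGS.index)

    personnel = SensitivityAssessment(
        system="personnel records",
        confidentiality="high",  # disclosure would harm individuals
        integrity="high",        # unauthorized changes would harm the agency
        availability="medium",   # short outages may be tolerable
    )
    print(personnel.overall())   # -> high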
To address these types of issues, the people who use or own the system or information should participate in the assessment.

A sensitivity assessment should answer the following questions:

What information is handled by the system?

What kind of potential damage could occur through error, unauthorized disclosure or modification, or unavailability of data or the system?

What laws or regulations affect security (e.g., the Privacy Act or the Fair Trade Practices Act)?

To what threats is the system or information particularly vulnerable?

Are there significant environmental considerations (e.g., hazardous location of system)?

What are the security-relevant characteristics of the user community (e.g., level of technical sophistication and training or security clearances)?

What internal security standards, regulations, or guidelines apply to this system?

The sensitivity assessment starts an analysis of security that continues throughout the life cycle. The assessment helps determine if the project needs special security oversight, if further analysis is needed before committing to begin system development (to ensure feasibility at a reasonable cost), or in rare instances, whether the security requirements are so strenuous and costly that system development or acquisition will not be pursued. The sensitivity assessment can be included with the system initiation documentation either as a separate document or as a section of another planning document. The development of security features, procedures, and assurances, described in the next section, builds on the sensitivity assessment.

A sensitivity assessment can also be performed during the planning stages of system upgrades (for upgrades being either procured or developed in house). In this case, the assessment focuses on the affected areas. If the upgrade significantly affects the original assessment, steps can be taken to analyze the impact on the rest of the system. For example, are new controls needed? Will some controls become unnecessary?

8.4.2 Development/Acquisition

For most systems, the development/acquisition phase is more complicated than the initiation phase. Security activities can be divided into three parts:

determining security features, assurances, and operational practices;
incorporating these security requirements into design specifications; and
actually acquiring them.

These divisions apply to systems that are designed and built in house, to systems that are purchased, and to systems developed using a hybrid approach.

During this phase, technical staff and system sponsors should actively work together to ensure that the technical designs reflect the system's security needs. As with development and incorporation of other system requirements, this process requires an open dialogue between technical staff and system sponsors. It is important to address security requirements effectively in synchronization with development of the overall system.

8.4.2.1 Determining Security Requirements

During the first part of the development/acquisition phase, system planners define the requirements of the system. Security requirements should be developed at the same time. These requirements can be expressed as technical features (e.g., access controls), assurances (e.g., background checks for system developers), or operational practices (e.g., awareness and training).
System security requirements, like other system requirements, are derived from a number of sources including law, policy, applicable standards and guidelines, functional needs of the system, and cost-benefit trade-offs.

Law. Besides specific laws that place security requirements on information, such as the Privacy Act of 1974, there are laws, court cases, legal opinions, and other similar legal material that may affect security directly or indirectly.

Policy. As discussed in Chapter 5, management officials issue several different types of policy. System security requirements are often derived from issue-specific policy.

Standards and Guidelines. International, national, and organizational standards and guidelines are another source for determining security features, assurances, and operational practices. Standards and guidelines are often written in an "if...then" manner (e.g., if the system is encrypting data, then a particular cryptographic algorithm should be used). Many organizations specify baseline controls for different types of systems, such as administrative, mission- or business-critical, or proprietary. As required, special care should be given to interoperability standards.

Functional Needs of the System. The purpose of security is to support the function of the system, not to undermine it. Therefore, many aspects of the function of the system will produce related security requirements.

Cost-Benefit Analysis. When considering security, cost-benefit analysis is done through risk assessment, which examines the assets, threats, and vulnerabilities of the system in order to determine the most appropriate, cost-effective safeguards (that comply with applicable laws, policy, standards, and the functional needs of the system). Appropriate safeguards are normally those whose anticipated benefits outweigh their costs. Benefits and costs include monetary and nonmonetary issues, such as prevented losses, maintaining an organization's reputation, decreased user friendliness, or increased system administration.

Risk assessment, like cost-benefit analysis, is used to support decision making. It helps managers select cost-effective safeguards. The extent of the risk assessment, like that of other cost-benefit analyses, should be commensurate with the complexity and cost (normally an indicator of complexity) of the system and the expected benefits of the assessment. Risk assessment is further discussed in Chapter 7.

Risk assessment can be performed during the requirements analysis phase of a procurement or the design phase of a system development cycle. Risk should also normally be assessed during the development/acquisition phase of a system upgrade. The risk assessment may be performed once or multiple times, depending upon the project's methodology.

Care should be taken in differentiating between security risk assessment and project risk analysis. Many system development and acquisition projects analyze the risk of failing to successfully complete the project, which is a different activity from security risk assessment.
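One common quantitative form of the cost-benefit reasoning described above, though not one prescribed by this handbook, is annualized loss expectancy: a safeguard is normally attractive when the annual loss it is expected to prevent exceeds its annual cost. The dollar figures and rates below are invented for illustration.

    # A hedged sketch of safeguard selection using annualized loss
    # expectancy (ALE). All figures are hypothetical.

    def ale(single_loss: float, occurrences_per_year: float) -> float:
        """ALE = expected loss per incident * expected incidents per year."""
        return single_loss * occurrences_per_year

    ale_without = ale(single_loss=50_000, occurrences_per_year=0.4)   # $20,000/yr
    ale_with    = ale(single_loss=50_000, occurrences_per_year=0.05)  # $2,500/yr
    safeguard_cost = 6_000                                            # per year

    net_benefit = (ale_without - ale_with) - safeguard_cost
    print(f"Net annual benefit: ${net_benefit:,.0f}")  # $11,500: benefits outweigh costs

Nonmonetary benefits and costs, such as reputation or reduced user friendliness, still have to be weighed alongside any such calculation.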
8.4.2.2 Incorporating Security Requirements Into Specifications

Determining security features, assurances, and operational practices can yield significant security information and often voluminous requirements. This information needs to be validated, updated, and organized into the detailed security protection requirements and specifications used by systems designers or purchasers. Specifications can take on quite different forms, depending on the methodology used to develop the system, or whether the system, or parts of the system, are being purchased off the shelf.

As specifications are developed, it may be necessary to update initial risk assessments. A safeguard recommended by the risk assessment could be incompatible with other requirements, or a control may be difficult to implement. For example, a security requirement that prohibits dial-in access could prevent employees from checking their e-mail while away from the office. [Footnote: This is an example of a risk-based decision.]

Besides the technical and operational controls of a system, assurance also should be addressed. The degree to which assurance (that the security features and practices can and do work correctly and effectively) is needed should be determined early. Once the desired level of assurance is determined, it is necessary to figure out how the system will be tested or reviewed to determine whether the specifications have been satisfied (to obtain the desired assurance). This applies to both system developments and acquisitions. For example, if rigorous assurance is needed, the ability to test the system or to provide another form of initial and ongoing assurance needs to be designed into the system or otherwise provided for. Developing testing specifications early can be critical to being able to cost-effectively test security features. See Chapter 9 for more information.

8.4.2.3 Obtaining the System and Related Security Activities

During this phase, the system is actually built or bought. If the system is being built, security activities may include developing the system's security aspects, monitoring the development process itself for security problems, responding to changes, and monitoring threats. Threats or vulnerabilities that may arise during the development phase include Trojan horses, incorrect code, poorly functioning development tools, manipulation of code, and malicious insiders.

If the system is being acquired off the shelf, security activities may include monitoring to ensure security is a part of market surveys, contract solicitation documents, and evaluation of proposed systems. Many systems use a combination of development and acquisition. In this case, security activities include both sets. In federal government contracting, it is often useful if personnel with security expertise participate as members of the source selection board to help evaluate the security aspects of proposals.

As the system is built or bought, choices are made about the system, which can affect security. These choices include selection of specific off-the-shelf products, finalizing an architecture, or selecting a processing site or platform. Additional security analysis will probably be necessary.

In addition to obtaining the system, operational practices need to be developed.
These refer to human activities that take place around the system, such as contingency planning, awareness and training, and preparing documentation. The chapters in the Operational Controls section of this handbook discuss these areas. These practices need to be developed along with the system, although they are often developed by different individuals. These areas, like technical specifications, should be considered from the beginning of the development and acquisition phase.

8.4.3 Implementation

Some life cycle planning efforts do not specify a separate implementation phase. (It is often incorporated into the end of development and acquisition or the beginning of operation and maintenance.) However, from a security point of view, a critical security activity, accreditation, occurs between development and the start of system operation. The other activities described in this section, turning on the controls and testing, are often incorporated at the end of the development/acquisition phase.

8.4.3.1 Install/Turn-On Controls

While obvious, this activity is often overlooked. When acquired, a system often comes with security features disabled. These need to be enabled and configured. For many systems this is a complex task requiring significant skills. Custom-developed systems may also require similar work.

8.4.3.2 Security Testing

System security testing includes both the testing of the particular parts of the system that have been developed or acquired and the testing of the entire system. Security management, physical facilities, personnel, procedures, the use of commercial or in-house services (such as networking services), and contingency planning are examples of areas that affect the security of the entire system, but may be specified outside of the development or acquisition cycle. Since only items within the development or acquisition cycle will have been tested during system acceptance testing, separate tests or reviews may need to be performed for these additional security elements.

Security certification is a formal testing of the security safeguards implemented in the computer system to determine whether they meet applicable requirements and specifications. [Footnote: Some federal agencies use a broader definition of the term certification to refer to security reviews or evaluations, formal or informal, that take place prior to and are used to support accreditation.] To provide more reliable technical information, certification is often performed by an independent reviewer, rather than by the people who designed the system.

8.4.3.3 Accreditation

System security accreditation is the formal authorization by the accrediting (management) official for system operation and an explicit acceptance of risk. It is usually supported by a review of the system, including its management, operational, and technical controls.
This review may include a detailed technical evaluation (such as a Federal Information Processing Standard 102 certification, particularly for complex, critical, or high-risk systems), security evaluation, risk assessment, audit, or other such review. If the life cycle process is being used to manage a project (such as a system upgrade), it is important to recognize that the accreditation is for the entire system, not just for the new addition.

The best way to view computer security accreditation is as a form of quality control. It forces managers and technical staff to work together to find the best fit for security, given technical constraints, operational constraints, and mission requirements. The accreditation process obliges managers to make critical decisions regarding the adequacy of security safeguards. A decision based on reliable information about the effectiveness of technical and non-technical safeguards and the residual risk is more likely to be a sound decision.

Sample Accreditation Statement

In accordance with (Organization Directive), I hereby issue an accreditation for (name of system). This accreditation is my formal declaration that a satisfactory level of operational security is present and that the system can operate under reasonable risk. This accreditation is valid for three years. The system will be re-evaluated annually to determine if changes have occurred affecting its security.

After deciding on the acceptability of security safeguards and residual risks, the accrediting official should issue a formal accreditation statement. While most flaws in system security are not severe enough to remove an operational system from service or to prevent a new system from becoming operational, the flaws may require some restrictions on operation (e.g., limitations on dial-in access or electronic connections to other organizations). In some cases, an interim accreditation may be granted, allowing the system to operate while requiring a review at the end of the interim period, presumably after security upgrades have been made.

8.4.4 Operation and Maintenance

Many security activities take place during the operational phase of a system's life. In general, these fall into three areas: (1) security operations and administration; (2) operational assurance; and (3) periodic re-analysis of the security. Figure 8.2 diagrams the flow of security activities during the operational phase.

8.4.4.1 Security Operations and Administration

Operation of a system involves many security activities discussed throughout this handbook. Performing backups, holding training classes, managing cryptographic keys, keeping up with user administration and access privileges, and updating security software are some examples.

8.4.4.2 Operational Assurance

Operational assurance examines whether a system is operated according to its current security requirements. This includes both the actions of people who operate or use the system and the functioning of technical controls.

Security is never perfect when a system is implemented. In addition, system users and operators discover new ways to intentionally or unintentionally bypass or subvert security. Changes in the system or the environment can create new vulnerabilities. Strict adherence to procedures is rare over time, and procedures become outdated. Thinking risk is minimal, users may tend to bypass security measures and procedures.

As shown in Figure 8.2, changes occur. Operational assurance is one way of becoming aware of these changes, whether they are new vulnerabilities (or old vulnerabilities that have not been corrected), system changes, or environmental changes.
Operational assurance is the process of reviewing an operational system to see that security controls, both automated and manual, are functioning correctly and effectively.

To maintain operational assurance, organizations use two basic methods: system audits and monitoring. These terms are used loosely within the computer security community and often overlap. A system audit is a one-time or periodic event to evaluate security. Monitoring refers to an ongoing activity that examines either the system or the users. In general, the more "real-time" an activity is, the more it falls into the category of monitoring. (See Chapter 9.)

[Figure 8.2 diagrams the flow of security activities during the operational phase.]

8.4.4.3 Managing Change

Computer systems and the environments in which they operate change continually. In response to various events such as user complaints, availability of new features and services, or the discovery of new threats and vulnerabilities, system managers and users modify the system and incorporate new features, new procedures, and software updates. Security change management helps develop new security requirements.

The environment in which the system operates also changes. Networking and interconnections tend to increase. A new user group may be added, possibly external groups or anonymous groups. New threats may emerge, such as increases in network intrusions or the spread of personal computer viruses. If the system has a configuration control board or other structure to manage technical system changes, a security specialist can be assigned to the board to make determinations about whether (and if so, how) changes will affect security.

Security should also be considered during system upgrades (and other planned changes) and in determining the impact of unplanned changes. As shown in Figure 8.2, when a change occurs or is planned, a determination is made whether the change is major or minor. A major change, such as reengineering the structure of the system, significantly affects the system. Major changes often involve the purchase of new hardware, software, or services or the development of new software modules.

An organization does not need to have a specific cutoff for major-minor change decisions. A sliding scale between the two can be implemented by using a combination of the following methods, as sketched after this list:

Major change. A major change requires analysis to determine security requirements. The process described above can be used, although the analysis may focus only on the area(s) in which the change has occurred or will occur. If the original analysis and system changes have been documented throughout the life cycle, the analysis will normally be much easier. Since these changes result in significant system acquisitions, development work, or changes in policy, the system should be reaccredited to ensure that the residual risk is still acceptable.

Minor change. Many of the changes made to a system do not require the extensive analysis performed for major changes, but do require some analysis. Each change can involve a limited risk assessment that weighs the pros (benefits) and cons (costs) and that can even be performed on-the-fly at meetings. Even if the analysis is conducted informally, decisions should still be appropriately documented. This process recognizes that even "small" decisions should be risk-based.
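The sliding scale just described might be approximated in practice by a simple triage rule, as in the following sketch. The triggering criteria are hypothetical examples, not a required cutoff.

    # Illustrative triage of system changes into "major" (full analysis
    # and reaccreditation) and "minor" (limited, documented risk
    # assessment). The trigger list is an assumption for illustration.

    MAJOR_TRIGGERS = (
        "new hardware purchase",
        "new software module",
        "structural reengineering",
        "policy change",
    )

    def classify_change(description: str) -> str:
        if any(t in description.lower() for t in MAJOR_TRIGGERS):
            return "major: analyze security requirements and reaccredit"
        return "minor: limited risk assessment; document the decision"

    print(classify_change("Structural reengineering of the billing system"))
    print(classify_change("Routine vendor patch applied to print server"))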
8.4.4.4 Periodic Reaccreditation

Periodically, it is useful to formally reexamine the security of a system from a wider perspective. The analysis, which leads to reaccreditation, should address such questions as: Is the security still sufficient? Are major changes needed?

The reaccreditation should address high-level security and management concerns as well as the implementation of the security. It is not always necessary to perform a new risk assessment or certification in conjunction with the reaccreditation, but the activities support each other (and both need to be performed periodically). The more extensive system changes have been, the more extensive the analyses should be (e.g., a risk assessment or re-certification). A risk assessment is likely to uncover security concerns that result in system changes. After the system has been changed, it may need testing (including certification). Management then reaccredits the system for continued operation if the risk is acceptable.

8.4.5 Disposal

The disposal phase of the computer system life cycle involves the disposition of information, hardware, and software. Information may be moved to another system, archived, discarded, or destroyed. When archiving information, consider the method for retrieving the information in the future. The technology used to create the records may not be readily available in the future.

It is important to consider legal requirements for records retention when disposing of computer systems. For federal systems, system management officials should consult with their agency office responsible for retaining and archiving federal records.

Media Sanitization

Since electronic information is easy to copy and transmit, information that is sensitive to disclosure often needs to be controlled throughout the computer system life cycle so that managers can ensure its proper disposition. The removal of information from a storage medium (such as a hard disk or tape) is called sanitization. Different kinds of sanitization provide different levels of protection. A distinction can be made between clearing information (rendering it unrecoverable by keyboard attack) and purging (rendering information unrecoverable against laboratory attack). There are three general methods of purging media: overwriting, degaussing (for magnetic media only), and destruction.

Hardware and software can be sold, given away, or discarded. There is rarely a need to destroy hardware, except for some storage media containing confidential information that cannot be sanitized without destruction. The disposition of software needs to be in keeping with its license or other agreements with the developer, if applicable. Some licenses are site-specific or contain other agreements that prevent the software from being transferred.

Measures may also have to be taken for the future use of data that has been encrypted, such as taking appropriate steps to ensure the secure long-term storage of cryptographic keys.
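The overwriting method of sanitization mentioned above can be sketched as follows. This is a simplified illustration of clearing a single file; it is not sufficient for purging media against laboratory attack, and the file name is a hypothetical placeholder.

    # A hedged sketch of the "overwriting" method: replace a file's
    # contents with random bytes before deletion. Clearing one file this
    # way does NOT purge the underlying medium (journaling file systems,
    # wear leveling, and slack space can retain data), so this
    # illustrates the concept rather than providing laboratory-grade
    # purging.
    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())  # push each pass out to the device
        os.remove(path)

    # Hypothetical usage:
    # overwrite_and_delete("/tmp/old-personnel-extract.dat")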
8.5 Interdependencies

Like many management controls, life cycle planning relies upon other controls. Three closely linked control areas are policy, assurance, and risk management.

Policy. The development of system-specific policy is an integral part of determining the security requirements.

Assurance. Good life cycle management provides assurance that security is appropriately considered in system design and operation.

Risk Management. The maintenance of security throughout the operational phase of a system is a process of risk management: analyzing risk, reducing risk, and monitoring safeguards. Risk assessment is a critical element in designing the security of systems and in reaccreditations.

8.6 Cost Considerations

Security is a factor throughout the life cycle of a system. Sometimes security choices are made by default, without anyone analyzing why choices are made; sometimes security choices are made carefully, based on analysis. The first case is likely to result in a system with poor security that is susceptible to many types of loss. In the second case, the cost of life cycle management should be much smaller than the losses avoided. The major cost considerations for life cycle management are personnel costs and some delays, as the system progresses through the life cycle, for completing analyses and reviews and obtaining management approvals.

It is possible to overmanage a system: to spend more time planning, designing, and analyzing risk than is necessary. Planning, by itself, does not further the mission or business of an organization. Therefore, while security life cycle management can yield significant benefits, the effort should be commensurate with the system's size, complexity, and sensitivity and the risks associated with the system. In general, the higher the value of the system, the newer the system's architecture, technologies, and practices, and the worse the impact if the system security fails, the more effort should be spent on life cycle management.

References

Communications Security Establishment. A Framework for Security Risk Management in Information Technology Systems. Canada.

Dykman, Charlene A., ed., and Charles K. Davis, assoc. ed. Control Objectives: Controls in an Information Systems Environment: Objectives, Guidelines, and Audit Procedures. (fourth edition). Carol Stream, IL: The EDP Auditors Foundation, Inc., April 1992.

Guttman, Barbara. Computer Security Considerations in Federal Procurements: A Guide for Procurement Initiators, Contracting Officers, and Computer Security Officials. Special Publication 800-4. Gaithersburg, MD: National Institute of Standards and Technology, March 1992.

Institute of Internal Auditors Research Foundation. System Auditability and Control Report. Altamonte Springs, FL: The Institute of Internal Auditors, 1991.

Murphy, Michael, and Xenia Ley Parker. Handbook of EDP Auditing, especially Chapter 2, "The Auditing Profession," and Chapter 3, "The EDP Auditing Profession." Boston, MA: Warren, Gorham & Lamont, 1989.

National Bureau of Standards. Guideline for Computer Security Certification and Accreditation. Federal Information Processing Standard Publication 102. September 1983.

National Institute of Standards and Technology. "Disposition of Sensitive Automated Information." Computer Systems Laboratory Bulletin. October 1992.

National Institute of Standards and Technology. "Sensitivity of Information." Computer Systems Laboratory Bulletin. November 1992.

Office of Management and Budget. "Guidance for Preparation of Security Plans for Federal Computer Systems That Contain Sensitive Information." OMB Bulletin 90-08. 1990.

Ruthberg, Zella G., Bonnie T. Fisher, and John W. Lainhart IV. System Development Auditor. Oxford, England: Elsevier Advanced Technology, 1991.
Ruthberg, Z., et al. Guide to Auditing for Controls and Security: A System Development Life Cycle Approach. Special Publication 500-153. Gaithersburg, MD: National Bureau of Standards, April 1988.

Vickers Benzel, T.C. Developing Trusted Systems Using DOD-STD-2167A. Oakland, CA: IEEE Computer Society Press, 1990.

Wood, C. "Building Security Into Your System Reduces the Risk of a Breach." LAN Times. 10(3), 1993. p. 47.

Chapter 9

ASSURANCE

Computer security assurance is the degree of confidence one has that the security measures, both technical and operational, work as intended to protect the system and the information it processes. Assurance is not, however, an absolute guarantee that the measures work as intended. Like the closely related areas of reliability and quality, assurance can be difficult to analyze; however, it is something people expect and obtain (though often without realizing it). For example, people may routinely get product recommendations from colleagues but may not consider such recommendations as providing assurance.

Assurance is a degree of confidence, not a true measure of how secure the system actually is. This distinction is necessary because it is extremely difficult -- and in many cases virtually impossible -- to know exactly how secure a system is.

Assurance is a challenging subject because it is difficult to describe and even more difficult to quantify. Because of this, many people refer to assurance as a "warm fuzzy feeling" that controls work as intended. However, it is possible to apply a more rigorous approach by knowing two things: (1) who needs to be assured and (2) what types of assurance can be obtained. The person who needs to be assured is the management official who is ultimately responsible for the security of the system. Within the federal government, this person is the authorizing or accrediting official. [Footnote: Accreditation is a process used primarily within the federal government. It is the process of managerial authorization for processing. Different agencies may use other terms for this approval function. The terms used here are consistent with Federal Information Processing Standard 102, Guideline for Computer Security Certification and Accreditation. (See reference section of this chapter.)]

There are many methods and tools for obtaining assurance. For discussion purposes, this chapter categorizes assurance in terms of a general system life cycle. The chapter first discusses planning for assurance and then presents the two categories of assurance methods and tools: (1) design and implementation assurance and (2) operational assurance. Operational assurance is further categorized into audits and monitoring.

The division between design and implementation assurance and operational assurance can be fuzzy. While such issues as configuration management or audits are discussed under operational assurance, they may also be vital during a system's development. The discussion tends to focus more on technical issues during design and implementation assurance and to be a mixture of management, operational, and technical issues under operational assurance.
The reader should keep in mind that the division is somewhat artificial and that there is substantial overlap.

9.1 Accreditation and Assurance

Accreditation is a management official's formal acceptance of the adequacy of a system's security. The best way to view computer security accreditation is as a form of quality control. It forces managers and technical staff to work together to find workable, cost-effective solutions given security needs, technical constraints, operational constraints, and mission or business requirements. The accreditation process obliges managers to make the critical decision regarding the adequacy of security safeguards and, therefore, to recognize and perform their role in securing their systems. In order for the decisions to be sound, they need to be based on reliable information about the implementation of both technical and nontechnical safeguards. These include:

Technical features (Do they operate as intended?).

Operational practices (Is the system operated according to stated procedures?).

Overall security (Are there threats which the technical features and operational practices do not address?).

Remaining risks (Are they acceptable?).

A computer system should be accredited before the system becomes operational, with periodic reaccreditation after major system changes or when significant time has elapsed. [Footnote: OMB Circular A-130 requires management security authorization of operation for federal systems.] Even if a system was not initially accredited, the accreditation process can be initiated at any time. Chapter 8 further discusses accreditation.

9.1.1 Accreditation and Assurance

Assurance is an extremely important -- but not the only -- element in accreditation. As shown in the diagram, assurance addresses whether the technical measures and procedures operate either (1) according to a set of security requirements and specifications or (2) according to general quality principles. Accreditation also addresses whether the system's security requirements are correct and well implemented and whether the level of quality is sufficiently high. These activities are discussed in Chapters 7 and 8.

9.1.2 Selecting Assurance Methods

The accrediting official makes the final decision about how much and what types of assurance are needed for a system. For this decision to be informed, it should be derived from a review of security, such as a risk assessment or other study (e.g., certification), as deemed appropriate by the accrediting official. [Footnote: In the past, accreditation has been defined to require a certification, which is an in-depth testing of technical controls. It is now recognized within the federal government that other analyses (e.g., a risk analysis or audit) can also provide sufficient assurance for accreditation.] The accrediting official needs to be in a position to analyze the pros and cons of the cost of assurance, the cost of controls, and the risks to the organization. At the end of the accreditation process, the accrediting official will be the one to accept the remaining risk. Thus, the selection of assurance methods should be coordinated with the accrediting official.
In selecting assurance methods, the need for assurance should be weighed against its cost. Assurance can be quite expensive, especially if extensive testing is done. Each method has strengths and weaknesses in terms of cost and what kind of assurance is actually being delivered. A combination of methods can often provide greater assurance, since no method is foolproof, and can be less costly than extensive testing.

The accrediting official is not the only arbiter of assurance. Other officials who use the system should also be consulted. (For example, a Production Manager who relies on a Supply System should provide input to the Supply Manager.) In addition, there may be constraints outside the accrediting official's control that also affect the selection of methods. For instance, some of the methods may unduly restrict competition in acquisitions of federal information processing resources or may be contrary to the organization's privacy policies. Certain assurance methods may be required by organizational policy or directive.

9.2 Planning and Assurance

Assurance planning should begin during the planning phase of the system life cycle, either for new systems or for system upgrades. It makes sense to plan for assurance when planning for other system requirements. If a system is going to need extensive testing, it should be built to facilitate such testing.

Planning for assurance helps a manager make decisions about what kind of assurance will be cost-effective. If a manager waits until a system is built or bought to consider assurance, the number of ways to obtain assurance may be much smaller than if the manager had planned for it earlier, and the remaining assurance options may be more expensive.

9.3 Design and Implementation Assurance

Design and implementation assurance addresses whether the features of a system, application, or component meet security requirements and specifications and whether they are well designed and well built. Chapter 8 discusses the source for security requirements and specifications. Design and implementation assurance examines system design, development, and installation. It is usually associated with the development/acquisition and implementation phases of the system life cycle; however, it should also be considered throughout the life cycle as the system is modified.

Design and implementation assurance should be examined from two points of view: the component and the system. Component assurance looks at the security of a specific product or system component, such as an operating system, application, security add-on, or telecommunications module. System assurance looks at the security of the entire system, including the interaction between products and modules.

As stated earlier, assurance can address whether the product or system meets a set of security specifications, or it can provide other evidence of quality. This section outlines the major methods for obtaining design and implementation assurance.

9.3.1 Testing and Certification

Testing can address the quality of the system as built, as implemented, or as operated. Thus, it can be performed throughout the development cycle, after system installation, and throughout its operational phase. Some common testing techniques include functional testing (to see if a given function works according to its requirements) or penetration testing (to see if security can be bypassed). These techniques can range from trying several test cases to in-depth studies using metrics, automated tools, or multiple detailed test cases.
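Functional testing of a security feature can be as small as checking that a stated requirement actually holds. The sketch below assumes a hypothetical requirement that passwords be at least eight characters long and contain both letters and digits; the checker and the test cases are invented for illustration.

    # A minimal functional test of a hypothetical security requirement:
    # "passwords must be at least 8 characters and contain both letters
    # and digits." The policy function and cases are illustrative only.
    import unittest

    def password_acceptable(pw: str) -> bool:
        return (len(pw) >= 8
                and any(c.isalpha() for c in pw)
                and any(c.isdigit() for c in pw))

    class PasswordPolicyTest(unittest.TestCase):
        def test_rejects_short_passwords(self):
            self.assertFalse(password_acceptable("a1b2c3"))

        def test_rejects_single_class_passwords(self):
            self.assertFalse(password_acceptable("abcdefgh"))

        def test_accepts_conforming_passwords(self):
            self.assertTrue(password_acceptable("blue42horizon"))

    if __name__ == "__main__":
        unittest.main()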
Certification is a formal process for testing components or systems against a specified set of security requirements. Certification is normally performed by an independent reviewer, rather than one involved in building the system. Certification is more often cost-effective for complex or high-risk systems. Less formal security testing can be used for lower-risk systems. Certification can be performed at many stages of the system design and implementation process and can take place in a laboratory, operating environment, or both.

9.3.2 NIST Conformance Testing and Validation Suites

NIST produces validation suites and conformance testing to determine if a product (software, hardware, firmware) meets specified standards. These test suites are developed for specific standards and use many methods. Conformance to standards can be important for many reasons, including interoperability or strength of security provided. NIST publishes a list of validated products quarterly.

9.3.3 Use of Advanced or Trusted Development

In the development of both commercial off-the-shelf products and more customized systems, the use of advanced or trusted system architectures, development methodologies, or software engineering techniques can provide assurance. Examples include security design and development reviews, formal modeling, mathematical proofs, ISO 9000 quality techniques, or use of security architecture concepts, such as a trusted computing base (TCB) or reference monitor.

9.3.4 Use of Reliable Architectures

Some system architectures are intrinsically more reliable, such as systems that use fault tolerance, redundancy, shadowing, or redundant array of inexpensive disks (RAID) features. These examples are primarily associated with system availability.

9.3.5 Use of Reliable Security

One factor in reliable security is the concept of ease of safe use, which postulates that a system that is easier to secure will be more likely to be secure. Security features may be more likely to be used when the initial system defaults to the "most secure" option. In addition, a system's security may be deemed more reliable if it does not use very new technology that has not been tested in the "real" world (often called "bleeding-edge" technology). Along the same lines, a system that uses older, well-tested software may be less likely to contain bugs.

9.3.6 Evaluations

A product evaluation normally includes testing. Evaluations can be performed by many types of organizations, including government agencies, both domestic and foreign; independent organizations, such as trade and professional organizations; other vendors or commercial groups; or individual users or user consortia. Product reviews in trade literature are a form of evaluation, as are more formal reviews made against specific criteria. Important factors for using evaluations are the degree of independence of the evaluating group, whether the evaluation criteria reflect needed security features, the rigor of the testing, the testing environment, the age of the evaluation, the competence of the evaluating organization, and the limitations placed on the evaluations by the evaluating group (e.g., assumptions about the threat or operating environment).
9.3.7 Assurance Documentation

The ability to describe security requirements and how they were met can reflect the degree to which a system or product designer understands applicable security issues. Without a good understanding of the requirements, it is not likely that the designer will be able to meet them.

Assurance documentation can address the security either for a system or for specific components. System-level documentation should describe the system's security requirements and how they have been implemented, including interrelationships among applications, the operating system, or networks. System-level documentation addresses more than just the operating system, the security system, and applications; it describes the system as integrated and implemented in a particular environment. Component documentation will generally come with an off-the-shelf product, whereas the system designer or implementer will generally develop system documentation.

9.3.8 Accreditation of Product to Operate in Similar Situation

The accreditation of a product or system to operate in a similar situation can be used to provide some assurance. However, it is important to realize that an accreditation is environment- and system-specific. Since accreditation balances risk against advantages, the same product may be appropriately accredited for one environment but not for another, even by the same accrediting official.

9.3.9 Self-Certification

A vendor's, integrator's, or system developer's self-certification does not rely on an impartial or independent agent to perform a technical evaluation of a system to see how well it meets a stated security requirement. Even though it is not impartial, it can still provide assurance. The self-certifier's reputation is on the line, and a resulting certification report can be read to determine whether the security requirement was defined and whether a meaningful review was performed.

A hybrid certification is possible where the work is performed under the auspices or review of an independent organization by having that organization analyze the resulting report, perform spot checks, or perform other oversight. This method may be able to combine the lower cost and greater speed of a self-certification with the impartiality of an independent review. The review, however, may not be as thorough as independent evaluation or testing.

9.3.10 Warranties, Integrity Statements, and Liabilities

Warranties are another source of assurance. If a manufacturer, producer, system developer, or integrator is willing to correct errors within certain time frames or by the next release, this should give the system manager a sense of commitment to the product and of the product's quality. An integrity statement is a formal declaration or certification of the product. It can be backed up by a promise to (a) fix the item (warranty) or (b) pay for losses (liability) if the product does not conform to the integrity statement.

9.3.11 Manufacturer's Published Assertions

A manufacturer's or developer's published assertion or formal declaration provides a limited amount of assurance based exclusively on reputation.

9.3.12 Distribution Assurance

It is often important to know that software has arrived unmodified, especially if it is distributed electronically. In such cases, checkbits or digital signatures can provide high assurance that code has not been modified.
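A widely used modern form of such a check is to recompute a cryptographic hash of the received software and compare it with a value published by the distributor through a separate, trusted channel. In the sketch below, the file name and the published digest are hypothetical placeholders.

    # A hedged sketch of distribution assurance via a cryptographic hash:
    # recompute the SHA-256 digest of the received file and compare it
    # with the digest the distributor published out of band.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # published_digest = "..."  # obtained out of band from the distributor
    # if sha256_of("downloaded-package.tar") != published_digest:
    #     raise SystemExit("package may have been modified in transit")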
Anti-virus software can be used to check software that comes from sources with unknown reliability (such as a bulletin board).

9.4 Operational Assurance

Design and implementation assurance addresses the quality of security features built into systems. Operational assurance addresses whether the system's technical features are being bypassed or have vulnerabilities and whether required procedures are being followed. It does not address changes in the system's security requirements, which could be caused by changes to the system and its operating or threat environment. (These changes are addressed in Chapter 8.)

Security tends to degrade during the operational phase of the system life cycle. System users and operators discover new ways to intentionally or unintentionally bypass or subvert security (especially if there is a perception that bypassing security improves functionality). Users and administrators often think that nothing will happen to them or their system, so they shortcut security. Strict adherence to procedures is rare, procedures become outdated, and errors in the system's administration commonly occur.

Organizations use two basic methods to maintain operational assurance:

A system audit -- a one-time or periodic event to evaluate security. An audit can vary widely in scope: it may examine an entire system for the purpose of reaccreditation or it may investigate a single anomalous event.

Monitoring -- an ongoing activity that checks on the system, its users, or the environment.

In general, the more "real-time" an activity is, the more it falls into the category of monitoring. This distinction can create some unnecessary linguistic hairsplitting, especially concerning system-generated audit trails. Daily or weekly reviewing of the audit trail (for unauthorized access attempts) is generally monitoring, while an historical review of several months' worth of the trail (tracing the actions of a specific user) is probably an audit.

9.4.1 Audit Methods and Tools

An audit conducted to support operational assurance examines whether the system is meeting stated or implied security requirements, including system and organization policies. Some audits also examine whether security requirements are appropriate, but this is outside the scope of operational assurance. (See Chapter 8.) Less formal audits are often called security reviews.

Audits can be self-administered or independent (either internal or external). [Footnote: An example of an internal auditor in the federal government is the Inspector General. The General Accounting Office can perform the role of external auditor in the federal government. In the private sector, the corporate audit staff serves the role of internal auditor, while a public accounting firm would be an external auditor.] Both types can provide excellent information about technical, procedural, managerial, or other aspects of security. The essential difference between a self-audit and an independent audit is objectivity. Reviews done by system management staff, often called self-audits/assessments, have an inherent conflict of interest.
The system management staff may\nhave little incentive to say that the computer\nsystem was poorly designed or is sloppily\noperated. On the other hand, they may be\nmotivated by a strong desire to improve the security of the system. In addition, they are\nknowledgeable about the system and may be able to find hidden problems. \nThe independent auditor, by contrast, should have no professional stake in the system. \nIndependent audit may be performed by a professional audit staff in accordance with generally\naccepted auditing standards. \nThere are many methods and tools, some of which are described here, that can be used to audit a\nsystem. Several of them overlap. \n9.4.1.1 Automated Tools \nEven for small multiuser computer systems, it is a big job to manually review security features. \nAutomated tools make it feasible to review even large computer systems for a variety of security\nflaws.\nThere are two types of automated tools: (1) active tools, which find vulnerabilities by trying to\nexploit them, and (2) passive tests, which only examine the system and infer the existence of\nproblems from the state of the system. \nAutomated tools can be used to help find a variety of threats and vulnerabilities, such as improper\naccess controls or access control configurations, weak passwords, lack of integrity of the system\nsoftware, or not using all relevant software updates and patches. These tools are often very\nsuccessful at finding vulnerabilities and are sometimes used by hackers to break into systems. Not\ntaking advantage of these tools puts system administrators at a disadvantage. Many of the tools\n" }, { "page_number": 112, "text": "II. Management Controls\n100\nThe General Accounting Office provides standards\nand guidance for internal controls audits of federal\nagencies.\nWarning: Security Checklists that are passed (e.g.,\nwith a B+ or better score) are often used\nmistakenly as proof (instead of an indication) that\nsecurity is sufficient. Also, managers of systems\nwhich \"fail\" a checklist often focus too much\nattention on \"getting the points,\" rather than\nwhether the security measures makes sense in the\nparticular environment and are correctly\nimplemented. \nare simple to use; however, some programs (such as access-control auditing tools for large\nmainframe systems) require specialized skill to use and interpret.\n9.4.1.2 Internal Controls Audit \nAn auditor can review controls in place and\ndetermine whether they are effective. The\nauditor will often analyze both computer and\nnoncomputer-based controls. Techniques\nused include inquiry, observation, and testing (of both the controls themselves and the data). The\naudit can also detect illegal acts, errors, irregularities, or a lack of compliance with laws and\nregulations. Security checklists and penetration testing, discussed below, may be used.\n9.4.1.3 Security Checklists \nWithin the government, the computer security\nplan provides a checklist against which the\nsystem can be audited. This plan, discussed in\nChapter 8, outlines the major security\nconsiderations for a system, including\nmanagement, operational, and technical\nissues. One advantage of using a computer\nsecurity plan is that it reflects the unique\nsecurity environment of the system, rather\nthan a generic list of controls. Other checklists can be developed, which include national or\norganizational security policies and practices (often referred to as baselines). Lists of \"generally\naccepted security practices\" (GSSPs) can also be used. 
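In practice, such checklist reviews are often scripted. The fragment below is only a sketch; the checklist entries and the configuration snapshot are hypothetical stand-ins for whatever baseline and system settings an organization actually uses:

# Hypothetical baseline: each entry names a setting and its required value.
CHECKLIST = {
    "min_password_length": 8,
    "audit_logging_enabled": True,
    "guest_account_disabled": True,
}

# Hypothetical snapshot of the system's actual configuration.
system_settings = {
    "min_password_length": 6,
    "audit_logging_enabled": True,
    "guest_account_disabled": False,
}

def review(checklist, settings):
    """Report every deviation from the checklist; a reviewer decides whether
    each deviation is a genuine finding or an accepted exception."""
    return [(item, expected, settings.get(item))
            for item, expected in checklist.items()
            if settings.get(item) != expected]

for item, expected, actual in review(CHECKLIST, system_settings):
    print(f"{item}: expected {expected}, found {actual}")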
Care needs to be taken so that deviations\nfrom the list are not automatically considered wrong, since they may be appropriate for the\nsystem's particular environment or technical constraints.\nChecklists can also be used to verify that changes to the system have been reviewed from a\nsecurity point of view. A common audit examines the system's configuration to see if major\nchanges (such as connecting to the Internet) have occurred that have not yet been analyzed from a\nsecurity point of view.\n9.4.1.4 Penetration Testing \nPenetration testing can use many methods to attempt a system break-in. In addition to using\nactive automated tools as described above, penetration testing can be done \"manually.\" The most\nuseful type of penetration testing is to use methods that might really be used against the system. \nFor hosts on the Internet, this would certainly include automated tools. For many systems, lax\n" }, { "page_number": 113, "text": "9. Assurance\n While penetration testing is a very powerful technique, it should preferably be conducted with the\n75\nknowledge and consent of system management. Unknown penetration attempts can cause a lot of stress among\noperations personnel, and may create unnecessary disturbances.\n101\nprocedures or a lack of internal controls on applications are common vulnerabilities that\npenetration testing can target. Another method is \"social engineering,\" which involves getting\nusers or administrators to divulge information about systems, including their passwords.75\n9.4.2 Monitoring Methods and Tools\nSecurity monitoring is an ongoing activity that looks for vulnerabilities and security problems. \nMany of the methods are similar to those used for audits, but are done more regularly or, for\nsome automated tools, in real time.\n9.4.2.1 Review of System Logs \nAs discussed in Chapter 8, a periodic review of system-generated logs can detect security\nproblems, including attempts to exceed access authority or gain system access during unusual\nhours.\n9.4.2.2 Automated Tools \nSeveral types of automated tools monitor a system for security problems. Some examples follow:\nVirus scanners are a popular means of checking for virus infections. These programs test for\nthe presence of viruses in executable program files. \nChecksumming presumes that program files should not change between updates. They work\nby generating a mathematical value based on the contents of a particular file. When the\nintegrity of the file is to be verified, the checksum is generated on the current file and\ncompared with the previously generated value. If the two values are equal, the integrity of\nthe file is verified. Program checksumming can detect viruses, Trojan horses, accidental\nchanges to files caused by hardware failures, and other changes to files. However, they may\nbe subject to covert replacement by a system intruder. Digital signatures can also be used.\nPassword crackers check passwords against a dictionary (either a \"regular\" dictionary or a\nspecialized one with easy-to-guess passwords) and also check if passwords are common\npermutations of the user ID. Examples of special dictionary entries could be the names of\nregional sports teams and stars; common permutations could be the user ID spelled\nbackwards.\n" }, { "page_number": 114, "text": "II. Management Controls\n102\nIntegrity verification programs can be used by such applications to look for evidence of data\ntampering, errors, and omissions. 
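As a toy illustration (the record layout and the dollar threshold are invented for the example), such a program might apply simple range and sequence checks to a batch of transaction records:

# Hypothetical transaction records: (sequence number, amount in dollars).
transactions = [(101, 250.00), (102, 975.50), (104, -40.00), (105, 1250000.00)]

MAX_REASONABLE_AMOUNT = 50000.00  # invented threshold for the example

def integrity_check(records):
    """Flag out-of-range amounts and breaks in the sequence numbering."""
    findings = []
    previous = None
    for seq, amount in records:
        if not (0 < amount <= MAX_REASONABLE_AMOUNT):
            findings.append(f"record {seq}: amount {amount} fails reasonableness check")
        if previous is not None and seq != previous + 1:
            findings.append(f"record {seq}: sequence break after record {previous}")
        previous = seq
    return findings

for finding in integrity_check(transactions):
    print(finding)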
Techniques include consistency and reasonableness checks\nand validation during data entry and processing. These techniques can check data elements,\nas input or as processed, against expected values or ranges of values; analyze transactions for\nproper flow, sequencing, and authorization; or examine data elements for expected\nrelationships. These programs comprise a very important set of processes because they can\nbe used to convince people that, if they do what they should not do, accidentally or\nintentionally, they will be caught. Many of these programs rely upon logging of individual\nuser activities.\nIntrusion detectors analyze the system audit trail, especially log-ons, connections, operating\nsystem calls, and various command parameters, for activity that could represent unauthorized\nactivity. Intrusion detection is covered in Chapters 12 and 18.\nSystem performance monitoring analyzes system performance logs in real time to look for\navailability problems, including active attacks (such as the 1988 Internet worm) and system\nand network slowdowns and crashes. \n9.4.2.3 Configuration Management \nFrom a security point of view, configuration management provides assurance that the system in\noperation is the correct version (configuration) of the system and that any changes to be made are\nreviewed for security implications. Configuration management can be used to help ensure that\nchanges take place in an identifiable and controlled environment and that they do not\nunintentionally harm any of the system's properties, including its security. Some organizations,\nparticularly those with very large systems (such as the federal government), use a configuration\ncontrol board for configuration management. When such a board exists, it is helpful to have a\ncomputer security expert participate. In any case, it is useful to have computer security officers\nparticipate in system management decision making.\nChanges to the system can have security implications because they may introduce or remove\nvulnerabilities and because significant changes may require updating the contingency plan, risk\nanalysis, or accreditation.\n9.4.2.4 Trade Literature/Publications/Electronic News \nIn addition to monitoring the system, it is useful to monitor external sources for information. \nSuch sources as trade literature, both printed and electronic, have information about security\nvulnerabilities, patches, and other areas that impact security. The Forum of Incident Response\nTeams (FIRST) has an electronic mailing list that receives information on threats, vulnerabilities,\n" }, { "page_number": 115, "text": "9. Assurance\n For information on FIRST, send e-mail to FIRST-SEC@FIRST.ORG.\n76\n103\nand patches. \n76\n9.5 Interdependencies\nAssurance is an issue for every control and safeguard discussed in this handbook. Are user ID\nand access privileges kept up to date? Has the contingency plan been tested? Can the audit trail\nbe tampered with? One important point to be reemphasized here is that assurance is not only for\ntechnical controls, but for operational controls as well. Although the chapter focused on\ninformation systems assurance, it is also important to have assurance that management controls\nare working well. Is the security program effective? Are policies understood and followed? As\nnoted in the introduction to this chapter, the need for assurance is more widespread than people\noften realize.\nLife Cycle. Assurance is closely linked to the planning for security in the system life cycle. 
Systems can be designed to facilitate various kinds of testing against specified security requirements. By planning for such testing early in the process, costs can be reduced; in some cases, without proper planning, some kinds of assurance cannot be obtained at all.
9.6 Cost Considerations
There are many methods of obtaining assurance that security features work as anticipated. Since assurance methods tend to be qualitative rather than quantitative, they will need to be evaluated with judgment. Assurance can also be quite expensive, especially if extensive testing is done. It is useful to weigh the amount of assurance received against its cost in order to make a best-value decision. In general, personnel costs drive up the cost of assurance. Automated tools are generally limited to addressing specific problems, but they tend to be less expensive.
References
Borsook, P. "Seeking Security." Byte. 18(6), 1993. pp. 119-128.
Dykman, Charlene A., ed., and Charles K. Davis, assoc. ed. Control Objectives -- Controls in an Information Systems Environment: Objectives, Guidelines, and Audit Procedures. (fourth edition). Carol Stream, IL: The EDP Auditors Foundation, Inc., April 1992.
Farmer, Dan, and Wietse Venema. "Improving the Security of Your Site by Breaking Into It." Available from FTP.WIN.TUE.NL. 1993.
Guttman, Barbara. Computer Security Considerations in Federal Procurements: A Guide for Procurement Initiators, Contracting Officers, and Computer Security Officials. Special Publication 800-4. Gaithersburg, MD: National Institute of Standards and Technology, March 1992.
Howe, D. "Information System Security Engineering: Cornerstone to the Future." Proceedings of the 15th National Computer Security Conference, Vol. 1. (Baltimore, MD) Gaithersburg, MD: National Institute of Standards and Technology, 1992. pp. 244-251.
Levine, M. "Audit Serve Security Evaluation Criteria." Audit Vision. 2(2), 1992. pp. 29-40.
National Bureau of Standards. Guideline for Computer Security Certification and Accreditation. Federal Information Processing Standard Publication 102. September 1983.
National Bureau of Standards. Guideline for Lifecycle Validation, Verification, and Testing of Computer Software. Federal Information Processing Standard Publication 101. June 1983.
National Bureau of Standards. Guideline for Software Verification and Validation Plans. Federal Information Processing Standard Publication 132. November 1987.
Neugent, W., J. Gilligan, L. Hoffman, and Z. Ruthberg. Technology Assessment: Methods for Measuring the Level of Computer Security. Special Publication 500-133. Gaithersburg, MD: National Bureau of Standards, 1985.
Peng, Wendy W., and Dolores R. Wallace. Software Error Analysis. Special Publication 500-209. Gaithersburg, MD: National Institute of Standards and Technology, 1993.
Peterson, P. "Infosecurity and Shrinking Media." ISSA Access. 5(2), 1992. pp. 19-22.
Pfleeger, C., S. Pfleeger, and M. Theofanos. "A Methodology for Penetration Testing." Computers and Security. 8(7), 1989. pp. 613-620.
Polk, W. Timothy, and Lawrence Bassham. A Guide to the Selection of Anti-Virus Tools and Techniques. Special Publication 800-5. Gaithersburg, MD: National Institute of Standards and Technology, December 1992.
Polk, W. Timothy. Automated Tools for Testing Computer System Vulnerability. Special Publication 800-6.
Gaithersburg, MD: National Institute of Standards and Technology, December 1992.
President's Council on Integrity and Efficiency. Review of General Controls in Federal Computer Systems. Washington, DC: President's Council on Integrity and Efficiency, October 1988.
President's Council on Management Improvement and the President's Council on Integrity and Efficiency. Model Framework for Management Control Over Automated Information Systems. Washington, DC: President's Council on Management Improvement, January 1988.
Ruthberg, Zella G., Bonnie T. Fisher, and John W. Lainhart IV. System Development Auditor. Oxford, England: Elsevier Advanced Technology, 1991.
Ruthberg, Zella, et al. Guide to Auditing for Controls and Security: A System Development Life Cycle Approach. Special Publication 500-153. Gaithersburg, MD: National Bureau of Standards, April 1988.
Strategic Defense Initiative Organization. Trusted Software Methodology. Vols. I and II. SDI-S-SD-91-000007. June 17, 1992.
Wallace, Dolores, and J.C. Cherniavsky. Guide to Software Acceptance. Special Publication 500-180. Gaithersburg, MD: National Institute of Standards and Technology, April 1990.
Wallace, Dolores, and Roger Fujii. Software Verification and Validation: Its Role in Computer Assurance and Its Relationship with Software Product Management Standards. Special Publication 500-165. Gaithersburg, MD: National Institute of Standards and Technology, September 1989.
Wallace, Dolores R., Laura M. Ippolito, and D. Richard Kuhn. High Integrity Software Standards and Guidelines. Special Publication 500-204. Gaithersburg, MD: National Institute of Standards and Technology, 1992.
Wood, C., et al. Computer Security: A Comprehensive Controls Checklist. New York, NY: John Wiley & Sons, 1987.
III. OPERATIONAL CONTROLS
Chapter 10
PERSONNEL/USER ISSUES
Many important issues in computer security involve human users, designers, implementors, and managers. A broad range of security issues relates to how these individuals interact with computers and the access and authorities they need to do their jobs. No computer system can be secured without properly addressing these security issues. (A distinction is made here between users and personnel, since some users, such as contractors and members of the public, may not be considered personnel, i.e., employees.)
This chapter examines issues concerning the staffing of positions that interact with computer systems; the administration of users on a system, including considerations for terminating employee access; and special considerations that may arise when contractors or the public have access to systems. Personnel issues are closely linked to logical access controls, discussed in Chapter 17.
10.1 Staffing
The staffing process generally involves at least four steps and can apply equally to general users as well as to application managers, system management personnel, and security personnel. These four steps are: (1) defining the job, normally involving the development of a position description; (2) determining the sensitivity of the position; (3) filling the position, which involves screening applicants and selecting an individual; and (4) training.
10.1.1 Position Definition
Early in the process of defining a position, security issues should be identified and dealt with. Once a position has been broadly defined, the responsible supervisor should determine the type of computer access needed for the position. There are two general principles to apply when granting access: separation of duties and least privilege.
Separation of duties refers to dividing roles and responsibilities so that a single individual cannot subvert a critical process. For example, in financial systems, no single individual should normally be given authority to issue checks. Rather, one person initiates a request for a payment and another authorizes that same payment. In effect, checks and balances need to be designed into both the process and the specific, individual positions of the personnel who will implement the process. Ensuring that such duties are well defined is the responsibility of management.
Least privilege refers to the security objective of granting users only those accesses they need to perform their official duties. Data entry clerks, for example, may not have any need to run analysis reports on their database. Least privilege does not mean that all users will have extremely little functional access; some employees will have significant access if it is required for their position. However, applying this principle may limit the damage resulting from accidents, errors, or unauthorized use of system resources. It is important to make certain that the implementation of least privilege does not interfere with the ability to have personnel substitute for each other without undue delay. Without careful planning, access control can interfere with contingency plans.
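The check-issuing example lends itself to a mechanical test. In the sketch below, the position names, permissions, and the pair of duties to be separated are all hypothetical; the point is only that a reviewer (or a script) can verify that no single position combines duties that policy requires be held by different people:

# Hypothetical position-to-permission assignments.
positions = {
    "payment_clerk":    {"initiate_payment"},
    "payment_approver": {"approve_payment"},
    "office_manager":   {"initiate_payment", "approve_payment"},
}

# Pairs of duties that policy says must never be held by one position.
SEPARATED_DUTIES = [("initiate_payment", "approve_payment")]

def separation_violations(assignments):
    """Return every position whose permissions combine duties that must be separated."""
    return [(position, a, b)
            for position, permissions in assignments.items()
            for a, b in SEPARATED_DUTIES
            if a in permissions and b in permissions]

print(separation_violations(positions))
# [('office_manager', 'initiate_payment', 'approve_payment')]

The same assignment table also supports least-privilege review: any permission a position holds but does not use is a candidate for removal.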
10.1.2 Determining Position Sensitivity
Knowledge of the duties and access levels that a particular position will require is necessary for determining the sensitivity of the position. The responsible management official should correctly identify position sensitivity levels so that appropriate, cost-effective screening can be completed.
Various levels of sensitivity are assigned to positions in the federal government. Determining the appropriate level is based upon such factors as the type and degree of harm (e.g., disclosure of private information, interruption of critical processing, computer fraud) the individual can cause through misuse of the computer system, as well as more traditional factors, such as access to classified information and fiduciary responsibilities. Specific agency guidance should be followed on this matter.
It is important to select the appropriate position sensitivity, since controls in excess of the sensitivity of the position waste resources, while too little control may cause unacceptable risks. In general, it is more effective to use separation of duties and least privilege to limit the sensitivity of the position, rather than relying on screening to reduce the risk to the organization.
10.1.3 Filling the Position -- Screening and Selecting
Once a position's sensitivity has been determined, the position is ready to be staffed. In the federal government, this typically includes publishing a formal vacancy announcement and identifying which applicants meet the position requirements. More sensitive positions typically require preemployment background screening; screening after employment has commenced (post-entry-on-duty) may suffice for less sensitive positions.
Background screening helps determine whether a particular individual is suitable for a given position. For example, in positions with high-level fiduciary responsibility, the screening process will attempt to ascertain the person's trustworthiness and appropriateness for a particular position. In the federal government, the screening process is formalized through a series of background checks conducted through a central investigative office within the organization or through another organization (e.g., the Office of Personnel Management).
Within the federal government, the most basic screening technique involves a check for a criminal history, checking FBI fingerprint records, and other federal indices. (Separate and unique screening procedures are not established for each position; rather, positions are categorized by general sensitivity and are assigned a corresponding level of background investigation or other checks.) More extensive background checks examine other factors, such as a person's work and educational history, a personal interview, history of possession or use of illegal substances, and interviews with current and former colleagues, neighbors, and friends. The exact type of screening that takes place depends upon the sensitivity of the position and applicable agency implementing regulations. Screening is not conducted by the prospective employee's manager; rather, agency security and personnel officers should be consulted for agency-specific guidance.
Outside of the federal government, employee screening is accomplished in many ways. Policies vary considerably among organizations due to the sensitivity of examining an individual's background and qualifications. Organizational policies and procedures normally try to balance fears of invasiveness and slander against the need to develop confidence in the integrity of employees. One technique may be to place the individual in a less sensitive position initially.
For both the federal government and the private sector, finding something compromising in a person's background does not necessarily mean the person is unsuitable for a particular job. A determination should be made based on the type of job, the type of finding or incident, and other relevant factors. In the federal government, this process is referred to as adjudication.
10.1.4 Employee Training and Awareness
Even after a candidate has been hired, the staffing process cannot yet be considered complete -- employees still have to be trained to do their job, which includes computer security responsibilities and duties. As discussed in Chapter 13, such security training can be very cost-effective in promoting security.
Some computer security experts argue that employees must receive initial computer security training before they are granted any access to computer systems. Others argue that this must be a risk-based decision, perhaps granting only restricted access (or, perhaps, only access to their PC) until the required training is completed. Both approaches recognize that adequately trained employees are crucial to the effective functioning of computer systems and applications.
Organizations may provide introductory training prior to granting any access, with more extensive training to follow. In addition, although training of new users is critical, it is important to recognize that security training and awareness activities should be ongoing during the time an individual is a system user. (See Chapter 13 for a more thorough discussion.)
Figure 10.1. The Staffing Process: Position Definition; Determine Position Sensitivity; Fill Position; Training and Awareness.
10.2 User Administration
Effective administration of users' computer access is essential to maintaining system security. User account management focuses on identification, authentication, and access authorizations. This is augmented by the process of auditing and otherwise periodically verifying the legitimacy of current accounts and access authorizations. Finally, there are considerations involved in the timely modification or removal of access and associated issues for employees who are reassigned, promoted, or terminated, or who retire.
10.2.1 User Account Management
User account management involves (1) the process of requesting, establishing, issuing, and closing user accounts; (2) tracking users and their respective access authorizations; and (3) managing these functions.
User account management typically begins with a request from the user's supervisor to the system manager for a system account. If a user is to have access to a particular application, this request may be sent through the application manager to the system manager. This ensures that the systems office receives formal approval from the application manager for the employee to be given access. The request will normally state the level of access to be granted, perhaps by function or by specifying a particular user profile. (Often, when more than one employee is doing the same job, a "profile" of permitted authorizations is created.)
Systems operations staff will normally then use the account request to create an account for the new user. The access levels of the account will be consistent with those requested by the supervisor. This account will normally be assigned selected access authorizations. These are sometimes built directly into applications, and other times rely upon the operating system. "Add-on" access applications are also used. These access levels and authorizations are often tied to specific access levels within an application, for example:
Example of Access Levels Within an Application
Level 1 -- Create Records
Level 2 -- Edit Group A records
Level 3 -- Edit Group B records
Level 4 -- Edit all records
Next, employees will be given their account information, including the account identifier (e.g., user ID) and a means of authentication (e.g., password or smart card/PIN). One issue that may arise at this stage is whether the user ID is to be tied to the particular position an employee holds (e.g., ACC5 for an accountant) or to the individual employee (e.g., BSMITH for Brenda Smith). Tying user IDs to positions may simplify administrative overhead in some cases; however, it may make auditing more difficult as one tries to trace the actions of a particular individual. It is normally more advantageous to tie the user ID to the individual employee.
However, if the user ID is created and tied to a position, procedures will have to be established to change it if employees switch jobs or are otherwise reassigned.
When employees are given their accounts, it is often convenient to provide initial or refresher training and awareness on computer security issues. Users should be asked to review a set of rules and regulations for system access. To indicate their understanding of these rules, many organizations require employees to sign an "acknowledgment statement," which may also state causes for dismissal or prosecution under the Computer Fraud and Abuse Act and other applicable state and local laws. (Whenever users are asked to sign a document, appropriate review by organizational legal counsel and, if applicable, by employee bargaining units should be accomplished.)
Sample User Account and Password Acknowledgment Form: "I hereby acknowledge personal receipt of the system password(s) associated with the user IDs listed below. I understand that I am responsible for protecting the password(s), will comply with all applicable system security standards, and will not divulge my password(s) to any person. I further understand that I must report to the Information Systems Security Officer any problem I encounter in the use of the password(s) or when I have reason to believe that the private nature of my password(s) has been compromised."
When user accounts are no longer required, the supervisor should inform the application manager and system management office so accounts can be removed in a timely manner. One useful secondary check is to work with the local organization's personnel officer to establish a procedure for routine notification of employee departures to the systems office. Further issues are discussed in the "Termination" section of this chapter.
It is essential to realize that access and authorization administration is a continuing process. New user accounts are added while others are deleted. Permissions change: sometimes permanently, sometimes temporarily. New applications are added, upgraded, and removed. Tracking this information to keep it up to date is not easy, but it is necessary to allow users access to only those functions necessary to accomplish their assigned responsibilities, thereby helping to maintain the principle of least privilege. In managing these accounts, there is a need to balance timeliness of service and record keeping. While sound record keeping practices are necessary, delays in processing requests (e.g., change requests) may lead to requests for more access than is really necessary -- just to avoid delays -- should such access ever be required.
Managing this process of user access is also one that, particularly for larger systems, is often decentralized. Regional offices may be granted the authority to create accounts and change user access authorizations, or to submit forms requesting that the centralized access control function make the necessary changes. Approval of these changes is important -- it may require the approval of the file owner and of the supervisor of the employee whose access is being changed.
10.2.2 Audit and Management Reviews
From time to time, it is necessary to review user account management on a system.
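Parts of such a review are straightforward to automate. In the sketch below, the two tables stand in for whatever account and authorization records an organization can actually export; the user IDs and access levels are hypothetical:

# Hypothetical exports: accounts present on the system, and the access
# levels management has currently approved.
system_accounts = {"bsmith": "edit_group_a", "jdoe": "edit_all", "tlee": "create"}
approved_access = {"bsmith": "edit_group_a", "jdoe": "edit_group_b"}

def review_accounts(accounts, approved):
    """Flag accounts with no current authorization, and access levels that
    differ from what management approved."""
    findings = []
    for user, level in accounts.items():
        if user not in approved:
            findings.append(f"{user}: account exists but has no current authorization")
        elif level != approved[user]:
            findings.append(f"{user}: holds '{level}' but is approved for '{approved[user]}'")
    return findings

for finding in review_accounts(system_accounts, approved_access):
    print(finding)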
Within the\narea of user access issues, such reviews may examine the levels of access each individual has,\nconformity with the concept of least privilege, whether all accounts are still active, whether\nmanagement authorizations are up-to-date, whether required training has been completed, and so\nforth. \n" }, { "page_number": 127, "text": "10. Personnel / User Issues\n Note that this is not an either/or distinction.\n80\n The term auditing is used here in a broad sense to refer to the review and analysis of past events.\n81\n115\nThese reviews can be conducted on at least two levels: (1) on an application-by-application\n80\nbasis or (2) on a systemwide basis. Both kinds of reviews can be conducted by, among others, in-\nhouse systems personnel (a self-audit), the organization's internal audit staff, or external auditors. \nFor example, a good practice is for application managers (and data owners, if different) to review\nall access levels of all application users every month and sign a formal access approval list,\nwhich will provide a written record of the approvals. While it may initially appear that such\nreviews should be conducted by systems personnel, they usually are not fully effective. System\npersonnel can verify that users only have those accesses that their managers have specified. \nHowever because access requirements may change over time, it is important to involve the\napplication manager, who is often the only individual in a position to know current access\nrequirements.\nOutside audit organizations (e.g., the Inspector General [IG] or the General Accounting Office)\nmay also conduct audits. For example, the IG may direct a more extensive review of permissions. \nThis may involve discussing the need for particular access levels for specific individuals or the\nnumber of users with sensitive access. For example, how many employees should really have\nauthorization to the check-printing function? (Auditors will also examine non-computer access by\nreviewing, for example, who should have physical access to the check printer or blank-check\nstock.) \n10.2.3 Detecting Unauthorized/Illegal Activities\nSeveral mechanisms are used besides auditing and analysis of audit trails to detect unauthorized\n81\nand illegal acts. (See Chapters 9 and 18.) For example, fraudulent activities may require the\nregular physical presence of the perpetrator(s). In such cases, the fraud may be detected during\nthe employee's absence. Mandatory vacations for critical systems and applications personnel can\nhelp detect such activity (however, this is not a guarantee, for example, if problems are saved for\nthe employees to handle upon their return). It is useful to avoid creating an excessive dependence\nupon any single individual, since the system will have to function during periods of absence. \nParticularly within the government, periodic rescreening of personnel is used to identify possible\nindications of illegal activity (e.g., living a lifestyle in excess of known income level).\n10.2.4 Temporary Assignments and In-house Transfers\nOne significant aspect of managing a system involves keeping user access authorizations up to\ndate. Access authorizations are typically changed under two types of circumstances: (1) change\nin job role, either temporarily (e.g., while covering for an employee on sick leave) or permanently\n" }, { "page_number": 128, "text": "III. 
Operational Controls
(e.g., after an in-house transfer) and (2) termination, discussed in the following section.
Users often are required to perform duties outside their normal scope during the absence of others. This requires additional access authorizations. Although necessary, such extra access authorizations should be granted sparingly and monitored carefully, consistent with the need to maintain separation of duties for internal control purposes. Also, they should be removed promptly when no longer required.
Permanent changes are usually necessary when employees change positions within an organization. In this case, the process of granting account authorizations (described in Section 10.2.1) will occur again. At this time, however, it is also important that access authorizations of the prior position be removed. Many instances of "authorization creep" have occurred, with employees continuing to maintain access rights for previously held positions within an organization. This practice is inconsistent with the principle of least privilege.
10.2.5 Termination
Termination of a user's system access generally can be characterized as either "friendly" or "unfriendly." Friendly termination may occur when an employee is voluntarily transferred, resigns to accept a better position, or retires. Unfriendly termination may include situations when the user is being fired for cause, "RIFed" (RIF is government shorthand for "reduction in force"), or involuntarily transferred. Fortunately, the former situation is more common, but security issues have to be addressed in both situations.
10.2.5.1 Friendly Termination
Friendly termination refers to the removal of an employee from the organization when there is no reason to believe that the termination is other than mutually acceptable. Since terminations can be expected regularly, this is usually accomplished by implementing a standard set of procedures for outgoing or transferring employees. These are part of the standard employee "out-processing," and are put in place, for example, to ensure that system accounts are removed in a timely manner. Out-processing often involves a sign-out form initialed by each functional manager with an interest in the separation. This normally includes the group(s) managing access controls, the control of keys, the briefing on the responsibilities for confidentiality and privacy, the library, the property clerk, and several other functions not necessarily related to information security.
Other issues should be examined as well. The continued availability of data, for example, must often be assured. In both the manual and the electronic worlds, this may involve documenting procedures or filing schemes, such as how documents are stored on the hard disk and how they are backed up. Employees should be instructed whether or not to "clean up" their PC before leaving. If cryptography is used to protect data, the availability of cryptographic keys to management personnel must be ensured. Authentication tokens must be collected.
Confidentiality of data can also be an issue. For example, do employees know what information they are allowed to share with their immediate organizational colleagues? Does this differ from the information they may share with the public?
These and other organizational-specific issues\nshould be addressed throughout an organization to ensure continued access to data and to provide\ncontinued confidentiality and integrity during personnel transitions. (Many of these issues should\nbe addressed on an ongoing basis, not just during personnel transitions.) The training and\nawareness program normally should address such issues.\n10.2.5.2 Unfriendly Termination \nUnfriendly termination involves the removal of an employee under involuntary or adverse\nconditions. This may include termination for cause, RIF, involuntary transfer, resignation for\n\"personality conflicts,\" and situations with pending grievances. The tension in such terminations\nmay multiply and complicate security issues. Additionally, all of the issues involved in friendly\nterminations are still present, but addressing them may be considerably more difficult.\nThe greatest threat from unfriendly terminations is likely to come from those personnel who are\ncapable of changing code or modifying the system or applications. For example, systems\npersonnel are ideally positioned to wreak considerable havoc on systems operations. Without\nappropriate safeguards, personnel with such access can place logic bombs (e.g., a hidden program\nto erase a disk) in code that will not even execute until after the employee's departure. Backup\ncopies can be destroyed. There are even examples where code has been \"held hostage.\" But\nother employees, such as general users, can also cause damage. Errors can be input purposefully,\ndocumentation can be misfiled, and other \"random\" errors can be made. Correcting these\nsituations can be extremely resource intensive. \nGiven the potential for adverse consequences, security specialists routinely recommend that\nsystem access be terminated as quickly as possible in such situations. If employees are to be fired,\nsystem access should be removed at the same time (or just before) the employees are notified of\ntheir dismissal. When an employee notifies an organization of a resignation and it can be\nreasonably expected that it is on unfriendly terms, system access should be immediately\nterminated. During the \"notice\" period, it may be necessary to assign the individual to a restricted\narea and function. This may be particularly true for employees capable of changing programs or\nmodifying the system or applications. In other cases, physical removal from their offices (and, of\ncourse, logical removal, when logical access controls exist) may suffice. \n" }, { "page_number": 130, "text": "III. Operational Controls\n118\nOMB Circular A-130, Appendix III \"Security of\nFederal Automated Information\" and NIST CSL\nBulletin \"Security Issues in Public Access\nSystems\" both recommend segregating information\nmade directly accessible to the public from official\nrecords.\n10.3\nContractor Access Considerations\nMany federal agencies as well as private organizations use contractors and consultants to assist\nwith computer processing. Contractors are often used for shorter periods of time than regular\nemployees. This factor may change the cost-effectiveness of conducting screening. The often\nhigher turnover among contractor personnel generates additional costs for security programs in\nterms of user administration.\n10.4\nPublic Access Considerations\nMany federal agencies have begun to design, develop, and implement public access systems for\nelectronic dissemination of information to the public. 
Some systems provide electronic interaction\nby allowing the public to send information to the government (e.g., electronic tax filing) as well as\nto receive it. When systems are made available for access by the public (or a large or significant\nsubset thereof), additional security issues arise due to: (1) increased threats against public access\nsystems and (2) the difficulty of security administration. \nWhile many computer systems have been\nvictims of hacker attacks, public access\nsystems are well known and have published\nphone numbers and network access IDs. In\naddition, a successful attack could result in a\nlot of publicity. For these reasons, public\naccess systems are subject to a greater threat\nfrom hacker attacks on the confidentiality,\navailability, and integrity of information\nprocessed by a system. In general, it is safe to say that when a system is made available for public\naccess, the risk to the system increases and often the constraints on its use are tightened.\nBesides increased risk of hackers, public access systems can be subject to insider malice. For\nexample, an unscrupulous user, such as a disgruntled employee, may try to introduce errors into\ndata files intended for distribution in order to embarrass or discredit the organization. Attacks on\npublic access systems could have a substantial impact on the organization's reputation and the\nlevel of public confidence due to the high visibility of public access systems. Other security\nproblems may arise from unintentional actions by untrained users. \nIn systems without public access, there are procedures for enrolling users that often involve some\nuser training and frequently require the signing of forms acknowledging user responsibilities. In\naddition, user profiles can be created and sophisticated audit mechanisms can be developed to\ndetect unusual activity by a user. In public access systems, users are often anonymous. This can\ncomplicate system security administration.\n" }, { "page_number": 131, "text": "10. Personnel / User Issues\n When analyzing the costs of screening, it is important to realize that screening is often conducted to meet\n83\nrequirements wholly unrelated to computer security.\n119\nIn most systems without public access, users are typically a mix of known employees or\ncontractors. In this case, imperfectly implemented access control schemes may be tolerated. \nHowever, when opening up a system to public access, additional precautions may be necessary\nbecause of the increased threats. \n10.5\nInterdependencies\nUser issues are tied to topics throughout this handbook. \nTraining and Awareness discussed in Chapter 13 is a critical part of addressing the user issues of\ncomputer security.\nIdentification and Authentication and Access Controls in a computer system can only prevent\npeople from doing what the computer is instructed they are not allowed to do, as stipulated by\nPolicy. The recognition by computer security experts that much more harm comes from people\ndoing what they are allowed to do, but should not do, points to the importance of considering\nuser issues in the computer security picture, and the importance of Auditing. \nPolicy, particularly its compliance component, is closely linked to personnel issues. 
A deterrent\neffect arises among users when they are aware that their misconduct, intentional or unintentional,\nwill be detected.\nThese controls also depend on manager's (1) selecting the right type and level of access for their\nemployees and (2) informing system managers of which employees need accounts and what type\nand level of access they require, and (3) promptly informing system managers of changes to\naccess requirements. Otherwise, accounts and accesses can be granted to or maintained for\npeople who should not have them. \n10.6\nCost Considerations\nThere are many security costs under the category of user issues. Among these are:\n \nScreening -- Costs of initial background screening and periodic updates, as appropriate.83\nTraining and Awareness -- Costs of training needs assessments, training materials, course fees,\nand so forth, as discussed separately in Chapter 13.\nUser Administration -- Costs of managing identification and authentication which, particularly for\n" }, { "page_number": 132, "text": "III. Operational Controls\n120\nlarge distributed systems, may be rather significant.\nAccess Administration -- Particularly beyond the initial account set-up, are ongoing costs of\nmaintaining user accesses currently and completely.\nAuditing -- Although such costs can be reduced somewhat when using automated tools,\nconsistent, resource-intensive human review is still often necessary to detect and resolve security\nanomalies. \nReferences\nFites, P., and M. Kratz. Information Systems Security: A Practitioner's Reference. New York,\nNY: Van Nostrand Reinhold, 1993. (See especially Chapter 6.)\nNational Institute of Standards and Technology. \"Security Issues in Public Access Systems.\"\nComputer Systems Laboratory Bulletin. May 1993.\nNorth, S. \"To Catch a `Crimoid.'\" Beyond Computing. 1(1), 1992. pp. 55-56.\nPankau, E. \"The Consummate Investigator.\" Security Management. 37(2), 1993. pp. 37-41.\n \nSchou, C., W. Machonachy, F. Lynn McNulty, and A. Chantker. \"Information Security\nProfessionalism for the 1990s.\" Computer Security Journal. 9(1), 1992. pp. 27-38.\nWagner, M. \"Possibilities Are Endless, and Frightening.\" Open Systems Today. November 8\n(136), 1993. pp. 16-17.\nWood, C. \"Be Prepared Before You Fire.\" Infosecurity News. 5(2), 1994. pp. 51-54.\n \nWood, C. \"Duress, Terminations and Information Security.\" Computers and Security. 12(6),\n1993. pp. 527-535.\n" }, { "page_number": 133, "text": " There is no distinct dividing line between disasters and other contingencies.\n84\n Other names include disaster recovery, business continuity, continuity of operations, or business resumption\n85\nplanning.\n Some organizations include incident handling as a subset of contingency planning. The relationship is\n86\nfurther discussed in Chapter 12, Incident Handling.\n Some organizations and methodologies may use a different order, nomenclature, number, or combination of\n87\nsteps. The specific steps can be modified, as long as the basic functions are addressed.\n121\nContingency planning directly supports an\norganization's goal of continued operations. \nOrganizations practice contingency planning\nbecause it makes good business sense. \nChapter 11\nPREPARING FOR CONTINGENCIES AND DISASTERS\nA computer security contingency is an event with the potential to disrupt computer operations,\nthereby disrupting critical mission and business functions. Such an event could be a power\noutage, hardware failure, fire, or storm. 
If the event is very destructive, it is often called a\ndisaster. \n84\nTo avert potential contingencies and disasters\nor minimize the damage they cause \norganizations can take steps early to control\nthe event. Generally called contingency\nplanning, this activity is closely related to\n85\nincident handling, which primarily addresses\nmalicious technical threats such as hackers\nand viruses.86\nContingency planning involves more than planning for a move offsite after a disaster destroys a\ndata center. It also addresses how to keep an organization's critical functions operating in the\nevent of disruptions, both large and small. This broader perspective on contingency planning is\nbased on the distribution of computer support throughout an organization.\nThis chapter presents the contingency planning process in six steps:87\n1.\nIdentifying the mission- or business-critical functions. \n2.\nIdentifying the resources that support the critical functions. \n3.\nAnticipating potential contingencies or disasters. \n4.\nSelecting contingency planning strategies. \n" }, { "page_number": 134, "text": "III. Operational Controls\n However, since this is a computer security handbook, the descriptions here focus on the computer-related\n88\nresources. The logistics of coordinating contingency planning for computer-related and other resources is an\nimportant consideration.\n122\nThis chapter refers to an organization as having\ncritical mission or business functions. In\ngovernment organizations, the focus is normally on\nperforming a mission, such as providing citizen\nbenefits. In private organizations, the focus is\nnormally on conducting a business, such as\nmanufacturing widgets.\nIn many cases, the longer an organization is\nwithout a resource, the more critical the situation\nbecomes. For example, the longer a garbage\ncollection strike lasts, the more critical the\nsituation becomes.\n5.\nImplementing the contingency strategies. \n6.\nTesting and revising the strategy. \n11.1\nStep 1: Identifying the Mission- or Business-Critical Functions\nProtecting the continuity of an organization's\nmission or business is very difficult if it is not\nclearly identified. Managers need to\nunderstand the organization from a point of\nview that usually extends beyond the area they\ncontrol. The definition of an organization's\ncritical mission or business functions is often\ncalled a business plan.\nSince the development of a business plan will\nbe used to support contingency planning, it is necessary not only to identify critical missions and\nbusinesses, but also to set priorities for them. A fully redundant capability for each function is\nprohibitively expensive for most organizations. In the event of a disaster, certain functions will\nnot be performed. If appropriate priorities have been set (and approved by senior management), it\ncould mean the difference in the organization's ability to survive a disaster. \n11.2\nStep 2: Identifying the Resources That Support Critical\nFunctions\nAfter identifying critical missions and business\nfunctions, it is necessary to identify the\nsupporting resources, the time frames in\nwhich each resource is used (e.g., is the\nresource needed constantly or only at the end\nof the month?), and the effect on the mission\nor business of the unavailability of the\nresource. In identifying resources, a\ntraditional problem has been that different managers oversee different resources. 
They may not\nrealize how resources interact to support the organization's mission or business. Many of these\nresources are not computer resources. Contingency planning should address all the resources\nneeded to perform a function, regardless whether they directly relate to a computer. \n88\n" }, { "page_number": 135, "text": "11. Preparing for Contingencies and Disasters\n123\nResources That Support Critical Functions \nHuman Resources\nProcessing Capability\nComputer-Based Services \nData and Applications\nPhysical Infrastructure\nDocuments and Papers\nContingency Planning Teams\nTo understand what resources are needed from\neach of the six resource categories and to\nunderstand how the resources support critical\nfunctions, it is often necessary to establish a\ncontingency planning team. A typical team\ncontains representatives from various\norganizational elements, and is often headed by a\ncontingency planning coordinator. It has\nrepresentatives from the following three groups:\n 1.\nbusiness-oriented groups , such as\nrepresentatives from functional areas;\n 2.\nfacilities management; and\n 3.\ntechnology management. \nVarious other groups are called on as needed\nincluding financial management, personnel,\ntraining, safety, computer security, physical\nsecurity, and public affairs.\nThe analysis of needed resources should be conducted by those who understand how the function\nis performed and the dependencies of various resources on other resources and other critical\nrelationships. This will allow an organization to assign priorities to resources since not all\nelements of all resources are crucial to the critical functions. \n11.2.1 Human Resources\nPeople are perhaps an organization's most\nobvious resource. Some functions require the\neffort of specific individuals, some require\nspecialized expertise, and some only require\nindividuals who can be trained to perform a\nspecific task. Within the information\ntechnology field, human resources include\nboth operators (such as technicians or system\nprogrammers) and users (such as data entry clerks or information analysts).\n11.2.2 Processing Capability\nTraditionally contingency planning has\nfocused on processing power (i.e., if the data\ncenter is down, how can applications\ndependent on it continue to be processed?). \nAlthough the need for data center backup\nremains vital, today's other processing\nalternatives are also important. Local area\nnetworks (LANs), minicomputers,\nworkstations, and personal computers in all\nforms of centralized and distributed\nprocessing may be performing critical tasks. \n11.2.3 Automated Applications and Data \nComputer systems run applications that\nprocess data. Without current electronic\nversions of both applications and data,\ncomputerized processing may not be possible. \nIf the processing is being performed on\nalternate hardware, the applications must be\ncompatible with the alternate hardware,\noperating systems and other software\n(including version and configuration), and\n" }, { "page_number": 136, "text": "III. Operational Controls\n124\nnumerous other technical factors. Because of the complexity, it is normally necessary to\nperiodically verify compatibility. (See Step 6, Testing and Revising.)\n11.2.4 Computer-Based Services\nAn organization uses many different kinds of computer-based services to perform its functions. \nThe two most important are normally communications services and information services. 
\nCommunications can be further categorized as data and voice communications; however, in many\norganizations these are managed by the same service. Information services include any source of\ninformation outside of the organization. Many of these sources are becoming automated,\nincluding on-line government and private databases, news services, and bulletin boards. \n11.2.5 Physical Infrastructure \nFor people to work effectively, they need a safe working environment and appropriate equipment\nand utilities. This can include office space, heating, cooling, venting, power, water, sewage, other\nutilities, desks, telephones, fax machines, personal computers, terminals, courier services, file\ncabinets, and many other items. In addition, computers also need space and utilities, such as\nelectricity. Electronic and paper media used to store applications and data also have physical\nrequirements.\n11.2.6 Documents and Papers\nMany functions rely on vital records and various documents, papers, or forms. These records\ncould be important because of a legal need (such as being able to produce a signed copy of a loan)\nor because they are the only record of the information. Records can be maintained on paper,\nmicrofiche, microfilm, magnetic media, or optical disk. \n11.3\nStep 3: Anticipating Potential Contingencies or Disasters\nAlthough it is impossible to think of all the things that can go wrong, the next step is to identify a\nlikely range of problems. The development of scenarios will help an organization develop a plan\nto address the wide range of things that can go wrong. \nScenarios should include small and large contingencies. While some general classes of\ncontingency scenarios are obvious, imagination and creativity, as well as research, can point to\nother possible, but less obvious, contingencies. The contingency scenarios should address each of\nthe resources described above. The following are examples of some of the types of questions that\ncontingency scenarios may address:\n" }, { "page_number": 137, "text": "11. Preparing for Contingencies and Disasters\n Some organizations divide a contingency strategy into emergency response, backup operations, and\n89\nrecovery. The different terminology can be confusing (especially the use of conflicting definitions of recovery),\nalthough the basic functions performed are the same.\n125\nExamples of Some Less Obvious Contingencies\n1. A computer center in the basement of a\nbuilding had a minor problem with rats. \nExterminators killed the rats, but the bodies were\nnot retrieved because they were hidden under the\nraised flooring and in the pipe conduits. \nEmployees could only enter the data center with\ngas masks because of the decomposing rats. \n2. After the World Trade Center explosion when\npeople reentered the building, they turned on their\ncomputer systems to check for problems. Dust and\nsmoke damaged many systems when they were\nturned on. If the systems had been cleaned first,\nthere would not have been significant damage.\nExamples of Some Less Obvious Contingencies\n1. A computer center in the basement of a\nbuilding had a minor problem with rats. \nExterminators killed the rats, but the bodies were\nnot retrieved because they were hidden under the\nraised flooring and in the pipe conduits. \nEmployees could only enter the data center with\ngas masks because of the decomposing rats. \n2. After the World Trade Center explosion when\npeople reentered the building, they turned on their\ncomputer systems to check for problems. 
Dust and smoke damaged many systems when they were turned on. If the systems had been cleaned first, there would not have been significant damage.

The contingency scenarios should address each of the resources described above. The following are examples of some of the types of questions that contingency scenarios may address:

Human Resources: Can people get to work? Are key personnel willing to cross a picket line? Are there critical skills and knowledge possessed by one person? Can people easily get to an alternative site?

Processing Capability: Are the computers harmed? What happens if some of the computers are inoperable, but not all?

Automated Applications and Data: Has data integrity been affected? Is an application sabotaged? Can an application run on a different processing platform?

Computer-Based Services: Can the computers communicate? To where? Can people communicate? Are information services down? For how long?

Infrastructure: Do people have a place to sit? Do they have equipment to do their jobs? Can they occupy the building?

Documents/Paper: Can needed records be found? Are they readable?

11.4 Step 4: Selecting Contingency Planning Strategies

The next step is to plan how to recover needed resources. In evaluating alternatives, it is necessary to consider what controls are in place to prevent and minimize contingencies. Since no set of controls can cost-effectively prevent all contingencies, it is necessary to coordinate prevention and recovery efforts.

A contingency planning strategy normally consists of three parts: emergency response, recovery, and resumption. (Some organizations instead divide a contingency strategy into emergency response, backup operations, and recovery. The differing terminology can be confusing, especially the conflicting definitions of recovery, although the basic functions performed are the same.) Emergency response encompasses the initial actions taken to protect lives and limit damage. Recovery refers to the steps that are taken to continue support for critical functions. Resumption is the return to normal operations. The relationship between recovery and resumption is important: the longer it takes to resume normal operations, the longer the organization will have to operate in the recovery mode.

The selection of a strategy needs to be based on practical considerations, including feasibility and cost. The different categories of resources should each be considered. Risk assessment can be used to help estimate the cost of options to decide on an optimal strategy. For example, is it more expensive to purchase and maintain a generator or to move processing to an alternate site, considering the likelihood of losing electrical power for various lengths of time? Are the consequences of a loss of computer-related resources sufficiently high to warrant the cost of various recovery strategies? The risk assessment should focus on areas where it is not clear which strategy is the best.
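The generator-versus-alternate-site question above is, at bottom, an expected-cost comparison. The sketch below illustrates the arithmetic; every figure in it (outage probabilities, downtime cost, strategy prices, and switchover times) is a hypothetical placeholder, not a recommendation, and an organization would substitute estimates from its own risk assessment.

    # Illustrative expected-cost comparison for power-loss strategies.
    # All figures are hypothetical; substitute values from your own risk assessment.

    # Estimated annual probability of outages of various lengths (hours).
    outage_profile = [
        {"hours": 2,  "annual_probability": 0.50},
        {"hours": 24, "annual_probability": 0.10},
        {"hours": 72, "annual_probability": 0.02},
    ]

    COST_OF_DOWNTIME_PER_HOUR = 5_000  # lost productivity, missed deadlines, etc.

    def expected_annual_loss(hours_covered):
        """Expected downtime cost per year for a strategy that rides through
        outages up to `hours_covered` hours (longer outages still cause loss)."""
        loss = 0.0
        for outage in outage_profile:
            uncovered = max(0, outage["hours"] - hours_covered)
            loss += outage["annual_probability"] * uncovered * COST_OF_DOWNTIME_PER_HOUR
        return loss

    strategies = {
        # name: (annual cost of the control, outage hours it can ride through)
        "do nothing":              (0,      0),
        "generator":               (20_000, 72),  # amortized purchase plus maintenance
        "alternate-site contract": (35_000, 24),  # assumes 24 hours to switch over
    }

    for name, (annual_cost, hours_covered) in strategies.items():
        total = annual_cost + expected_annual_loss(hours_covered)
        print(f"{name:>24}: expected annual cost ${total:,.0f}")

With these particular (made-up) numbers, the generator is the cheaper strategy; the point of the exercise is that the answer depends entirely on the organization's own outage profile and downtime costs.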
In developing contingency planning strategies, there are many factors to consider in addressing each of the resources that support critical functions. Some examples are presented in the sidebar.

Example 1: If the system administrator for a LAN has to be out of the office for a long time (due to illness or an accident), arrangements are made for the system administrator of another LAN to perform the duties. Anticipating this, the absent administrator should have taken steps beforehand to keep documentation current. This strategy is inexpensive, but service will probably be significantly reduced on both LANs, which may prompt the manager of the loaned administrator to partially renege on the agreement.

Example 2: An organization depends on an on-line information service provided by a commercial vendor. The organization is no longer able to obtain the information manually (e.g., from a reference book) within acceptable time limits, and there are no other comparable services. In this case, the organization relies on the contingency plan of the service provider. The organization pays a premium to obtain priority service in case the service provider has to operate at reduced capacity.

Example 3: A large mainframe data center has a contract with a hot site vendor, has a contract with the telecommunications carrier to reroute communications to the hot site, has plans to move people, and stores up-to-date copies of data, applications, and needed paper records off-site. The contingency plan is expensive, but management has decided that the expense is fully justified.

Example 4: An organization distributes its processing between two major sites, each of which includes small to medium processors (personal computers and minicomputers). If one site is lost, the other can carry the critical load until more equipment is purchased. Routing of data and voice communications can be performed transparently to redirect traffic. Backup copies are stored at the other site. This plan requires tight control over the architectures used and the types of applications that are developed to ensure compatibility. In addition, personnel at both sites must be cross-trained to perform all functions.

11.4.1 Human Resources

To ensure an organization has access to workers with the right skills and knowledge, training and documentation of knowledge are needed. During a major contingency, people will be under significant stress and may panic. If the contingency is a regional disaster, their first concerns will probably be their family and property. In addition, many people will be either unwilling or unable to come to work. Additional hiring or temporary services can be used, although the use of additional personnel may introduce security vulnerabilities.

The need for computer security does not go away when an organization is processing in a contingency mode. In some cases, the need may increase due to sharing processing facilities, concentrating resources in fewer sites, or using additional contractors and consultants. Security should be an important consideration when selecting contingency strategies.

Contingency planning, especially for emergency response, normally places the highest emphasis on the protection of human life.
11.4.2 Processing Capability

Strategies for processing capability are normally grouped into five categories: hot site, cold site, redundancy, reciprocal agreements, and hybrids. These terms originated with recovery strategies for data centers but can be applied to other platforms.

1. Hot site: a building already equipped with processing capability and other services.

2. Cold site: a building for housing processors that can be easily adapted for use.

3. Redundant site: a site equipped and configured exactly like the primary site. (Some organizations plan on having reduced processing capability after a disaster and use partial redundancy. The stocking of spare personal computers or LAN servers also provides some redundancy.)

4. Reciprocal agreement: an agreement that allows two organizations to back each other up. (While this approach often sounds desirable, contingency planning experts note that this alternative has the greatest chance of failure due to problems keeping agreements and plans up-to-date as systems and personnel change.)

5. Hybrid: any combination of the above, such as using a hot site as a backup in case a redundant or reciprocal agreement site is damaged by a separate contingency.

Recovery may include several stages, perhaps marked by increasing availability of processing capability. Resumption planning may include contracts or the ability to place contracts to replace equipment.
11.4.3 Automated Applications and Data

Normally, the primary contingency strategy for applications and data is regular backup and secure offsite storage. Important decisions to be addressed include how often the backup is performed, how often it is stored off-site, and how it is transported (to storage, to an alternate processing site, or to support the resumption of normal operations).
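The "how often" decisions above amount to a small policy table that can be checked mechanically. The sketch below shows one way such a policy might be recorded and audited; the resource names, intervals, and dates are hypothetical examples, not recommended values.

    from datetime import datetime, timedelta

    # Hypothetical backup policy: how often each resource is backed up,
    # and how often copies are rotated to secure off-site storage.
    BACKUP_POLICY = {
        "payroll application": {"backup_every": timedelta(days=7),
                                "offsite_every": timedelta(days=30)},
        "payroll data":        {"backup_every": timedelta(days=1),
                                "offsite_every": timedelta(days=7)},
    }

    # Most recent completion times, as a backup system might report them.
    LAST_DONE = {
        "payroll application": {"backup": datetime(2024, 5, 1),
                                "offsite": datetime(2024, 4, 3)},
        "payroll data":        {"backup": datetime(2024, 5, 6),
                                "offsite": datetime(2024, 5, 1)},
    }

    def check_policy(now):
        """Flag any resource whose backup or off-site rotation is overdue."""
        for resource, policy in BACKUP_POLICY.items():
            done = LAST_DONE[resource]
            if now - done["backup"] > policy["backup_every"]:
                print(f"OVERDUE backup:   {resource}")
            if now - done["offsite"] > policy["offsite_every"]:
                print(f"OVERDUE off-site: {resource}")

    check_policy(now=datetime(2024, 5, 8))

In practice the completion times would come from the backup system's own logs; the point is that the policy, once written down, can be checked routinely rather than rediscovered during a contingency.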
11.4.4 Computer-Based Services

Service providers may offer contingency services. Voice communications carriers often can reroute calls (transparently to the user) to a new location. Data communications carriers can also reroute traffic. Hot sites are usually capable of receiving data and voice communications. If one service provider is down, it may be possible to use another. However, the type of communications carrier lost, either local or long distance, is important. Local voice service may be carried on cellular. Local data communications, especially for large volumes, are normally more difficult to replace. In addition, resuming normal operations may require another rerouting of communications services.

11.4.5 Physical Infrastructure

Hot sites and cold sites may also offer office space in addition to processing capability support. Other types of contractual arrangements can be made for office space, security services, furniture, and more in the event of a contingency. If the contingency plan calls for moving offsite, procedures need to be developed to ensure a smooth transition back to the primary operating facility or to a new facility. Protection of the physical infrastructure is normally an important part of the emergency response plan, such as use of fire extinguishers or protecting equipment from water damage.

11.4.6 Documents and Papers

The primary contingency strategy is usually backup onto magnetic, optical, microfiche, paper, or other medium and offsite storage. Paper documents are generally harder to back up than electronic ones. A supply of forms and other needed papers can be stored offsite.

11.5 Step 5: Implementing the Contingency Strategies

Once the contingency planning strategies have been selected, it is necessary to make appropriate preparations, document the strategies, and train employees. Many of these tasks are ongoing.

11.5.1 Implementation

Much preparation is needed to implement the strategies for protecting critical functions and their supporting resources. For example, one common preparation is to establish procedures for backing up files and applications; backing up data files and applications is a critical part of virtually every contingency plan, and backups are used, for example, to restore files after a personal computer virus corrupts them or after a hurricane destroys a data processing center. Another is to establish contracts and agreements, if the contingency strategy calls for them. Existing service contracts may need to be renegotiated to add contingency services. Another preparation may be to purchase equipment, especially to support a redundant capability.

It is important to keep preparations, including documentation, up-to-date. Computer systems change rapidly, and so should backup services and redundant equipment. Contracts and agreements may also need to reflect the changes. If additional equipment is needed, it must be maintained and periodically replaced when it is no longer dependable or no longer fits the organization's architecture.

Preparation should also include formally designating people who are responsible for various tasks in the event of a contingency. These people are often referred to as the contingency response team. This team is often composed of people who were a part of the contingency planning team.

There are many important implementation issues for an organization. Two of the most important are: (1) how many plans should be developed? and (2) who prepares each plan? Both of these questions revolve around the organization's overall strategy for contingency planning. The answers should be documented in organization policy and procedures.

How Many Plans?

Some organizations have just one plan for the entire organization, and others have a plan for every distinct computer system, application, or other resource. Other approaches recommend a plan for each business or mission function, with separate plans, as needed, for critical resources. The answer, therefore, depends upon the unique circumstances of each organization. But it is critical to coordinate between resource managers and the functional managers who are responsible for the mission or business.

Relationship Between Contingency Plans and Computer Security Plans: For small or less complex systems, the contingency plan may be a part of the computer security plan. For larger or more complex systems, the computer security plan could contain a brief synopsis of the contingency plan, which would be a separate document.

Who Prepares the Plan?

If an organization decides on a centralized approach to contingency planning, it may be best to name a contingency planning coordinator. The coordinator prepares the plans in cooperation with various functional and resource managers. Some organizations place responsibility directly with the functional and resource managers.
11.5.2 Documenting

The contingency plan needs to be written, kept up-to-date as the system and other factors change, and stored in a safe place. A written plan is critical during a contingency, especially if the person who developed the plan is unavailable. It should clearly state, in simple language, the sequence of tasks to be performed in the event of a contingency, so that someone with minimal knowledge could immediately begin to execute the plan. It is generally helpful to store up-to-date copies of the contingency plan in several locations, including any off-site locations, such as alternate processing sites or backup data storage facilities. Contingency plan maintenance can be incorporated into procedures for change management so that upgrades to hardware and software are reflected in the plan.

11.5.3 Training

All personnel should be trained in their contingency-related duties. New personnel should be trained as they join the organization, refresher training may be needed, and personnel will need to practice their skills.

Training is particularly important for effective employee response during emergencies. There is no time to check a manual to determine correct procedures if there is a fire. Depending on the nature of the emergency, there may or may not be time to protect equipment and other assets. Practice is necessary in order to react correctly, especially when human safety is involved.

11.6 Step 6: Testing and Revising

A contingency plan should be tested periodically because there will undoubtedly be flaws in the plan and in its implementation. The plan will become dated as time passes and as the resources used to support critical functions change. Responsibility for keeping the contingency plan current should be specifically assigned. The extent and frequency of testing will vary between organizations and among systems. There are several types of testing, including reviews, analyses, and simulations of disasters.

A review can be a simple test to check the accuracy of contingency plan documentation. For instance, a reviewer could check if individuals listed are still in the organization and still have the responsibilities that caused them to be included in the plan. This test can check home and work telephone numbers, organizational codes, and building and room numbers. The review can also determine if files can be restored from backup tapes or if employees know emergency procedures.
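Part of such a review can be automated. The sketch below cross-checks the people named in a contingency plan against a current personnel directory; the data structures, names, and phone extensions are hypothetical examples of what a plan roster and directory extract might look like.

    # Minimal sketch of the documentation "review" test described above:
    # verify that people named in the plan are still present and reachable.

    plan_contacts = [
        {"name": "J. Smith", "role": "LAN recovery",     "phone": "x2201"},
        {"name": "R. Jones", "role": "off-site storage", "phone": "x3304"},
    ]

    directory = {  # current personnel directory: name -> phone extension
        "J. Smith": "x2201",
        "P. Chen":  "x4410",
    }

    for contact in plan_contacts:
        current = directory.get(contact["name"])
        if current is None:
            print(f"STALE: {contact['name']} ({contact['role']}) is no longer in the directory")
        elif current != contact["phone"]:
            print(f"STALE: {contact['name']} phone is now {current}; plan says {contact['phone']}")
        else:
            print(f"OK:    {contact['name']} ({contact['role']})")

A check like this finds only clerical staleness; whether files can actually be restored from backup tapes still has to be tested by restoring them.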
The term "test" often implies a grade assigned for a specific level of performance, or simply pass or fail. In the case of contingency planning, however, a test should be used to improve the plan. If organizations do not use this approach, flaws in the plan may remain hidden and uncorrected.

An analysis may be performed on the entire plan or on portions of it, such as emergency response procedures. It is beneficial if the analysis is performed by someone who did not help develop the contingency plan but has a good working knowledge of the critical function and supporting resources. The analyst(s) may mentally follow the strategies in the contingency plan, looking for flaws in the logic or process used by the plan's developers. The analyst may also interview functional managers, resource managers, and their staff to uncover missing or unworkable pieces of the plan.

Organizations may also arrange disaster simulations. These tests provide valuable information about flaws in the contingency plan and provide practice for a real emergency. While they can be expensive, these tests can also provide critical information that can be used to ensure the continuity of important functions. In general, the more critical the functions and the resources addressed in the contingency plan, the more cost-beneficial it is to perform a disaster simulation.

11.7 Interdependencies

Since all controls help to prevent contingencies, there is an interdependency with all of the controls in the handbook.

Risk Management provides a tool for analyzing the security costs and benefits of various contingency planning options. In addition, a risk management effort can be used to help identify critical resources needed to support the organization and the likely threats to those resources. It is not necessary, however, to perform a risk assessment prior to contingency planning, since the identification of critical resources can be performed during the contingency planning process itself.

Physical and Environmental Controls help prevent contingencies. Although many of the other controls, such as logical access controls, also prevent contingencies, the major threats that a contingency plan addresses are physical and environmental threats, such as fires, loss of power, plumbing breaks, or natural disasters.

Incident Handling can be viewed as a subset of contingency planning. It is the emergency response capability for various technical threats. Incident handling can also help an organization prevent future incidents.

Support and Operations in most organizations includes the periodic backing up of files. It also includes the prevention of and recovery from more common contingencies, such as a disk failure or corrupted data files.

Policy is needed to create and document the organization's approach to contingency planning. The policy should explicitly assign responsibilities.

11.8 Cost Considerations

The cost of developing and implementing contingency planning strategies can be significant, especially if the strategy includes contracts for backup services or duplicate equipment. There are too many options to discuss cost considerations for each type.

One contingency cost that is often overlooked is the cost of testing a plan. Testing provides many benefits and should be performed, although some of the less expensive methods (such as a review) may be sufficient for less critical resources.

References

Alexander, M., ed. "Guarding Against Computer Calamity." Infosecurity News 4(6), 1993, pp. 26-37.

Coleman, R. "Six Steps to Disaster Recovery." Security Management 37(2), 1993, pp. 61-62.

Dykman, C., and C. Davis, eds. Control Objectives - Controls in an Information Systems Environment: Objectives, Guidelines, and Audit Procedures, fourth edition. Carol Stream, IL: The EDP Auditors Foundation, Inc., 1992 (especially Chapter 3.5).

Fites, P., and M. Kratz. Information Systems Security: A Practitioner's Reference. New York, NY: Van Nostrand Reinhold, 1993 (esp. Chapter 4, pp. 95-112).

FitzGerald, J. "Risk Ranking Contingency Plan Alternatives." Information Executive 3(4), 1990, pp. 61-63.

Helsing, C. "Business Impact Assessment." ISSA Access 5(3), 1992, pp. 10-12.

Isaac, I. Guide on Selecting ADP Backup Process Alternatives. Special Publication 500-124. Gaithersburg, MD: National Bureau of Standards, November 1985.

Kabak, I., and T. Beam. "On the Frequency and Scope of Backups." Information Executive 4(2), 1991, pp. 58-62.

Kay, R. "What's Hot at Hotsites?" Infosecurity News 4(5), 1993, pp. 48-52.

Lainhart, J., and M. Donahue. Computerized Information Systems (CIS) Audit Manual: A Guideline to CIS Auditing in Governmental Organizations. Carol Stream, IL: The EDP Auditors Foundation, Inc., 1992.

National Bureau of Standards. Guidelines for ADP Contingency Planning. Federal Information Processing Standard 87. 1981.

Rhode, R., and J. Haskett. "Disaster Recovery Planning for Academic Computing Centers." Communications of the ACM 33(6), 1990, pp. 652-657.
Chapter 12
COMPUTER SECURITY INCIDENT HANDLING

Computer systems are subject to a wide range of mishaps, from corrupted data files, to viruses, to natural disasters. Some of these mishaps can be fixed through standard operating procedures. For example, frequently occurring events (e.g., a mistakenly deleted file) can usually be readily repaired (e.g., by restoration from the backup file). More severe mishaps, such as outages caused by natural disasters, are normally addressed in an organization's contingency plan. Other damaging events result from deliberate malicious technical activity (e.g., the creation of viruses or system hacking).

Malicious code includes viruses as well as Trojan horses and worms. A virus is a code segment that replicates by attaching copies of itself to existing executables. A Trojan horse is a program that performs a desired task but also includes unexpected functions. A worm is a self-replicating program.

A computer security incident can result from a computer virus, other malicious code, or a system intruder, either an insider or an outsider. The term is used in this chapter to refer broadly to incidents resulting from deliberate malicious technical activity (organizations may wish to expand this to include, for example, incidents of theft). It can more generally refer to incidents that, without technically expert response, could result in severe damage; indeed, damage may result despite the best efforts to the contrary. This definition of a computer security incident is somewhat flexible and may vary by organization and computing environment.

Although the threats that hackers and malicious code pose to systems and networks are well known, the occurrence of such harmful events remains unpredictable. Security incidents on larger networks (e.g., the Internet), such as break-ins and service disruptions, have harmed various organizations' computing capabilities. When initially confronted with such incidents, most organizations respond in an ad hoc manner. However, the recurrence of similar incidents often makes it cost-beneficial to develop a standing capability for quick discovery of and response to such events. This is especially true since incidents can often "spread" when left unchecked, thus increasing damage and seriously harming an organization.

Incident handling is closely related to contingency planning as well as support and operations. An incident handling capability may be viewed as a component of contingency planning, because it provides the ability to react quickly and efficiently to disruptions in normal processing. Broadly speaking, contingency planning addresses events with the potential to interrupt system operations; incident handling can be considered that portion of contingency planning that responds to malicious technical threats.

This chapter describes how organizations can address computer security incidents (in the context of their larger computer security program) by developing a computer security incident handling capability. (See NIST Special Publication 800-3, Establishing an Incident Response Capability, November 1991.) Many organizations handle incidents as part of their user support capability (discussed in Chapter 14) or as a part of general system support.
12.1 Benefits of an Incident Handling Capability

The primary benefits of an incident handling capability are containing and repairing damage from incidents, and preventing future damage. In addition, there are less obvious side benefits related to establishing an incident handling capability.

12.1.1 Containing and Repairing Damage From Incidents

When left unchecked, malicious software can significantly harm an organization's computing, depending on the technology and its connectivity. An incident handling capability provides a way for users to report incidents and for the appropriate response and assistance to be provided to aid in recovery. (A good incident handling capability is closely linked to an organization's training and awareness program: it will have educated users about such incidents and what to do when they occur, which can increase the likelihood that incidents will be reported early, thus helping to minimize damage.) Technical capabilities (e.g., trained personnel and virus identification software) are prepositioned, ready to be used as necessary. Moreover, the organization will have already made important contacts with other supportive sources (e.g., legal, technical, and managerial) to aid in containment and recovery efforts.

Without an incident handling capability, certain responses, although well intentioned, can actually make matters worse. In some cases, individuals have unknowingly infected anti-virus software with viruses and then spread them to other systems. When viruses spread to local area networks (LANs), most or all of the connected computers can be infected within hours. Moreover, uncoordinated efforts to rid LANs of viruses can prevent their eradication.

Some organizations suffer repeated outbreaks of viruses because the viruses are never completely eradicated. For example, suppose two LANs, Personnel and Budget, are connected, and a virus has spread within each. The administrators of each LAN detect the virus and decide to eliminate it on their LAN. The Personnel LAN administrator first eradicates the virus, but since the Budget LAN is not yet virus-free, the Personnel LAN is reinfected. Somewhat later, the Budget LAN administrator eradicates the virus. However, the virus reinfects the Budget LAN from the Personnel LAN. Both administrators may think all is well, but both are reinfected. An incident handling capability allows organizations to address recovery and containment of such incidents in a skilled, coordinated manner.

Many organizations use large LANs internally and also connect to public networks, such as the Internet. By doing so, organizations increase their exposure to threats from intruder activity, especially if the organization has a high profile (e.g., perhaps it is involved in a controversial program). An incident handling capability can provide enormous benefits by responding quickly to suspicious activity and coordinating incident handling with responsible offices and individuals, as necessary. Intruder activity, whether by hackers or malicious code, can often affect many systems located at many different network sites; thus, handling the incidents can be logistically complex and can require information from outside the organization. By planning ahead, such contacts can be preestablished and the speed of response improved, thereby containing and minimizing damage. Other organizations may have already dealt with similar situations and may have very useful guidance to offer in speeding recovery and minimizing damage.

12.1.2 Preventing Future Damage

An incident handling capability also assists an organization in preventing (or at least minimizing) damage from future incidents. Incidents can be studied internally to gain a better understanding of the organization's threats and vulnerabilities so more effective safeguards can be implemented. Additionally, through outside contacts (established by the incident handling capability), early warnings of threats and vulnerabilities can be provided. Mechanisms will already be in place to warn users of these risks.

The incident handling capability allows an organization to learn from the incidents that it has experienced. Data about past incidents (and the corrective measures taken) can be collected. The data can be analyzed for patterns: for example, which viruses are most prevalent, which corrective actions are most successful, and which systems and information are being targeted by hackers. Vulnerabilities can also be identified in this process, for example, whether damage is occurring to systems when a new software package or patch is used. Knowledge about the types of threats that are occurring and the presence of vulnerabilities can aid in identifying security solutions. This information will also prove useful in creating a more effective training and awareness program, and thus help reduce the potential for losses. The incident handling capability assists the training and awareness program by providing information to users as to (1) measures that can help avoid incidents (e.g., virus scanning) and (2) what should be done in case an incident does occur.

Of course, the organization's attempts to prevent future losses do not occur in a vacuum. With a sound incident handling capability, contacts will have been established with counterparts outside the organization. This allows for early warning of threats and vulnerabilities that the organization may not yet have experienced. Early preventative measures (generally more cost-effective than repairing damage) can then be taken to reduce future losses. Data is also shared outside the organization to allow others to learn from the organization's experiences. The sharing of incident data among organizations can help at both the national and the international levels to prevent and respond to breaches of security in a timely, coordinated manner.

12.1.3 Side Benefits

Finally, establishing an incident handling capability helps an organization in perhaps unanticipated ways. Three are discussed here.

Uses of Threat and Vulnerability Data. Incident handling can greatly enhance the risk assessment process. An incident handling capability will allow organizations to collect threat data that may be useful in their risk assessment and safeguard selection processes (e.g., in designing new systems). Incidents can be logged and analyzed to determine whether there is a recurring problem (or if other patterns are present, as are sometimes seen in hacker attacks), which would not be noticed if each incident were only viewed in isolation. Statistics on the numbers and types of incidents in the organization can be used in the risk assessment process as an indication of vulnerabilities and threats. (It is important, however, not to assume that because only n incidents were reported, n is the total number of incidents; it is not likely that all incidents will be reported.)
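The pattern analysis described above requires nothing more elaborate than a structured incident log. The sketch below tallies a handful of hypothetical log entries (the dates, incident types, and system names are invented for illustration) to show how recurring problems surface once incidents are viewed together.

    from collections import Counter

    # Hypothetical incident log entries: (date, incident type, system affected).
    incident_log = [
        ("1994-01-12", "virus",     "payroll LAN"),
        ("1994-02-03", "virus",     "payroll LAN"),
        ("1994-02-17", "intrusion", "gateway host"),
        ("1994-03-08", "virus",     "budget LAN"),
        ("1994-03-21", "virus",     "payroll LAN"),
    ]

    by_type   = Counter(entry[1] for entry in incident_log)
    by_system = Counter(entry[2] for entry in incident_log)

    print("Incidents by type:  ", by_type.most_common())
    print("Incidents by system:", by_system.most_common())
    # A recurring (type, system) pair, here viruses on the payroll LAN,
    # suggests an uncorrected vulnerability and feeds the risk assessment.

Each incident in isolation looks routine; the tally is what reveals that one LAN accounts for most of the virus outbreaks.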
Enhancing Internal Communications and Organization Preparedness. Organizations often find that an incident handling capability enhances internal communications and the readiness of the organization to respond to any type of incident, not just computer security incidents. Internal communications will be improved; management will be better organized to receive communications; and contacts within public affairs, legal staff, law enforcement, and other groups will have been preestablished. The structure set up for reporting incidents can also be used for other purposes.

Enhancing the Training and Awareness Program. The organization's training process can also benefit from incident handling experiences. Based on incidents reported, training personnel will have a better understanding of users' knowledge of security issues. Trainers can use actual incidents to vividly illustrate the importance of computer security. Training that is based on current threats and controls recommended by incident handling staff provides users with information more specifically directed to their current needs, thereby reducing the risks to the organization from incidents.

The focus of a computer security incident handling capability may be external as well as internal. An incident that affects an organization may also affect its trading partners, contractors, or clients. In addition, an organization's computer security incident handling capability may be able to help other organizations and, therefore, help protect the community as a whole.

Managers need to know details about incidents, including who discovered them and how, so that they can prevent similar incidents in the future. However, users will not be forthcoming if they fear reprisal or that they will become scapegoats. Organizations may need to offer incentives to employees for reporting incidents and offer guarantees against reprisal or other adverse actions. It may also be useful to consider anonymous reporting.

12.2 Characteristics of a Successful Incident Handling Capability

A successful incident handling capability has several core characteristics:

an understanding of the constituency it will serve;
an educated constituency;
a means for centralized communications;
expertise in the requisite technologies; and
links to other groups to assist in incident handling (as needed).

12.2.1 Defining the Constituency to Be Served

The constituency includes computer users and program managers. Like any other customer-vendor relationship, the constituency will tend to take advantage of the capability if the services rendered are valuable.

The constituency is not always the entire organization. For example, an organization may use several types of computers and networks but may decide that its incident handling capability is cost-justified only for its personal computer users. In doing so, the organization may have determined that computer viruses pose a much larger risk than other malicious technical threats on other platforms.
Or, a large organization composed of several sites\nmay decide that current computer security efforts at some sites do not require an incident handling\ncapability, whereas other sites do (perhaps\nbecause of the criticality of processing).\n12.2.2 Educated Constituency \nUsers need to know about, accept, and trust\nthe incident handling capability or it will not\nbe used. Through training and awareness\nprograms, users can become knowledgeable\nabout the existence of the capability and how\nto recognize and report incidents. Users trust\n" }, { "page_number": 152, "text": "III. Operational Controls\n140\nin the value of the service will build with reliable performance.\n12.2.3 Centralized Reporting and Communications \nSuccessful incident handling requires that users be able to report incidents to the incident handling\nteam in a convenient, straightforward fashion; this is referred to as centralized reporting. A\nsuccessful incident handling capability depends on timely reporting. If it is difficult or time\nconsuming to report incidents, the incident handling capability may not be fully used. Usually,\nsome form of a hotline, backed up by pagers, works well.\nCentralized communications is very useful for accessing or distributing information relevant to\nthe incident handling effort. For example, if users are linked together via a network, the incident\nhandling capability can then use the network to send out timely announcements and other\ninformation. Users can take advantage of the network to retrieve security information stored on\nservers and communicate with the incident response team via e-mail.\n12.2.4 Technical Platform and Communications Expertise\nThe technical staff members who comprise the incident handling capability need specific\nknowledge, skills, and abilities. Desirable qualifications for technical staff members may include\nthe ability to: \nwork expertly with some or all of the constituency's core technology; \nwork in a group environment;\ncommunicate effectively with different types of users, who will range from system\nadministrators to unskilled users to management to law-enforcement officials;\nbe on-call 24 hours as needed; and\ntravel on short notice (of course, this depends upon the physical location of the\nconstituency to be served).\n12.2.5 Liaison With Other Organizations\nDue to increasing computer connectivity, intruder activity on networks can affect many\norganizations, sometimes including those in foreign countries. Therefore, an organization's\nincident handling team may need to work with other teams or security groups to effectively handle\nincidents that range beyond its constituency. Additionally, the team may need to pool its\nknowledge with other teams at various times. Thus, it is vital to the success of an incident\nhandling capability that it establish ties and contacts with other related counterparts and\n" }, { "page_number": 153, "text": "12. Incident Handling\n141\nThe Forum of \nIncident Response and Security Teams\nThe 1988 Internet worm incident highlighted the\nneed for better methods for responding to and\nsharing information about incidents. It was also\nclear that any single team or \"hot line\" would\nsimply be overwhelmed. Out of this was born the\nconcept of a coalition of response teams each\nwith its own constituency, but working together to\nshare information, provide alerts, and support each\nother in the response to incidents. 
The Forum of\nIncident Response and Security Teams (FIRST)\nincludes teams from government, industry,\ncomputer manufacturers, and academia. NIST\nserves as the secretariat of FIRST.\nsupporting organizations.\nEspecially important to incident handling are\ncontacts with investigative agencies, such as\nfederal (e.g., the FBI), state, and local law\nenforcement. Laws that affect computer\ncrime vary among localities and states, and\nsome actions may be state (but not federal)\ncrimes. It is important for teams to be familiar\nwith current laws and to have established\ncontacts within law enforcement and\ninvestigative agencies.\nIncidents can also garner much media\nattention and can reflect quite negatively on\nan organization's image. An incident handling\ncapability may need to work closely with the\norganization's public affairs office, which is\ntrained in dealing with the news media. In\npresenting information to the press, it is important that (1) attackers are not given information\nthat would place the organization at greater risk and (2) potential legal evidence is properly\nprotected. \n12.3\nTechnical Support for Incident Handling\nIncident handling will be greatly enhanced by technical mechanisms that enable the dissemination\nof information quickly and conveniently. \n12.3.1 Communications for Centralized Reporting of Incidents\nThe technical ability to report incidents is of primary importance, since without knowledge of an\nincident, response is precluded. Fortunately, such technical mechanisms are already in place in\nmany organizations. \nFor rapid response to constituency problems, a simple telephone \"hotline\" is practical and\nconvenient. Some agencies may already have a number used for emergencies or for obtaining\nhelp with other problems; it may be practical (and cost-effective) to also use this number for\nincident handling. It may be necessary to provide 24-hour coverage for the hotline. This can be\ndone by staffing the answering center, by providing an answering service for nonoffice hours, or\nby using a combination of an answering machine and personal pagers.\n" }, { "page_number": 154, "text": "III. Operational Controls\n142\nOne way to establish a centralized reporting and\nincident response capability, while minimizing\nexpenditures, is to use an existing Help Desk. \nMany agencies already have central Help Desks for\nfielding calls about commonly used applications,\ntroubleshooting system problems, and providing\nhelp in detecting and eradicating computer viruses. \nBy expanding the capabilities of the Help Desk and\npublicizing its telephone number (or e-mail\naddress), an agency may be able to significantly\nimprove its ability to handle many different types\nof incidents at minimal cost.\nIf additional mechanisms for contacting the\nincident handling team can be provided, it may\nincrease access and thus benefit incident\nhandling efforts. A centralized e-mail address\nthat forwards mail to staff members would\npermit the constituency to conveniently\nexchange information with the team. \nProviding a fax number to users may also be\nhelpful. \n12.3.2 Rapid Communications Facilities\nSome form of rapid communications is\nessential for quickly communicating with the\nconstituency as well as with management officials and outside organizations. The team may need\nto send out security advisories or collect information quickly, thus some convenient form of\ncommunications, such as electronic mail, is generally highly desirable. 
With electronic mail, the team can easily direct information to various subgroups within the constituency, such as system managers or network managers, and broadcast general alerts to the entire constituency as needed. When connectivity already exists, e-mail has low overhead and is easy to use. (However, it is possible for the e-mail system itself to be attacked, as was the case with the 1988 Internet worm.)

Although there are substitutes for e-mail, they tend to increase response time. An electronic bulletin board system (BBS) can work well for distributing information, especially if it provides a convenient user interface that encourages its use. A BBS connected to a network is more convenient to access than one requiring a terminal and modem; however, the latter may be the only alternative for organizations without sufficient network connectivity. In addition, telephones, physical bulletin boards, and flyers can be used.

12.3.3 Secure Communications Facilities

Incidents can range from the trivial to those involving national security. Often when exchanging information about incidents, using encrypted communications may be advisable. This will help prevent the unintended distribution of incident-related information. Encryption technology is available for voice, fax, and e-mail communications.
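As one concrete (and deliberately modern) illustration of encrypting incident-related messages, the sketch below uses the Fernet construction from the third-party Python cryptography package, which long postdates this handbook and is shown only to make the idea tangible. It assumes the response team and the recipient already share a key distributed out of band; key management, not the encryption call itself, is the hard part in practice.

    # Illustration only: symmetric encryption of an incident advisory using
    # the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # distribute out of band; store securely
    cipher = Fernet(key)

    advisory = b"Incident 94-017: intrusion on gateway host; rotate passwords."
    ciphertext = cipher.encrypt(advisory)   # safe to send over ordinary e-mail
    print(ciphertext)

    # Receiving side: the same shared key recovers the message, and Fernet
    # also rejects any ciphertext that has been tampered with in transit.
    print(cipher.decrypt(ciphertext).decode())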
12.4 Interdependencies

An incident handling capability generally depends upon other safeguards presented in this handbook. The most obvious is the strong link to other components of the contingency plan. The following paragraphs detail the most important of these interdependencies.

Contingency Planning. As discussed in the introduction to this chapter, an incident handling capability can be viewed as the component of contingency planning that deals with responding to technical threats, such as viruses or hackers. Close coordination is necessary with other contingency planning efforts, particularly when planning for contingency processing in the event of a serious unavailability of system resources.

Support and Operations. Incident handling is also closely linked to support and operations, especially user support and backups. For example, for purposes of efficiency and cost savings, the incident handling capability is often co-operated with a user "help desk." Also, backups of system resources may need to be used when recovering from an incident.

Training and Awareness. The training and awareness program can benefit from lessons learned during incident handling. Incident handling staff will be able to help assess the level of user awareness about current threats and vulnerabilities. Staff members may be able to help train system administrators, system operators, and other users and systems personnel. Knowledge of security precautions (resulting from such training) helps reduce future incidents. It is also important that users be trained in what to report and how to report it.

Risk Management. The risk analysis process will benefit from statistics and logs showing the numbers and types of incidents that have occurred and the types of controls that are effective in preventing incidents. This information can be used to help select appropriate security controls and practices.

12.5 Cost Considerations

There are a number of start-up costs and funding issues to consider when planning an incident handling capability. Because the success of an incident handling capability relies so heavily on users' perceptions of its worth and whether they use it, it is very important that the capability be able to meet users' requirements. Two important funding issues are:

Personnel. An incident handling capability plan might call for at least one manager and one or more technical staff members (or their equivalent) to accomplish program objectives. Depending on the scope of the effort, however, full-time staff members may not be required. In some situations, some staff may be needed part-time or on an on-call basis. Staff may be performing incident handling duties as an adjunct responsibility to their normal assignments.

Education and Training. Incident handling staff will need to keep current with computer system and security developments. Budget allowances need to be made, therefore, for attending conferences, security seminars, and other continuing-education events. If an organization is located in more than one geographic area, funds will probably be needed for travel to other sites for handling incidents.

References

Brand, Russell L. Coping With the Threat of Computer Security Incidents: A Primer from Prevention Through Recovery. July 1989.

Fedeli, Alan. "Organizing a Corporate Anti-Virus Effort." Proceedings of the Third Annual Computer VIRUS Clinic, Nationwide Computer Corp. March 1990.

Holbrook, P., and J. Reynolds, eds. Site Security Handbook. RFC 1244, prepared for the Internet Engineering Task Force, 1991. FTP from csrc.nist.gov:/pub/secplcy/rfc1244.txt.

National Institute of Standards and Technology. "Establishing a Computer Security Incident Response Capability." Computer Systems Laboratory Bulletin. Gaithersburg, MD. February 1992.

Padgett, K. Establishing and Operating an Incident Response Team. Los Alamos, NM: Los Alamos National Laboratory, 1992.

Pethia, Rich, and Kenneth van Wyk. Computer Emergency Response - An International Problem. 1990.

Quarterman, John. The Matrix - Computer Networks and Conferencing Systems Worldwide. Digital Press, 1990.

Scherlis, William, S. Squires, and R. Pethia. Computer Emergency Response. 1989.

Schultz, E., D. Brown, and T. Longstaff. Responding to Computer Security Incidents: Guidelines for Incident Handling. University of California Technical Report UCRL-104689, 1990.

Proceedings of the Third Invitational Workshop on Computer Security Incident Response. August 1991.

Wack, John. Establishing an Incident Response Capability. Special Publication 800-3. Gaithersburg, MD: National Institute of Standards and Technology. November 1991.
Chapter 13
AWARENESS, TRAINING, AND EDUCATION

People, who are all fallible, are usually recognized as one of the weakest links in securing systems. The purpose of computer security awareness, training, and education is to enhance security by:

improving awareness of the need to protect system resources;
developing skills and knowledge so computer users can perform their jobs more securely; and
building in-depth knowledge, as needed, to design, implement, or operate security programs for organizations and systems.

Making computer system users aware of their security responsibilities and teaching them correct practices helps users change their behavior. (One often-cited goal of training is changing people's attitudes; this chapter views changing attitudes as just one step toward changing behavior.) It also supports individual accountability, which is one of the most important ways to improve computer security. Without knowing the necessary security measures (and how to use them), users cannot be truly accountable for their actions. The importance of this training is emphasized in the Computer Security Act, which requires training for those involved with the management, use, and operation of federal computer systems.

This chapter first discusses the two overriding benefits of awareness, training, and education, namely: (1) improving employee behavior and (2) increasing the ability to hold employees accountable for their actions. Next, awareness, training, and education are discussed separately, along with the techniques used for each. Finally, the chapter presents one approach for developing a computer security awareness and training program. (This chapter does not discuss the specific contents of training programs; see the references for details of suggested course contents.)

13.1 Behavior

People are a crucial factor in ensuring the security of computer systems and valuable information resources. Human actions account for a far greater degree of computer-related loss than all other sources combined. Of such losses, the actions of an organization's insiders normally cause far more harm than the actions of outsiders. (Chapter 4 discusses the major sources of computer-related loss.)

One of the keys to a successful computer security program is security awareness and training. If employees are not informed of applicable organizational policies and procedures, they cannot be expected to act effectively to secure computer resources.

The major causes of loss due to an organization's own employees are: errors and omissions, fraud, and actions by disgruntled employees. One principal purpose of security awareness, training, and education is to reduce errors and omissions. However, it can also reduce fraud and unauthorized activity by disgruntled employees by increasing employees' knowledge of their accountability and the penalties associated with such actions.

Management sets the example for behavior within an organization. If employees know that management does not care about security, no training class teaching the importance of security and imparting valuable skills can be truly effective. This "tone from the top" has myriad effects on an organization's security program.

13.2 Accountability

Both the dissemination and the enforcement of policy are critical issues that are implemented and strengthened through training programs. Employees cannot be expected to follow policies and procedures of which they are unaware.
In addition, enforcing penalties may be difficult if users can claim ignorance when caught doing something wrong.

Training employees may also be necessary to show that a standard of due care has been taken in protecting information. Simply issuing policy, with no follow-up to implement that policy, may not suffice.

Many organizations use acknowledgment statements to verify that employees have read and understand computer security requirements. (An example is provided in Chapter 10.)

13.3 Awareness

Awareness stimulates and motivates those being trained to care about security and reminds them of important security practices. Explaining what happens to an organization, its mission, customers, and employees if security fails motivates people to take security seriously.

Security awareness programs: (1) set the stage for training by changing organizational attitudes to realize the importance of security and the adverse consequences of its failure; and (2) remind users of the procedures to be followed.

Awareness can take on different forms for particular audiences. Appropriate awareness for management officials might stress management's pivotal role in establishing organizational attitudes toward security. Appropriate awareness for other groups, such as system programmers or information analysts, should address the need for security as it relates to their job. In today's systems environment, almost everyone in an organization may have access to system resources and therefore may have the potential to cause harm.

Figure 13.1 compares some of the differences in awareness, training, and education.

Figure 13.1: Comparative Framework

                     AWARENESS             TRAINING                 EDUCATION
Attribute:           "What"                "How"                    "Why"
Level:               Information           Knowledge                Insight
Objective:           Recognition           Skill                    Understanding
Teaching Method:     Media (videos,        Practical instruction    Theoretical instruction
                     newsletters,          (lecture, case study     (discussion seminar,
                     posters, etc.)        workshop, hands-on       background reading)
                                           practice)
Test Measure:        True/false,           Problem solving          Essay
                     multiple choice       (apply learning)         (interpret learning)
                     (identify learning)
Impact Timeframe:    Short-term            Intermediate             Long-term

Awareness is used to reinforce the fact that security supports the mission of the organization by protecting valuable resources. If employees view security as just bothersome rules and procedures, they are more likely to ignore them. In addition, they may not make needed suggestions about improving security nor recognize and report security threats and vulnerabilities. Awareness also is used to remind people of basic security practices, such as logging off a computer system or locking doors.

Techniques. A security awareness program can use many teaching methods, including video tapes, newsletters, posters, bulletin boards, flyers, demonstrations, briefings, short reminder notices at log-on, talks, or lectures. Awareness is often incorporated into basic security training and can use any method that can change employees' attitudes.
\nEffective security awareness programs need to\nbe designed with the recognition that people\ntend to practice a tuning out process (also\nknown as acclimation). For example, after a\nwhile, a security poster, no matter how well\ndesigned, will be ignored; it will, in effect,\nsimply blend into the environment. For this\nreason, awareness techniques should be\ncreative and frequently changed. \n13.4\nTraining\nThe purpose of training is to teach people the skills that will enable them to perform their jobs\nmore securely. This includes teaching people what they should do and how they should (or can)\ndo it. Training can address many levels, from basic security practices to more advanced or\nspecialized skills. It can be specific to one computer system or generic enough to address all\nsystems.\nTraining is most effective when targeted to a specific audience. This enables the training to focus\non security-related job skills and knowledge that people need performing their duties. Two types\nof audiences are general users and those who require specialized or advanced skills.\nGeneral Users. Most users need to understand good computer security practices, such as:\nprotecting the physical area and equipment (e.g., locking doors, caring for floppy\ndiskettes);\nprotecting passwords (if used) or other authentication data or tokens (e.g., never\ndivulge PINs); and \nreporting security violations or incidents (e.g., whom to call if a virus is\nsuspected).\nIn addition, general users should be taught the organization's policies for protecting information\nand computer systems and the roles and responsibilities of various organizational units with which\nthey may have to interact.\n" }, { "page_number": 161, "text": "13. Awareness, Training, and Education\n149\nOne group that has been targeted for specialized\ntraining is executives and functional managers. \nThe training for management personnel is\nspecialized (rather than advanced) because\nmanagers do not (as a general rule) need to\nunderstand the technical details of security. \nHowever, they do need to understand how to\norganize, direct, and evaluate security measures\nand programs. They also need to understand risk\nacceptance.\nIn teaching general users, care should be taken not to overburden them with unneeded details. \nThese people are the target of multiple training programs, such as those addressing safety, sexual\nharassment, and AIDS in the workplace. The training should be made useful by addressing\nsecurity issues that directly affect the users. The goal is to improve basic security practices, not\nto make everyone literate in all the jargon or philosophy of security.\nSpecialized or Advanced Training. Many groups need more advanced or more specialized\ntraining than just basic security practices. For example, managers may need to understand\nsecurity consequences and costs so they can factor security into their decisions, or system\nadministrators may need to know how to implement and use specific access control products.\nThere are many different ways to identify\nindividuals or groups who need specialized or\nadvanced training. One method is to look at\njob categories, such as executives, functional\nmanagers, or technology providers. Another\nmethod is to look at job functions, such as\nsystem design, system operation, or system\nuse. A third method is to look at the specific\ntechnology and products used, especially for\nadvanced training for user groups and training\nfor a new system. 
This is discussed further in section 13.6 of this chapter.

Techniques. A security training program normally includes training classes, either strictly devoted to security or as added special sections or modules within existing training classes. Training may be computer- or lecture-based (or both), and may include hands-on practice and case studies. Training, like awareness, also happens on the job.

13.5 Education

Security education is more in-depth than security training and is targeted for security professionals and those whose jobs require expertise in security.

Techniques. Security education is normally outside the scope of most organization awareness and training programs. It is more appropriately a part of employee career development. Security education is obtained through college or graduate classes or through specialized training programs. Because of this, most computer security programs focus primarily on awareness and training, as does the remainder of this chapter. (Unfortunately, college and graduate security courses are not widely available. In addition, such courses may only address general security.)

13.6 Implementation

(This section is based on material prepared by the Department of Energy's Office of Information Management for its unclassified security program.)

An effective computer security awareness and training (CSAT) program requires proper planning, implementation, maintenance, and periodic evaluation. The following seven steps constitute one approach for developing a CSAT program; the approach is presented to familiarize the reader with some of the important implementation issues and is not the only way to implement an awareness and training program.

Step 1: Identify Program Scope, Goals, and Objectives.
Step 2: Identify Training Staff.
Step 3: Identify Target Audiences.
Step 4: Motivate Management and Employees.
Step 5: Administer the Program.
Step 6: Maintain the Program.
Step 7: Evaluate the Program.

The Computer Security Act of 1987 requires federal agencies to "provide for the mandatory periodic training in computer security awareness and accepted computer practices of all employees who are involved with the management, use, or operation of each federal computer system within or under the supervision of that agency." The scope and goals of federal computer security awareness and training programs must implement this broad mandate. (Other federal requirements for computer security training are contained in OMB Circular A-130, Appendix III, and OPM regulations.)

13.6.1 Identify Program Scope, Goals, and Objectives

The first step in developing a CSAT program is to determine the program's scope, goals, and objectives. The scope of the CSAT program should provide training to all types of people who interact with computer systems. The scope of the program can be an entire organization or a subunit. Since users need training that relates directly to their use of particular systems, a large organizationwide program may need to be supplemented by more specific programs. In addition, the organization should specifically address whether the program applies to employees only or also to other users of organizational systems.
Generally, the overall goal of a CSAT program is to sustain an appropriate level of protection for computer resources by increasing employee awareness of their computer security responsibilities and the ways to fulfill them. More specific goals may need to be established. Objectives should be defined to meet the organization's specific goals.

13.6.2 Identify Training Staff

There are many possible candidates for conducting the training, including internal training departments, computer security staff, or contract services. Regardless of who is chosen, it is important that trainers have sufficient knowledge of computer security issues, principles, and techniques. It is also vital that they know how to communicate information and ideas effectively.

The Federal Information Systems Security Educators' Association and the NIST Computer Security Program Managers' Forum provide two means for federal government computer security program managers and training officers to share training ideas and materials.

13.6.3 Identify Target Audiences

Not everyone needs the same degree or type of computer security information to do their jobs. A CSAT program that distinguishes between groups of people, presents only the information needed by the particular audience, and omits irrelevant information will have the best results. Segmenting audiences (e.g., by their function or familiarity with the system) can also improve the effectiveness of a CSAT program. For larger organizations, some individuals will fit into more than one group. For smaller organizations, segmenting may not be needed. The following methods are some examples of ways to do this.

Segment according to level of awareness. Individuals may be separated into groups according to their current level of awareness. This may require research to determine how well employees follow computer security procedures or understand how computer security fits into their jobs.

Segment according to general job task or function. Individuals may be grouped as data providers, data processors, or data users.

Segment according to specific job category. Many organizations assign individuals to job categories. Since each job category generally has different job responsibilities, training for each will be different. Examples of job categories could be general management, technology management, applications development, or security.

Segment according to level of computer knowledge. Computer experts may be expected to find a program containing highly technical information more valuable than one covering the management issues in computer security. Similarly, a computer novice would benefit more from a training program that presents introductory fundamentals.

Segment according to types of technology or systems used. Security techniques used for each off-the-shelf product or application system will usually vary. The users of major applications will normally require training specific to that application.

13.6.4 Motivate Management and Employees

To successfully implement an awareness and training program, it is important to gain the support of management and employees.
Consideration should be given to using motivational techniques to show management and employees how their participation in the CSAT program will benefit the organization.

Employees and managers should be solicited to provide input to the CSAT program. Individuals are more likely to support a program when they have actively participated in its development.

Management. Motivating management normally relies upon increasing awareness. Management needs to be aware of the losses that computer security can reduce and the role of training in computer security. Management commitment is necessary because of the resources used in developing and implementing the program and also because the program affects their staff.

Employees. Motivation of managers alone is not enough. Employees often need to be convinced of the merits of computer security and how it relates to their jobs. Without appropriate training, many employees will not fully comprehend the value of the system resources with which they work.

Some awareness techniques were discussed above. Regardless of the techniques that are used, employees should feel that their cooperation will have a beneficial impact on the organization's future (and, consequently, their own).

13.6.5 Administer the Program

There are several important considerations for administering the CSAT program.

Visibility. The visibility of a CSAT program plays a key role in its success. Efforts to achieve high visibility should begin during the early stages of CSAT program development. However, care should be given not to promise what cannot be delivered.

Training Methods. The methods used in the CSAT program should be consistent with the material presented and tailored to the audience's needs. Some training and awareness methods and techniques are listed above (in the Techniques sections). Computer security awareness and training can be added to existing courses and presentations or taught separately. On-the-job training should also be considered.

Training Topics. There are more topics in computer security than can be taught in any one course. Topics should be selected based on the audience's requirements.

Training Materials. In general, higher-quality training materials are more favorably received but are more expensive. Costs, however, can be minimized since training materials can often be obtained from other organizations. The cost of modifying materials is normally less than that of developing training materials from scratch.

Training Presentation. Consideration should be given to the frequency of training (e.g., annually or as needed), the length of training presentations (e.g., 20 minutes for general presentations, one hour for updates, or one week for an off-site class), and the style of training presentation (e.g., formal presentation, informal discussion, computer-based training, humorous).

13.6.6 Maintain the Program

Computer technology is an ever-changing field. Efforts should be made to keep abreast of changes in computer technology and security requirements. A training program that meets an organization's needs today may become ineffective when the organization starts to use a new application or changes its environment, such as by connecting to the Internet. Likewise, an awareness program can become obsolete if laws or organization policies change. For example, the awareness program should make employees aware of a new policy on e-mail usage. Employees may discount the CSAT program, and by association the importance of computer security, if the program does not provide current information.
13.6.7 Evaluate the Program

It is often difficult to measure the effectiveness of an awareness or training program. Nevertheless, an evaluation should attempt to ascertain how much information is retained, to what extent computer security procedures are being followed, and general attitudes toward computer security. The results of such an evaluation should help identify and correct problems. Some evaluation methods (which can be used in conjunction with one another) are:

Use student evaluations.

Observe how well employees follow recommended security procedures.

Test employees on material covered.

Monitor the number and kind of computer security incidents reported before and after the program is implemented. (The number of incidents will not necessarily go down. For example, virus-related losses may decrease when users know the proper procedures to avoid infection. On the other hand, reports of incidents may go up as users employ virus scanners and find more viruses. In addition, users will now know that virus incidents should be reported and to whom the reports should be sent.)

13.7 Interdependencies

Training can, and in most cases should, be used to support every control in the handbook. All controls are more effective if designers, implementers, and users are thoroughly trained.

Policy. Training is a critical means of informing employees of the contents of and reasons for the organization's policies.

Security Program Management. Federal agencies need to ensure that appropriate computer security awareness and training is provided, as required under the Computer Security Act of 1987. A security program should ensure that an organization is meeting all applicable laws and regulations.

Personnel/User Issues. Awareness, training, and education are often included with other personnel/user issues. Training is often required before access is granted to a computer system.

13.8 Cost Considerations

The major cost considerations in awareness, training, and education programs are:

the cost of preparing and updating materials, including the time of the preparer;

the cost of those providing the instruction;

employee time attending courses and lectures or watching videos; and

the cost of outside courses and consultants (both of which may include travel expenses), including course maintenance.

References

Alexander, M., ed. "Multimedia Means Greater Awareness." Infosecurity News. 4(6), 1993. pp. 90-94.

Burns, G.M. "A Recipe for a Decentralized Security Awareness Program." ISSA Access. Vol. 3, Issue 2, 2nd Quarter 1990. pp. 12-54.

Code of Federal Regulations. 5 CFR 930. Computer Security Training Regulation.

Flanders, D. "Security Awareness - A 70% Solution." Fourth Workshop on Computer Security Incident Handling, August 1992.

Isaacson, G. "Security Awareness: Making It Work." ISSA Access. 3(4), 1990. pp. 22-24.

National Aeronautics and Space Administration. Guidelines for Development of Computer Security Awareness and Training (CSAT) Programs. Washington, DC. NASA Guide 2410.1. March 1990.

Maconachy, V. "Computer Security Education, Training, and Awareness: Turning a Philosophical Orientation Into Practical Reality." Proceedings of the 12th National Computer Security Conference. National Institute of Standards and Technology and National Computer Security Center. Washington, DC.
October 1989.

Maconachy, V. "Panel: Federal Information Systems Security Educators' Association (FISSEA)." Proceedings of the 15th National Computer Security Conference. National Institute of Standards and Technology and National Computer Security Center. Baltimore, MD. October 1992.

Suchinsky, A. "Determining Your Training Needs." Proceedings of the 13th National Computer Security Conference. National Institute of Standards and Technology and National Computer Security Center. Washington, DC. October 1990.

Todd, M.A., and Guitian, C. "Computer Security Training Guidelines." Special Publication 500-172. Gaithersburg, MD: National Institute of Standards and Technology. November 1989.

U.S. Department of Energy. Computer Security Awareness and Training Guideline (Vol. 1). Washington, DC. DOE/MA-0320. February 1988.

Wells, R.O. "Security Awareness for the Non-Believers." ISSA Access. Vol. 3, Issue 2, 2nd Quarter 1990. pp. 10-61.

Chapter 14

SECURITY CONSIDERATIONS IN COMPUTER SUPPORT AND OPERATIONS

Computer support and operations refers to everything done to run a computer system. This includes both system administration and tasks external to the system that support its operation (e.g., maintaining documentation). It does not include system planning or design. The support and operation of any computer system, from a three-person local area network to a worldwide application serving thousands of users, is critical to maintaining the security of a system. Support and operations are routine activities that enable computer systems to function correctly. These include fixing software or hardware problems, loading and maintaining software, and helping users resolve problems.

System management and administration staff generally perform support and operations tasks, although sometimes users do. Larger systems may have full-time operators, system programmers, and support staff performing these tasks. Smaller systems may have a part-time administrator.

The primary goal of computer support and operations is the continued and correct operation of a computer system. One of the goals of computer security is the availability and integrity of systems. These goals are very closely linked.

The failure to consider security as part of the support and operations of computer systems is, for many organizations, their Achilles heel. The computer security literature includes many examples of how organizations undermined their often expensive security measures because of poor documentation, old user accounts, conflicting software, or poor control of maintenance accounts. Also, an organization's policies and procedures often fail to address many of these important issues.

The important security considerations within some of the major categories of support and operations are:

user support,
software support,
configuration management,
backups,
media controls,
documentation, and
maintenance.
Some special considerations are noted for larger or smaller systems. (In general, larger systems include mainframes, large minicomputers, and WANs. Smaller systems include PCs and LANs.)

This chapter addresses the support and operations activities directly related to security. Every control discussed in this handbook relies, in one way or another, on computer system support and operations. This chapter, however, focuses on areas not covered in other chapters. For example, operations personnel normally create user accounts on the system. This topic is covered in the Identification and Authentication chapter, so it is not discussed here. Similarly, the input from support and operations staff to the security awareness and training program is covered in the Security Awareness, Training, and Education chapter.

14.1 User Support

In many organizations, user support takes place through a Help Desk. Help Desks can support an entire organization, a subunit, a specific system, or a combination of these. For smaller systems, the system administrator normally provides direct user support. Experienced users provide informal user support on most systems.

User support should be closely linked to the organization's incident handling capability. In many cases, the same personnel perform these functions.

An important security consideration for user support personnel is being able to recognize which problems (brought to their attention by users) are security-related. For example, users' inability to log onto a computer system may result from the disabling of their accounts due to too many failed access attempts. This could indicate the presence of hackers trying to guess users' passwords.

Small systems are especially susceptible to viruses, while networks are particularly susceptible to hacker attacks, which can be targeted at multiple systems. System support personnel should be able to recognize attacks and know how to respond.

In general, system support and operations staff need to be able to identify security problems, respond appropriately, and inform appropriate individuals. A wide range of possible security problems exist. Some will be internal to custom applications, while others apply to off-the-shelf products. Additionally, problems can be software- or hardware-based.

The more responsive and knowledgeable system support and operations personnel are, the less user support will be provided informally. The support other users provide is important, but they may not be aware of the "whole picture."
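As an illustration of the failed log-on pattern described above, the following minimal sketch counts failures per user in a hypothetical audit log whose lines carry a time, a user name, and a SUCCESS or FAILURE outcome. Real systems record this information in widely varying formats, so the log layout and the threshold here are assumptions, not handbook requirements.

    from collections import Counter

    THRESHOLD = 5  # failed attempts that should prompt a closer look

    def flag_possible_guessing(log_lines, threshold=THRESHOLD):
        """Return users whose failed log-on count reaches the threshold."""
        failures = Counter()
        for line in log_lines:
            _time, user, outcome = line.split()
            if outcome == "FAILURE":
                failures[user] += 1
        return sorted(u for u, n in failures.items() if n >= threshold)

    sample = [
        "09:01:02 jdoe FAILURE",
        "09:01:09 jdoe FAILURE",
        "09:01:15 jdoe FAILURE",
        "09:01:21 jdoe FAILURE",
        "09:01:30 jdoe FAILURE",
        "09:02:00 asmith SUCCESS",
    ]
    print(flag_possible_guessing(sample))  # ['jdoe']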
14.2 Software Support

Software is the heart of an organization's computer operations, whatever the size and complexity of the system. Therefore, it is essential that software function correctly and be protected from corruption. There are many elements of software support.

One is controlling what software is used on a system. If users or systems personnel can load and execute any software on a system, the system is more vulnerable to viruses, to unexpected software interactions, and to software that may subvert or bypass security controls. One method of controlling software is to inspect or test software before it is loaded (e.g., to determine compatibility with custom applications or identify other unforeseen interactions). This can apply to new software packages, to upgrades, to off-the-shelf products, or to custom software, as deemed appropriate. In addition to controlling the loading and execution of new software, organizations should also give care to the configuration and use of powerful system utilities. System utilities can compromise the integrity of operating systems and logical access controls.

Viruses take advantage of the weak software controls in personal computers. Also, there are powerful utilities available for PCs that can restore deleted files, find hidden files, and interface directly with PC hardware, bypassing the operating system. Some organizations use personal computers without floppy drives in order to have better control over the system.

A second element in software support can be to ensure that software has not been modified without proper authorization. This involves the protection of software and backup copies and can be done with a combination of logical and physical access controls. (One simple way to detect unauthorized modification is sketched at the end of this section.)

Many organizations also include a program to ensure that software is properly licensed, as required. For example, an organization may audit systems for illegal copies of copyrighted software. This problem is primarily associated with PCs and LANs, but can apply to any type of system.

There are several widely available utilities that look for security problems in both networks and the systems attached to them. Some utilities look for and try to exploit security vulnerabilities. (This type of software is further discussed in Chapter 9.)

14.3 Configuration Management

Closely related to software support is configuration management: the process of keeping track of changes to the system and, if needed, approving them. (This chapter only addresses configuration management during the operational phase; configuration management can have extremely important security consequences during the development phase of a system.) Configuration management normally addresses hardware, software, networking, and other changes; it can be formal or informal. The primary security goal of configuration management is ensuring that changes to the system do not unintentionally or unknowingly diminish security. Some of the methods discussed under software support, such as inspecting and testing software changes, can be used. Chapter 9 discusses other methods.

For networked systems, configuration management should include external connections. Is the computer system connected? To what other systems? In turn, to what systems are these systems and organizations connected?

Note that the security goal is to know what changes occur, not to prevent security from being changed. There may be circumstances when security will be reduced. However, the decrease in security should be the result of a decision based on all appropriate factors.

A second security goal of configuration management is ensuring that changes to the system are reflected in other documentation, such as the contingency plan. If the change is major, it may be necessary to reanalyze some or all of the security of the system. This is discussed in Chapter 8.
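One concrete way to support the goals above, detecting unauthorized software modification and knowing when a configuration has changed, is to keep a baseline of cryptographic hashes of the authorized program files and compare current hashes against it periodically. This is a minimal sketch, not the handbook's prescription; the file paths and baseline format are assumptions.

    import hashlib
    from pathlib import Path

    def file_digest(path):
        """SHA-256 digest of a file's current contents."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def build_baseline(paths):
        """Record digests of the authorized version of each file."""
        return {str(p): file_digest(p) for p in paths}

    def changed_files(baseline):
        """Return files whose current digest differs from the baseline."""
        return [p for p, digest in baseline.items()
                if file_digest(p) != digest]

The baseline itself must be protected (for example, stored offline), since an intruder who can rewrite the baseline can hide a modification.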
14.4 Backups

Support and operations personnel, and sometimes users, back up software and data. This function is critical to contingency planning. Frequency of backups will depend upon how often data changes and how important those changes are. Program managers should be consulted to determine what backup schedule is appropriate. Also, as a safety measure, it is useful to test that backup copies are actually usable. Finally, backups should be stored securely, as appropriate (discussed below).

Users of smaller systems are often responsible for their own backups. However, in reality they do not always perform backups regularly. Some organizations, therefore, task support personnel with making backups periodically for smaller systems, either automatically (through server software) or manually (by visiting each machine).

14.5 Media Controls

Media controls include a variety of measures to provide physical and environmental protection and accountability for tapes, diskettes, printouts, and other media. From a security perspective, media controls should be designed to prevent the loss of confidentiality, integrity, or availability of information, including data or software, when stored outside the system. This can include storage of information before it is input to the system and after it is output.

The extent of media control depends upon many factors, including the type of data, the quantity of media, and the nature of the user environment. Physical and environmental protection is used to prevent unauthorized individuals from accessing the media. It also protects against such factors as heat, cold, or harmful magnetic fields. When necessary, logging the use of individual media (e.g., a tape cartridge) provides detailed accountability to hold authorized people responsible for their actions.

14.5.1 Marking

Controlling media may require some form of physical labeling. The labels can be used to identify media with special handling instructions, to locate needed information, or to log media (e.g., with serial/control numbers or bar codes) to support accountability. Identification is often by colored labels on diskettes or tapes or banner pages on printouts.

If labeling is used for special handling instructions, it is critical that people be appropriately trained. The marking of PC input and output is generally the responsibility of the user, not the system support staff. Marking backup diskettes can help prevent them from being accidentally overwritten.

Typical markings for media could include Privacy Act Information, Company Proprietary, or Joe's Backup Tape. In each case, the individuals handling the media must know the applicable handling instructions. For example, at the Acme Patent Research Firm, proprietary information may not leave the building except under the care of a security officer. Also, Joe's Backup Tape should be easy to find in case something happens to Joe's system.

14.5.2 Logging

The logging of media is used to support accountability. Logs can include control numbers (or other tracking data), the times and dates of transfers, names and signatures of individuals involved, and other relevant information. Periodic spot checks or audits may be conducted to determine that no controlled items have been lost and that all are in the custody of individuals named in control logs. Automated media tracking systems may be helpful for maintaining inventories of tape and disk libraries.

14.5.3 Integrity Verification

When electronically stored information is read into a computer system, it may be necessary to determine whether it has been read correctly or subject to any modification. The integrity of electronic information can be verified using error detection and correction or, if intentional modifications are a threat, cryptographic-based technologies. (See Chapter 19.)
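A minimal sketch of the distinction just drawn, using Python's standard library: a simple checksum (here CRC-32) catches accidental corruption but can be recomputed by anyone who modifies the data, while a keyed cryptographic code (an HMAC) also detects intentional modification so long as the key is kept secret. The key and record shown are placeholders.

    import hashlib
    import hmac
    import zlib

    KEY = b"placeholder-key-kept-somewhere-safe"  # illustrative only

    def protect(data):
        """Compute both integrity values when data is written to media."""
        return (zlib.crc32(data),
                hmac.new(KEY, data, hashlib.sha256).hexdigest())

    def verify(data, crc_expected, tag_expected):
        """Re-check both values when the data is read back in."""
        crc_ok = zlib.crc32(data) == crc_expected
        tag_now = hmac.new(KEY, data, hashlib.sha256).hexdigest()
        return crc_ok and hmac.compare_digest(tag_now, tag_expected)

    record = b"example payroll record"
    crc, tag = protect(record)
    assert verify(record, crc, tag)
    assert not verify(b"tampered record", crc, tag)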
14.5.4 Physical Access Protection

Media can be stolen, destroyed, replaced with a look-alike copy, or lost. Physical access controls, which can limit these problems, include locked doors, desks, file cabinets, or safes.

If the media requires protection at all times, it may be necessary to actually output data to the media in a secure location (e.g., printing to a printer in a locked room instead of to a general-purpose printer in a common area).

Physical protection of media should be extended to backup copies stored offsite. They generally should be accorded an equivalent level of protection to media containing the same information stored onsite. (Equivalent protection does not mean that the security measures need to be exactly the same. The controls at the off-site location are quite likely to be different from the controls at the regular site.) Physical access is discussed in Chapter 15.

14.5.5 Environmental Protection

Magnetic media, such as diskettes or magnetic tape, require environmental protection, since they are sensitive to temperature, liquids, magnetism, smoke, and dust. Other media (e.g., paper and optical storage) may have different sensitivities to environmental factors.

14.5.6 Transmittal

Media may be transferred both within the organization and to outside elements. Possibilities for securing such transmittal include sealed and marked envelopes, authorized messenger or courier, or U.S. certified or registered mail.

14.5.7 Disposition

When media is disposed of, it may be important to ensure that information is not improperly disclosed. This applies both to media that is external to a computer system (such as a diskette) and to media inside a computer system, such as a hard disk. The process of removing information from media is called sanitization.

Many people throw away old diskettes, believing that erasing the files on the diskette has made the data unretrievable. In reality, however, erasing a file simply removes the pointer to that file. The pointer tells the computer where the file is physically stored. Without this pointer, the files will not appear on a directory listing. This does not mean that the file was removed. Commonly available utility programs can often retrieve information that is presumed deleted.

Three techniques are commonly used for media sanitization: overwriting, degaussing, and destruction. Overwriting is an effective method for clearing data from magnetic media. As the name implies, overwriting uses a program to write (1s, 0s, or a combination) onto the media. Common practice is to overwrite the media three times. Overwriting should not be confused with merely deleting the pointer to a file (which typically happens when a delete command is used). Overwriting requires that the media be in working order. Degaussing is a method to magnetically erase data from magnetic media. Two types of degaussers exist: strong permanent magnets and electric degaussers. The final method of sanitization is destruction of the media by shredding or burning.
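The overwriting technique can be illustrated in a few lines. This is a simplified sketch of the three-pass practice mentioned above, not a sanitization standard: it assumes working magnetic media written in place, and it would not suffice where the operating system or hardware relocates data behind the scenes.

    import os

    PATTERNS = [b"\x00", b"\xff", b"\x55"]  # zeros, ones, alternating bits

    def overwrite_and_delete(path, passes=3):
        """Overwrite a file's contents in place, then delete it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for i in range(passes):
                f.seek(0)
                f.write(PATTERNS[i % len(PATTERNS)] * size)
                f.flush()
                os.fsync(f.fileno())  # force each pass out to the media
        os.remove(path)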
14.6 Documentation

Documentation of all aspects of computer support and operations is important to ensure continuity and consistency. Formalizing operational practices and procedures with sufficient detail helps to eliminate security lapses and oversights, gives new personnel sufficiently detailed instructions, and provides a quality assurance function to help ensure that operations will be performed correctly and efficiently.

The security of a system also needs to be documented. This includes many types of documentation, such as security plans, contingency plans, risk analyses, and security policies and procedures. Much of this information, particularly risk and threat analyses, has to be protected against unauthorized disclosure. Security documentation also needs to be both current and accessible. Accessibility should take special factors into account (such as the need to find the contingency plan during a disaster).

Security documentation should be designed to fulfill the needs of the different types of people who use it. For this reason, many organizations separate documentation into policy and procedures. A security procedures manual should be written to inform various system users how to do their jobs securely. A security procedures manual for systems operations and support staff may address a wide variety of technical and operational concerns in considerable detail.

14.7 Maintenance

System maintenance requires either physical or logical access to the system. Support and operations staff, hardware or software vendors, or third-party service providers may maintain a system. Maintenance may be performed on site, or it may be necessary to move equipment to a repair site. Maintenance may also be performed remotely via communications connections. If someone who does not normally have access to the system performs maintenance, then a security vulnerability is introduced.

In some circumstances, it may be necessary to take additional precautions, such as conducting background investigations of service personnel. Supervision of maintenance personnel may prevent some problems, such as "snooping around" the physical area. However, once someone has access to the system, it is very difficult for supervision to prevent damage done through the maintenance process.

One of the most common methods hackers use to break into systems is through maintenance accounts that still have factory-set or easily guessed passwords.

Many computer systems provide maintenance accounts. These special log-in accounts are normally preconfigured at the factory with pre-set, widely known passwords. It is critical to change these passwords or otherwise disable the accounts until they are needed. Procedures should be developed to ensure that only authorized maintenance personnel can use these accounts. If the account is to be used remotely, authentication of the maintenance provider can be performed using call-back confirmation. This helps ensure that remote diagnostic activities actually originate from an established phone number at the vendor's site. Other techniques can also help, including encryption and decryption of diagnostic communications; strong identification and authentication techniques, such as tokens; and remote disconnect verification.
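An audit for the factory-set password problem can be as simple as trying a short list of known defaults against each maintenance account. In the sketch below, the account names and passwords are illustrative placeholders, and try_login stands in for whatever authentication interface a site actually exposes; none of it is drawn from the handbook.

    FACTORY_DEFAULTS = {  # illustrative account/password pairs
        "field":   ["field", "service"],
        "maint":   ["maint", "password"],
        "sysdiag": ["sysdiag"],
    }

    def audit_maintenance_accounts(try_login, defaults=FACTORY_DEFAULTS):
        """try_login(account, password) should return True on success."""
        findings = []
        for account, passwords in defaults.items():
            for password in passwords:
                if try_login(account, password):
                    findings.append(account)  # still has a known default
                    break
        return findings

Any account the audit reports should have its password changed or be disabled until it is actually needed.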
Larger systems may have diagnostic ports. In addition, manufacturers of larger systems and third-party providers may offer more diagnostic and support services. It is critical to ensure that these ports are only used by authorized personnel and cannot be accessed by hackers.

14.8 Interdependencies

There are support and operations components in most of the controls discussed in this handbook.

Personnel. Most support and operations staff have special access to the system. Some organizations conduct background checks on individuals filling these positions to screen out possibly untrustworthy individuals.

Incident Handling. Support and operations may include an organization's incident handling staff. Even if they are separate organizations, they need to work together to recognize and respond to incidents.

Contingency Planning. Support and operations normally provides technical input to contingency planning and carries out the activities of making backups, updating documentation, and practicing responding to contingencies.

Security Awareness, Training, and Education. Support and operations staff should be trained in security procedures and should be aware of the importance of security. In addition, they provide technical expertise needed to teach users how to secure their systems.

Physical and Environmental. Support and operations staff often control the immediate physical area around the computer system.

Technical Controls. The technical controls are installed, maintained, and used by support and operations staff. They create the user accounts, add users to access control lists, review audit logs for unusual activity, control bulk encryption over telecommunications links, and perform the countless operational tasks needed to use technical controls effectively. In addition, support and operations staff provide needed input to the selection of controls based on their knowledge of system capabilities and operational constraints.

Assurance. Support and operations staff ensure that changes to a system do not introduce security vulnerabilities by using assurance methods to evaluate or test the changes and their effect on the system. Operational assurance is normally performed by support and operations staff.

14.9 Cost Considerations

The cost of ensuring adequate security in day-to-day support and operations is largely dependent upon the size and characteristics of the operating environment and the nature of the processing being performed. If sufficient support personnel are already available, it is important that they be trained in the security aspects of their assigned jobs; it is usually not necessary to hire additional support and operations security specialists. Training, both initial and ongoing, is a cost of successfully incorporating security measures into support and operations activities.

Another cost is that associated with creating and updating documentation to ensure that security concerns are appropriately reflected in support and operations policies, procedures, and duties.

References

Bicknell, Paul. "Data Security for Personal Computers." Proceedings of the 15th National Computer Security Conference. Vol. I. National Institute of Standards and Technology and National Computer Security Center. Baltimore, MD. October 1992.

Caelli, William, Dennis Longley, and Michael Shain. Information Security Handbook. New York, NY: Stockton Press, 1991.
\"A Local Area Network Security Architecture.\" Proceedings of the 15th\nNational Computer Security Conference. Vol. I. National Institute of Standards and Technology\nand National Computer Security Center. Baltimore, MD. 1992.\nCarroll, J.M. Managing Risk: A Computer-Aided Strategy. Boston, MA: Butterworths, 1984.\nChapman, D. Brent. \"Network (In)Security Through IP Packet Filtering.\" Proceedings of the 3rd\nUSENIX UNIX Security Symposium, 1992.\nCurry, David A. UNIX System Security: A Guide for Users and System Administrators. Reading,\nMA: Addison-Wesley Publishing Co., Inc., 1992.\nGarfinkel, Simson, and Gene Spafford. Practical UNIX Security. Sebastopol, CA: O'Reilly &\nAssociates, 1991.\nHolbrook, Paul, and Joyce Reynolds, eds. Site Security Handbook. Available by anonymous ftp\n" }, { "page_number": 178, "text": "III. Operational Controls\n166\nfrom nic.ddn.mil (in rfc directory).\nInternet Security for System & Network Administrators. Computer Emergency Response Team\nSecurity Seminars, CERT Coordination Center, 1993.\nMurray, W.H. \"Security Considerations for Personal Computers.\" Tutorial: Computer and\nNetwork Security. Oakland, CA: IEEE Computer Society Press, 1986.\nParker, Donna B. Managers Guide to Computer Security. Reston, VA: Reston Publishing, Inc.,\n1981.\nPfleeger, Charles P. Security in Computing. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1989.\n" }, { "page_number": 179, "text": " \n This chapter draws upon work by Robert V. Jacobson, International Security Technology, Inc., funded by\n103\nthe Tennessee Valley Authority.\n167\nPhysical and environmental security controls are\nimplemented to protect the facility housing system\nresources, the system resources themselves, and\nthe facilities used to support their operation.\nChapter 15\nPHYSICAL AND ENVIRONMENTAL SECURITY\nThe term physical and environmental\nsecurity, as used in this chapter, refers to\nmeasures taken to protect systems, buildings,\nand related supporting infrastructure against\nthreats associated with their physical\nenvironment.\n Physical and environmental\n103\nsecurity controls include the following three\nbroad areas:\n1. The physical facility is usually the building, other structure, or vehicle housing the system\nand network components. Systems can be characterized, based upon their operating\nlocation, as static, mobile, or portable. Static systems are installed in structures at fixed\nlocations. Mobile systems are installed in vehicles that perform the function of a structure,\nbut not at a fixed location. Portable systems are not installed in fixed operating locations. \nThey may be operated in wide variety of locations, including buildings or vehicles, or in the\nopen. The physical characteristics of these structures and vehicles determine the level of\nsuch physical threats as fire, roof leaks, or unauthorized access.\n2. The facility's general geographic operating location determines the characteristics of\nnatural threats, which include earthquakes and flooding; man-made threats such as burglary,\ncivil disorders, or interception of transmissions and emanations; and damaging nearby\nactivities, including toxic chemical spills, explosions, fires, and electromagnetic interference\nfrom emitters, such as radars.\n3. Supporting facilities are those services (both technical and human) that underpin the\noperation of the system. The system's operation usually depends on supporting facilities such\nas electric power, heating and air conditioning, and telecommunications. 
The failure or substandard performance of these facilities may interrupt operation of the system and may cause physical damage to system hardware or stored data.

This chapter first discusses the benefits of physical security measures and then presents an overview of common physical and environmental security controls. Physical and environmental security measures result in many benefits, such as protecting employees. This chapter focuses on the protection of computer systems from the following:

Interruptions in Providing Computer Services. An external threat may interrupt the scheduled operation of a system. The magnitude of the losses depends on the duration and timing of the service interruption and the characteristics of the operations end users perform.

Physical Damage. If a system's hardware is damaged or destroyed, it usually has to be repaired or replaced. Data may be destroyed as an act of sabotage by a physical attack on data storage media (e.g., rendering the data unreadable or only partly readable). If data stored by a system for operational use is destroyed or corrupted, the data needs to be restored from back-up copies or from the original sources before the system can be used. The magnitude of loss from physical damage depends on the cost to repair or replace the damaged hardware and data, as well as costs arising from service interruptions.

Unauthorized Disclosure of Information. The physical characteristics of the facility housing a system may permit an intruder to gain access both to media external to system hardware (such as diskettes, tapes, and printouts) and to media within system components (such as fixed disks), transmission lines, or display screens. All may result in loss of disclosure-sensitive information.

Loss of Control over System Integrity. If an intruder gains access to the central processing unit, it is usually possible to reboot the system and bypass logical access controls. This can lead to information disclosure, fraud, replacement of system and application software, introduction of a Trojan horse, and more. Moreover, if such access is gained, it may be very difficult to determine what has been modified, lost, or corrupted.

Physical Theft. System hardware may be stolen. The magnitude of the loss is determined by the costs to replace the stolen hardware and restore data stored on stolen media. Theft may also result in service interruptions.

This chapter discusses seven major areas of physical and environmental security controls:

physical access controls,
fire safety,
supporting utilities,
structural collapse,
plumbing leaks,
interception of data, and
mobile and portable systems.

Life Safety

It is important to understand that the objectives of physical access controls may be in conflict with those of life safety. Simply stated, life safety focuses on providing easy exit from a facility, particularly in an emergency, while physical security strives to control entry. In general, life safety must be given first consideration, but it is usually possible to achieve an effective balance between the two goals.

For example, it is often possible to equip emergency exit doors with a time delay. When one pushes on the panic bar, a loud alarm sounds, and the door is released after a brief delay.
The expectation is that people will be deterred from using such exits improperly, but will not be significantly endangered during an emergency evacuation.

15.1 Physical Access Controls

Physical access controls restrict the entry and exit of personnel (and often equipment and media) from an area, such as an office building, suite, data center, or room containing a LAN server.

There are many types of physical access controls, including badges, memory cards, guards, keys, true-floor-to-true-ceiling wall construction, fences, and locks.

The controls over physical access to the elements of a system can include controlled areas, barriers that isolate each area, entry points in the barriers, and screening measures at each of the entry points. In addition, staff members who work in a restricted area serve an important role in providing physical security, as they can be trained to challenge people they do not recognize.

Physical access controls should address not only the area containing system hardware, but also locations of wiring used to connect elements of the system, the electric power service, the air conditioning and heating plant, telephone and data lines, backup media and source documents, and any other elements required for the system's operation. This means that all the areas in the building(s) that contain system elements must be identified.

It is also important to review the effectiveness of physical access controls in each area, both during normal business hours and at other times, particularly when an area may be unoccupied. Effectiveness depends on both the characteristics of the control devices used (e.g., keycard-controlled doors) and their implementation and operation. Statements to the effect that "only authorized persons may enter this area" are not particularly effective. Organizations should determine whether intruders can easily defeat the controls, the extent to which strangers are challenged, and the effectiveness of other control procedures. Factors like these modify the effectiveness of physical controls.

The feasibility of surreptitious entry also needs to be considered. For example, it may be possible to go over the top of a partition that stops at the underside of a suspended ceiling or to cut a hole in a plasterboard partition in a location hidden by furniture.
If a door is controlled by a combination lock, it may be possible to observe an authorized person entering the lock combination. If keycards are not carefully controlled, an intruder may be able to steal a card left on a desk or use a card passed back by an accomplice.

Corrective actions can address any of the factors listed above. Adding an additional barrier reduces the risk to the areas behind the barrier. Enhancing the screening at an entry point can reduce the number of penetrations. For example, a guard may provide a higher level of screening than a keycard-controlled door, or an anti-passback feature can be added. Reorganizing traffic patterns, work flow, and work areas may reduce the number of people who need access to a restricted area. Physical modifications to barriers can reduce the vulnerability to surreptitious entry. Intrusion detectors, such as closed-circuit television cameras, motion detectors, and other devices, can detect intruders in unoccupied spaces.

15.2 Fire Safety Factors

Building fires are a particularly important security threat because of the potential for complete destruction of both hardware and data, the risk to human life, and the pervasiveness of the damage. Smoke, corrosive gases, and high humidity from a localized fire can damage systems throughout an entire building. Consequently, it is important to evaluate the fire safety of buildings that house systems.

Types of Building Construction

There are four basic kinds of building construction: (a) light frame, (b) heavy timber, (c) incombustible, and (d) fire resistant. Note that the term fireproof is not used because no structure can resist a fire indefinitely. Most houses are light frame and cannot survive more than about thirty minutes in a fire. Heavy timber means that the basic structural elements have a minimum thickness of four inches; when such structures burn, the char that forms tends to insulate the interior of the timber, and the structure may survive for an hour or more depending on the details. Incombustible means that the structural members will not burn. This almost always means that the members are steel. Note, however, that steel loses its strength at high temperatures, at which point the structure collapses. Fire resistant means that the structural members are incombustible and are insulated. Typically, the insulation is either concrete that encases steel members or a mineral wool that is sprayed onto the members. Of course, the heavier the insulation, the longer the structure will resist a fire.

Note that a building constructed of reinforced concrete can still be destroyed in a fire if there is sufficient fuel present and fire fighting is ineffective. The prolonged heat of a fire can cause differential expansion of the concrete, which causes spalling. Portions of the concrete split off, exposing the reinforcing, and the interior of the concrete is subject to additional spalling. Furthermore, as heated floor slabs expand outward, they deform supporting columns. Thus, a reinforced concrete parking garage with open exterior walls and a relatively low fire load has a low fire risk, but a similar archival record storage facility with closed exterior walls and a high fire load has a higher risk even though the basic building material is incombustible.

Following are important factors in determining the risks from fire.

Ignition Sources. Fires begin because something supplies enough heat to cause other materials to burn. Typical ignition sources are failures of electric devices and wiring, carelessly discarded cigarettes, improper storage of materials subject to spontaneous combustion, improper operation of heating devices, and, of course, arson.
Fuel Sources. If a fire is to grow, it must have a supply of fuel (material that will burn to support its growth) and an adequate supply of oxygen. Once a fire becomes established, it depends on the combustible materials in the building (referred to as the fire load) to support its further growth. The more fuel per square meter, the more intense the fire will be.

Building Operation. If a building is well maintained and operated so as to minimize the accumulation of fuel (such as maintaining the integrity of fire barriers), the fire risk will be minimized.

Building Occupancy. Some occupancies are inherently more dangerous than others because of an above-average number of potential ignition sources. For example, a chemical warehouse may contain an above-average fuel load.

Fire Detection. The more quickly a fire is detected, all other things being equal, the more easily it can be extinguished, minimizing damage. It is also important to accurately pinpoint the location of the fire.

Fire Extinguishment. A fire will burn until it consumes all of the fuel in the building or until it is extinguished. Fire extinguishment may be automatic, as with an automatic sprinkler system or a halon discharge system, or it may be performed by people using portable extinguishers, cooling the fire site with a stream of water, limiting the supply of oxygen with a blanket of foam or powder, or breaking the combustion chemical reaction chain.

Halons have been identified as harmful to the Earth's protective ozone layer. So, under an international agreement (known as the Montreal Protocol), production of halons ended January 1, 1994. In September 1992, the General Services Administration issued a moratorium on halon use by federal agencies.

When properly installed, maintained, and provided with an adequate supply of water, automatic sprinkler systems are highly effective in protecting buildings and their contents. (As discussed in this section, many variables affect fire safety and should be taken into account in selecting a fire extinguishment system. While automatic sprinklers can be very effective, selection of a fire extinguishment system for a particular building should take into account the particular fire risk factors. Other factors may include rate changes from either a fire insurance carrier or a business interruption insurance carrier. Professional advice is required.) Nonetheless, one often hears uninformed persons speak of the water damage done by sprinkler systems as a disadvantage. In fact, it is the fires that trigger sprinkler systems that cause the water damage; occurrences of accidental discharge are extremely rare, and, in a fire, only the sprinkler heads in the immediate area of the fire open and discharge water. In short, sprinkler systems reduce fire damage, protect the lives of building occupants, and limit the fire damage to the building itself. All these factors contribute to more rapid recovery of systems following a fire.

Each of these factors is important when estimating the occurrence rate of fires and the amount of damage that will result. The objective of a fire-safety program is to optimize these factors to minimize the risk of fire.

15.3 Failure of Supporting Utilities

Systems and the people who operate them need to have a reasonably well-controlled operating environment. Consequently, failures of heating and air-conditioning systems will usually cause a service interruption and may damage hardware. These utilities are composed of many elements, each of which must function properly.

For example, the typical air-conditioning system consists of (1) air handlers that cool and humidify room air, (2) circulating pumps that send chilled water to the air handlers, (3) chillers that extract heat from the water, and (4) cooling towers that discharge the heat to the outside air. Each of these elements has a mean-time-between-failures (MTBF) and a mean-time-to-repair (MTTR). Using the MTBF and MTTR values for each of the elements of a system, one can estimate the occurrence rate of system failures and the range of resulting service interruptions.

This same line of reasoning applies to electric power distribution, heating plants, water, sewage, and other utilities required for system operation or staff comfort. By identifying the failure modes of each utility and estimating the MTBF and MTTR, necessary failure threat parameters can be developed to calculate the resulting risk. The risk of utility failure can be reduced by substituting units with higher MTBF values. MTTR can be reduced by stocking spare parts on site and training maintenance personnel. And the outages resulting from a given MTBF can be reduced by installing redundant units under the assumption that failures are distributed randomly in time. Each of these strategies can be evaluated by comparing the reduction in risk with the cost to achieve it.
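A worked example of this reasoning, with invented MTBF and MTTR figures (in hours) for the four air-conditioning elements: since the elements operate in series, the system fails when any one of them fails, so the individual failure rates add, and expected annual downtime follows from the combined rate and a rate-weighted average repair time.

    HOURS_PER_YEAR = 8760

    elements = {  # name: (MTBF, MTTR) in hours; illustrative values only
        "air handler":      (10000, 8),
        "circulating pump": (20000, 4),
        "chiller":          (15000, 24),
        "cooling tower":    (30000, 12),
    }

    rate = sum(1.0 / mtbf for mtbf, _ in elements.values())  # failures/hour
    failures_per_year = rate * HOURS_PER_YEAR
    # Average repair time, weighted by how often each element fails.
    avg_repair = sum(mttr / mtbf for mtbf, mttr in elements.values()) / rate
    downtime_hours = failures_per_year * avg_repair

    print(f"{failures_per_year:.1f} failures/year, "
          f"about {downtime_hours:.0f} hours of outage per year")
    # With these figures: about 2.2 failures and 26 hours of outage a year.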
15.4 Structural Collapse

A building may be subjected to a load greater than it can support. Most commonly this is a result of an earthquake, a snow load on the roof beyond design criteria, an explosion that displaces or cuts structural members, or a fire that weakens structural members. Even if the structure is not completely demolished, the authorities may decide to ban its further use, sometimes even banning entry to remove materials. This threat applies primarily to high-rise buildings and those with large interior spaces without supporting columns.

15.5 Plumbing Leaks

While plumbing leaks do not occur every day, they can be seriously disruptive. The building's plumbing drawings can help locate plumbing lines that might endanger system hardware. These lines include hot and cold water, chilled water supply and return lines, steam lines, automatic sprinkler lines, fire hose standpipes, and drains. If a building includes a laboratory or manufacturing spaces, there may be other lines that conduct water, corrosive or toxic chemicals, or gases.

As a rule, analysis often shows that the cost to relocate threatening lines is difficult to justify. However, the location of shutoff valves and the procedures that should be followed in the event of a failure must be specified. Operating and security personnel should have this information immediately available for use in an emergency. In some cases, it may be possible to relocate system hardware, particularly distributed LAN hardware.

15.6 Interception of Data

Depending on the type of data a system processes, there may be a significant risk if the data is intercepted. There are three routes of data interception: direct observation, interception of data transmission, and electromagnetic interception.

Direct Observation. System terminal and workstation display screens may be observed by unauthorized persons. In most cases, it is relatively easy to relocate the display to eliminate the exposure.

Interception of Data Transmissions. If an interceptor can gain access to data transmission lines, it may be feasible to tap into the lines and read the data being transmitted.
Network monitoring tools can be used to capture data packets. Of course, the interceptor cannot control what is transmitted, and so may not be able to immediately observe data of interest. However, over a period of time there may be a serious level of disclosure. Local area networks typically broadcast messages; consequently, all traffic, including passwords, could be retrieved. Interceptors could also transmit spurious data on tapped lines, either for purposes of disruption or for fraud.

Electromagnetic Interception. Systems routinely radiate electromagnetic energy that can be detected with special-purpose radio receivers. Successful interception will depend on the signal strength at the receiver location; the greater the separation between the system and the receiver, the lower the success rate. TEMPEST shielding, of either equipment or rooms, can be used to minimize the spread of electromagnetic signals. The signal-to-noise ratio at the receiver, determined in part by the number of competing emitters, will also affect the success rate. The more workstations of the same type in the same location performing "random" activity, the more difficult it is to intercept a given workstation's radiation. On the other hand, the trend toward wireless (i.e., deliberate radiation) LAN connections may increase the likelihood of successful interception.

15.7 Mobile and Portable Systems

The analysis and management of risk usually has to be modified if a system is installed in a vehicle or is portable, such as a laptop computer. The system in a vehicle will share the risks of the vehicle, including accidents and theft, as well as regional and local risks.

Portable and mobile systems share an increased risk of theft and physical damage. In addition, portable systems can be "misplaced" or left unattended by careless users. Secure storage of laptop computers is often required when they are not in use.

If a mobile or portable system uses particularly valuable or important data, it may be appropriate either to store its data on a medium that can be removed from the system when it is unattended or to encrypt the data. Encryption of data files on stored media may also be a cost-effective precaution against disclosure of confidential information if a laptop computer is lost or stolen. In any case, the issue of how custody of mobile and portable computers is to be controlled should be addressed. Depending on the sensitivity of the system and its application, it may be appropriate to require briefings of users and signed briefing acknowledgments. (See Chapter 10 for an example.)

15.8 Approach to Implementation

Like other security measures, physical and environmental security controls are selected because they are cost-beneficial. This does not mean that a user must conduct a detailed cost-benefit analysis for the selection of every control. There are four general ways to justify the selection of controls:

1. They are required by law or regulation. Fire exit doors with panic bars and exit lights are examples of security measures required by law or regulation. Presumably, the regulatory authority has considered the costs and benefits and has determined that it is in the public interest to require the security measure. A lawfully conducted organization has no option but to implement all required security measures.

2. The cost is insignificant, but the benefit is material.
A good example of this is a facility with a key-locked, low-traffic door to a restricted access area. The cost of keeping the door locked is minimal, but there is a significant benefit. Once a significant-benefit/minimal-cost security measure has been identified, no further analysis is required to justify its implementation.

3. The security measure addresses a potentially "fatal" security exposure but has a reasonable cost. Backing up system software and data is an example of this justification. For most systems, the cost of making regular backup copies is modest (compared to the costs of operating the system), the organization would not be able to function if the stored data were lost, and the cost impact of the failure would be material. In such cases, it would not be necessary to develop any further cost justification for the backup of software and data. However, this justification depends on what constitutes a modest cost, and it does not identify the optimum backup schedule. Broadly speaking, a cost that does not require budgeting of additional funds would qualify.

4. The security measure is estimated to be cost-beneficial. If the cost of a potential security measure is significant, and it cannot be justified by any of the first three reasons listed above, then its cost (both implementation and ongoing operation) and its benefit (reduction in future expected losses) need to be analyzed to determine if it is cost-beneficial. In this context, cost-beneficial means that the reduction in expected loss is significantly greater than the cost of implementing the security measure.

Arriving at the fourth justification requires a detailed analysis; simple rules of thumb do not apply. Consider, for example, the threat of electric power failure and the security measures that can protect against such an event. The threat parameters, rate of occurrence, and range of outage durations depend on the location of the system, the details of its connection to the local electric power utility, the details of the internal power distribution system, and the character of other activities in the building that use electric power. The system's potential losses from service interruption depend on the details of the functions it performs. Two systems that are otherwise identical can support functions that have quite different degrees of urgency. Thus, two systems may have the same electric power failure threat and vulnerability parameters, yet entirely different loss potential parameters.

Furthermore, a number of different security measures are available to address electric power failures. These measures differ in both cost and performance. For example, the cost of an uninterruptible power supply (UPS) depends on the size of the electric load it can support, the number of minutes it can support the load, and the speed with which it assumes the load when the primary power source fails. An on-site power generator could also be installed, either in place of a UPS (accepting the fact that a power failure will cause a brief service interruption) or to provide long-term backup to a UPS system. Design decisions include the magnitude of the load the generator will support, the size of the on-site fuel supply, and the details of the facilities to switch the load from the primary source or the UPS to the on-site generator.
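In practice, the cost-beneficial test reduces to comparing annualized risk reduction with annualized cost. The sketch below, added in this edition, applies it to the power-failure example using purely illustrative numbers; as the text emphasizes, the real threat, loss, and cost parameters are site-specific.

    # The cost-beneficial test: annual risk reduction versus annual cost.
    # All figures are illustrative; real parameters are site-specific.
    outages_per_year  = 4        # estimated occurrence rate
    loss_per_outage   = 25000    # expected loss per unprotected outage ($)
    residual_fraction = 0.05     # fraction of loss remaining with a UPS installed
    ups_annual_cost   = 12000    # amortized purchase plus operation ($/year)

    loss_without = outages_per_year * loss_per_outage
    loss_with    = loss_without * residual_fraction
    risk_reduction = loss_without - loss_with

    print("Annual risk reduction: $%d" % risk_reduction)
    print("Annual measure cost:   $%d" % ups_annual_cost)
    print("Cost-beneficial" if risk_reduction > ups_annual_cost
          else "Not justified on these numbers")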
This example shows systems with a wide range of risks and a wide range of available security measures (including, of course, no action), each with its own cost factors and performance parameters.

15.9 Interdependencies

Physical and environmental security measures rely on and support the proper functioning of many of the other areas discussed in this handbook. Among the most important are the following:

Logical Access Controls. Physical security controls augment technical means for controlling access to information and processing. Even if the most advanced and best-implemented logical access controls are in place, inadequate physical security can allow them to be circumvented by direct access to the hardware and storage media; for example, a computer system may be rebooted using different software.

Contingency Planning. A large portion of the contingency planning process involves the failure of physical and environmental controls. Having sound controls, therefore, can help minimize losses from such contingencies.

Identification and Authentication (I&A). Many physical access control systems require that people be identified and authenticated. Automated physical security access controls can use the same types of I&A as other computer systems. In addition, it is possible to use the same tokens (e.g., badges) as those used for other computer-based I&A.

Other. Physical and environmental controls are also closely linked to the activities of the local guard force, fire house, life safety office, and medical office. These organizations should be consulted for their expertise in planning controls for the systems environment.

15.10 Cost Considerations

Costs associated with physical security measures range greatly. Useful generalizations about costs, therefore, are difficult to make. Some measures, such as keeping a door locked, may be a trivial expense. Other features, such as fire-detection and -suppression systems, can be far more costly. Cost considerations should also include operation. For example, adding controlled-entry doors requires persons using the door to stop and unlock it, and locks require physical key management and accounting (and rekeying when keys are lost or stolen). Often these effects will be inconsequential, but they should be fully considered. As with other security measures, the objective is to select those that are cost-beneficial.

References

Alexander, M., ed. "Secure Your Computers and Lock Your Doors." Infosecurity News. 4(6), 1993. pp. 80-85.

Archer, R. "Testing: Following Strict Criteria." Security Dealer. 15(5), 1993. pp. 32-35.

Breese, H., ed. The Handbook of Property Conservation. Norwood, MA: Factory Mutual Engineering Corp.

Chanaud, R. "Keeping Conversations Confidential." Security Management. 37(3), 1993. pp. 43-48.

Miehl, F. "The Ins and Outs of Door Locks." Security Management. 37(2), 1993. pp. 48-53.

National Bureau of Standards. Guidelines for ADP Physical Security and Risk Management. Federal Information Processing Standard Publication 31. June 1974.

Peterson, P. "Infosecurity and Shrinking Media." ISSA Access. 5(2), 1992. pp. 19-22.

Roenne, G. "Devising a Strategy Keyed to Locks." Security Management. 38(4), 1994. pp. 55-56.

Zimmerman, J. "Using Smart Cards - A Smart Move." Security Management. 36(1), 1992. pp. 32-36.
IV. TECHNICAL CONTROLS

Chapter 16
IDENTIFICATION AND AUTHENTICATION

For most systems, identification and authentication (I&A) is the first line of defense. I&A is a technical measure that prevents unauthorized people (or unauthorized processes) from entering a computer system.

I&A is a critical building block of computer security since it is the basis for most types of access control and for establishing user accountability. (Not all types of access control require identification and authentication.) Access control often requires that the system be able to identify and differentiate among users. For example, access control is often based on least privilege, which refers to the granting to users of only those accesses required to perform their duties. User accountability requires the linking of activities on a computer system to specific individuals and, therefore, requires the system to identify users.

Identification is the means by which a user provides a claimed identity to the system. Authentication is the means of establishing the validity of this claim. (Computers also use authentication to verify that a message or file has not been altered and to verify that a message originated with a certain person. This chapter addresses only user authentication; the other forms are addressed in Chapter 19.)

A typical user identification could be JSMITH (for Jane Smith). This information can be known by system administrators and other system users. A typical user authentication could be Jane Smith's password, which is kept secret. This way, system administrators can set up Jane's access and see her activity on the audit trail, and system users can send her e-mail, but no one can pretend to be Jane.

This chapter discusses the basic means of identification and authentication, the current technology used to provide I&A, and some important implementation issues.

Computer systems recognize people based on the authentication data the systems receive. Authentication presents several challenges: collecting authentication data, transmitting the data securely, and knowing whether the person who was originally authenticated is still the person using the computer system. For example, a user may walk away from a terminal while still logged on, and another person may start using it.

There are three means of authenticating a user's identity, which can be used alone or in combination:

something the individual knows (a secret, e.g., a password, Personal Identification Number (PIN), or cryptographic key);

something the individual possesses (a token, e.g., an ATM card or a smart card); and

something the individual is (a biometric, e.g., such characteristics as a voice pattern, handwriting dynamics, or a fingerprint).

For most applications, trade-offs will have to be made among security, ease of use, and ease of administration, especially in modern networked environments.

While it may appear that any of these means could provide strong authentication, there are problems associated with each. If people want to pretend to be someone else on a computer system, they can guess or learn that individual's password; they can also steal or fabricate tokens.
Each method also has drawbacks for legitimate users and system administrators: users forget passwords and may lose tokens, and administrative overhead for keeping track of I&A data and tokens can be substantial. Biometric systems have significant technical, user acceptance, and cost problems as well.

This section explains current I&A technologies and their benefits and drawbacks as they relate to the three means of authentication. Although some of the technologies make use of cryptography because it can significantly strengthen authentication, the explanations of cryptography appear in Chapter 19, rather than in this chapter.

16.1 I&A Based on Something the User Knows

The most common form of I&A is a user ID coupled with a password. This technique is based solely on something the user knows. There are other techniques besides conventional passwords that are based on knowledge, such as knowledge of a cryptographic key.

16.1.1 Passwords

In general, password systems work by requiring the user to enter a user ID and password (or passphrase or personal identification number). The system compares the password to a previously stored password for that user ID. If there is a match, the user is authenticated and granted access.

Benefits of Passwords. Passwords have been successfully providing security for computer systems for a long time. They are integrated into many operating systems, and users and system administrators are familiar with them. When properly managed in a controlled environment, they can provide effective security.

Problems With Passwords. The security of a password system is dependent upon keeping passwords secret. Unfortunately, there are many ways that the secret may be divulged. All of the problems discussed below can be significantly mitigated by improving password security, as discussed in the accompanying box. However, there is no fix for the problem of electronic monitoring, except to use more advanced authentication (e.g., based on cryptographic techniques or tokens).

Improving Password Security

Password generators. If users are not allowed to generate their own passwords, they cannot pick easy-to-guess passwords. Some generators create only pronounceable nonwords to help users remember them. However, users tend to write down hard-to-remember passwords.

Limits on log-in attempts. Many operating systems can be configured to lock a user ID after a set number of failed log-in attempts. This helps to prevent guessing of passwords.

Password attributes. Users can be instructed, or the system can force them, to select passwords (1) with a certain minimum length, (2) with special characters, (3) that are unrelated to their user ID, or (4) that are not found in an on-line dictionary. This makes passwords more difficult to guess (but more likely to be written down).

Changing passwords. Periodic changing of passwords can reduce the damage done by stolen passwords and can make brute-force attempts to break into systems more difficult. Too frequent changes, however, can be irritating to users.

Technical protection of the password file. Access control and one-way encryption can be used to protect the password file itself.

Note: Many of these techniques are discussed in FIPS 112, Password Usage, and FIPS 181, Automated Password Generator.

1. Guessing or finding passwords.
If users select their own passwords, they tend to make them easy to remember. That often makes them easy to guess. The names of people's children, pets, or favorite sports teams are common examples. On the other hand, assigned passwords may be difficult to remember, so users are more likely to write them down. Many computer systems are shipped with administrative accounts that have preset passwords. Because these passwords are standard, they are easily "guessed." Although security practitioners have been warning about this problem for years, many system administrators still do not change default passwords. Another method of learning passwords is to observe someone entering a password or PIN. The observation can be done by someone in the same room or by someone some distance away using binoculars. This is often referred to as shoulder surfing.

2. Giving passwords away. Users may share their passwords. They may give their password to a co-worker in order to share files. In addition, people can be tricked into divulging their passwords. This process is referred to as social engineering.

3. Electronic monitoring. When passwords are transmitted to a computer system, they can be electronically monitored. This can happen on the network used to transmit the password or on the computer system itself. Simple encryption of a password that will be used again does not solve this problem, because encrypting the same password will create the same ciphertext; the ciphertext itself becomes the password.

4. Accessing the password file. If the password file is not protected by strong access controls, the file can be downloaded. Password files are often protected with one-way encryption so that plain-text passwords are not available to system administrators or hackers (if they successfully bypass access controls). (One-way encryption algorithms provide only for the encryption of data; the resulting ciphertext cannot be decrypted. When passwords are entered into the system, they are one-way encrypted, and the result is compared with the stored ciphertext. See Chapter 19.) Even if the file is encrypted, brute force can be used to learn passwords if the file is downloaded (e.g., by encrypting English words and comparing them to the file).

Passwords Used as Access Control. Some mainframe operating systems and many PC applications use passwords as a means of restricting access to specific resources within a system. Instead of using mechanisms such as access control lists (see Chapter 17), access is granted by entering a password. The result is a proliferation of passwords that can reduce the overall security of a system. While the use of passwords as a means of access control is common, it is an approach that is often less than optimal and not cost-effective.

16.1.2 Cryptographic Keys

Although the authentication derived from the knowledge of a cryptographic key may be based entirely on something the user knows, the user must also possess (or have access to) something that can perform the cryptographic computations, such as a PC or a smart card. For this reason, the protocols used are discussed in the Smart Tokens section of this chapter. However, it is possible to implement these types of protocols without using a smart token. Additional discussion is also provided under the Single Log-in section.
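Before turning to possession-based techniques, the sketch below illustrates the one-way protection of stored passwords described above: the stored value cannot be decrypted, and log-in simply repeats the one-way transformation and compares results. The particular algorithm (PBKDF2 from Python's standard library) and the iteration count are this edition's illustrative choices, not the handbook's.

    # One-way protection of stored passwords: enrollment stores a salt and
    # a one-way digest; log-in repeats the transformation and compares.
    # The algorithm and iteration count are illustrative choices.
    import hashlib
    import hmac
    import os

    def enroll(password):
        salt = os.urandom(16)    # per-user salt deters precomputed guessing
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
        return salt, digest      # neither stored value reveals the password

    def verify(password, salt, stored):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
        return hmac.compare_digest(candidate, stored)

    salt, stored = enroll("correct horse")
    assert verify("correct horse", salt, stored)
    assert not verify("guessed word", salt, stored)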
16.2 I&A Based on Something the User Possesses

Although some techniques are based solely on something the user possesses, most of the techniques described in this section are combined with something the user knows. This combination can provide significantly stronger security than either something the user knows or possesses alone. (For the purpose of understanding how possession-based I&A works, it is not necessary to distinguish whether possession of a token in various systems is identification or authentication.)

Objects that a user possesses for the purpose of I&A are called tokens. This section divides tokens into two categories: memory tokens and smart tokens.

16.2.1 Memory Tokens

Memory tokens store, but do not process, information. Special reader/writer devices control the writing and reading of data to and from the tokens. The most common type of memory token is a magnetic striped card, in which a thin stripe of magnetic material is affixed to the surface of a card (e.g., as on the back of credit cards). A common application of memory tokens for authentication to computer systems is the automatic teller machine (ATM) card, which uses a combination of something the user possesses (the card) and something the user knows (the PIN).

Some computer system authentication technologies are based solely on possession of a token, but they are less common. Token-only systems are more likely to be used in other applications, such as physical access. (See Chapter 15.)

Benefits of Memory Token Systems. Memory tokens, when used with PINs, provide significantly more security than passwords. In addition, memory cards are inexpensive to produce. For a hacker or other would-be masquerader to pretend to be someone else, the hacker must have both a valid token and the corresponding PIN. This is much more difficult than obtaining a valid password and user ID combination (especially since most user IDs are common knowledge).

Another benefit of tokens is that they can be used in support of log generation without the need for the employee to key in a user ID for each transaction or other logged event, since the token can be scanned repeatedly. If the token is required for physical entry and exit, then people will be forced to remove the token when they leave the computer. This can help maintain authentication.

Problems With Memory Token Systems. Although sophisticated technical attacks are possible against memory token systems, most of the problems associated with them relate to their cost, administration, token loss, user dissatisfaction, and the compromise of PINs. Most of the techniques for increasing the security of memory token systems relate to the protection of PINs, and many of the techniques discussed in the box on Improving Password Security apply to PINs.

1. Requires special reader. The need for a special reader increases the cost of using memory tokens. The readers used for memory tokens must include both the physical unit that reads the card and a processor that determines whether the card and/or the PIN entered with the card is valid. If the PIN or token is validated by a processor that is not physically located with the reader, then the authentication data is vulnerable to electronic monitoring (although cryptography can be used to solve this problem).

Attacks on memory-card systems have sometimes been quite creative.
One group stole an ATM machine that they installed at a local shopping mall. The machine collected valid account numbers and corresponding PINs, which the thieves used to forge cards. The forged cards were then used to withdraw money from legitimate ATMs.

2. Token loss. A lost token may prevent the user from being able to log in until a replacement is provided. This can increase administrative overhead costs. The lost token could be found by someone who wants to break into the system, or could be stolen or forged. If the token is also used with a PIN, any of the methods described above under password problems can be used to obtain the PIN. Common methods are finding the PIN taped to the card or observing the PIN being entered by the legitimate user. In addition, any information stored on the magnetic stripe that has not been encrypted can be read.

3. User dissatisfaction. In general, users want computers to be easy to use. Many users find it inconvenient to carry and present a token. However, their dissatisfaction may be reduced if they see the need for increased security.

16.2.2 Smart Tokens

A smart token expands the functionality of a memory token by incorporating one or more integrated circuits into the token itself. When used for authentication, a smart token is another example of authentication based on something a user possesses (i.e., the token itself). A smart token typically requires a user also to provide something the user knows (i.e., a PIN or password) in order to "unlock" the smart token for use.

There are many different types of smart tokens. In general, smart tokens can be divided in three different ways, based on physical characteristics, interface, and the protocols used. These three divisions are not mutually exclusive.

Physical Characteristics. Smart tokens can be divided into two groups: smart cards and other types of tokens. A smart card looks like a credit card but incorporates an embedded microprocessor. Smart cards are defined by an International Standards Organization (ISO) standard. Smart tokens that are not smart cards can look like calculators, keys, or other small portable objects.

Interface. Smart tokens have either a manual or an electronic interface. Manual or human interface tokens have displays and/or keypads to allow humans to communicate with the card. Smart tokens with electronic interfaces must be read by special reader/writers. Smart cards, described above, have an electronic interface. Smart tokens that look like calculators usually have a manual interface.

Protocol. There are many possible protocols a smart token can use for authentication. In general, they can be divided into three categories: static password exchange, dynamic password generators, and challenge-response.

Static tokens work similarly to memory tokens, except that the users authenticate themselves to the token and then the token authenticates the user to the computer.

A token that uses a dynamic password generator protocol creates a unique value, for example, an eight-digit number, that changes periodically (e.g., every minute). If the token has a manual interface, the user simply reads the current value and then types it into the computer system for authentication. If the token has an electronic interface, the transfer is done automatically.
If the correct value is provided, the log-in is permitted, and the user is granted access to the system.

Tokens that use a challenge-response protocol work by having the computer generate a challenge, such as a random string of numbers. The smart token then generates a response based on the challenge. This is sent back to the computer, which authenticates the user based on the response. The challenge-response protocol is based on cryptography. Challenge-response tokens can use either electronic or manual interfaces.

There are other types of protocols, some more sophisticated and some less so. The three types described above are the most common.

Benefits of Smart Tokens

Smart tokens offer great flexibility and can be used to solve many authentication problems. The benefits of smart tokens vary, depending on the type used. In general, they provide greater security than memory cards. Smart tokens can solve the problem of electronic monitoring, even if the authentication is done across an open network, by using one-time passwords.

1. One-time passwords. Smart tokens that use either dynamic password generation or challenge-response protocols can create one-time passwords. Electronic monitoring is not a problem with one-time passwords because each time the user is authenticated to the computer, a different "password" is used. (A hacker could learn the one-time password through electronic monitoring, but it would be of no value.)

2. Reduced risk of forgery. Generally, the memory on a smart token is not readable unless the PIN is entered. In addition, the tokens are more complex and, therefore, more difficult to forge.

3. Multi-application. Smart tokens with electronic interfaces, such as smart cards, provide a way for users to access many computers using many networks with only one log-in. This is further discussed in the Single Log-in section of this chapter. In addition, a single smart card can be used for multiple functions, such as physical access or as a debit card.

Problems with Smart Tokens

Like memory tokens, most of the problems associated with smart tokens relate to their cost, the administration of the system, and user dissatisfaction. Smart tokens are generally less vulnerable to the compromise of PINs because authentication usually takes place on the card. (It is possible, of course, for someone to watch a PIN being entered and then steal the card.) Smart tokens cost more than memory cards because they are more complex, particularly challenge-response calculators.

1. Need for reader/writers or human intervention. Smart tokens can use either an electronic or a human interface. An electronic interface requires a reader, which creates additional expense. (Electronic reader/writers can take many forms, such as a slot in a PC or a separate external device; most human interfaces consist of a keypad and display.) Human interfaces require more actions from the user. This is especially true for challenge-response tokens with a manual interface, which require the user to type the challenge into the smart token and the response into the computer. This can increase user dissatisfaction.

2. Substantial administration. Smart tokens, like passwords and memory tokens, require strong administration. For tokens that use cryptography, this includes key management. (See Chapter 19.)
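The dynamic password and challenge-response protocols described above both rest on a secret shared between the token and the host, plus a one-way computation. The following is a minimal sketch of a challenge-response exchange added in this edition; the protocol and key handling are hypothetical and do not describe any particular commercial token.

    # A challenge-response exchange built on a keyed one-way function.
    # The protocol and key handling are hypothetical.
    import hashlib
    import hmac
    import os

    SHARED_KEY = os.urandom(32)   # provisioned into the token at issuance

    def token_response(key, challenge):
        # Computed inside the token, after the user unlocks it with a PIN.
        return hmac.new(key, challenge, hashlib.sha256).hexdigest()[:8]

    challenge = os.urandom(8)     # host issues a fresh challenge per log-in
    response = token_response(SHARED_KEY, challenge)

    # Host recomputes and compares; a recorded response is worthless later
    # because the challenge never repeats.
    assert hmac.compare_digest(response, token_response(SHARED_KEY, challenge))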
16.3 I&A Based on Something the User Is

Biometric authentication technologies use the unique characteristics (or attributes) of an individual to authenticate that person's identity. These include physiological attributes (such as fingerprints, hand geometry, or retina patterns) and behavioral attributes (such as voice patterns and hand-written signatures). Biometric authentication technologies based upon these attributes have been developed for computer log-in applications.

Biometric authentication generally operates in the following manner. Before any authentication attempts, a user is "enrolled" by creating a reference profile (or template) based on the desired physical attribute. The resulting template is associated with the identity of the user and stored for later use. When attempting authentication, the user's biometric attribute is measured, and the previously stored reference profile is compared with the measured profile of the attribute taken from the user. The result of the comparison is then used to either accept or reject the user.

Biometric authentication is technically complex and expensive, and user acceptance can be difficult. However, advances continue to be made to make the technology more reliable, less costly, and more user-friendly. Biometric systems can provide an increased level of security for computer systems, but the technology is still less mature than that of memory tokens or smart tokens. Imperfections in biometric authentication devices arise from technical difficulties in measuring and profiling physical attributes as well as from the somewhat variable nature of physical attributes, which may change depending on various conditions. For example, a person's speech pattern may change under stressful conditions or when suffering from a sore throat or cold.

Due to their relatively high cost, biometric systems are typically used with other authentication means in environments requiring high security.

16.4 Implementing I&A Systems

Some of the important implementation issues for I&A systems include administration, maintaining authentication, and single log-in.

16.4.1 Administration

Administration of authentication data is a critical element for all types of authentication systems, and the administrative overhead associated with I&A can be significant. I&A systems need to create, distribute, and store authentication data. For passwords, this includes creating passwords, issuing them to users, and maintaining a password file. Token systems involve the creation and distribution of tokens/PINs and data that tell the computer how to recognize valid tokens/PINs. For biometric systems, this includes creating and storing profiles.

The administrative tasks of creating and distributing authentication data and tokens can be substantial. Identification data has to be kept current by adding new users and deleting former users. If the distribution of passwords or tokens is not controlled, system administrators will not know if they have been given to someone other than the legitimate user. It is critical that the distribution system ensure that authentication data is firmly linked with a given individual. Some of these issues are discussed in Chapter 10 under User Administration.
In addition, I&A administrative tasks should address lost or stolen passwords or tokens. It is often necessary to monitor systems to look for stolen or shared accounts. One method of looking for improperly used accounts is for the computer to inform users when they last logged on; this allows users to check whether someone else has used their account.

Authentication data needs to be stored securely, as discussed with regard to accessing password files. The value of authentication data lies in the data's confidentiality, integrity, and availability. If confidentiality is compromised, someone may be able to use the information to masquerade as a legitimate user. If system administrators can read the authentication file, they can masquerade as another user, so many systems use encryption to hide the authentication data from the system administrators. (Masquerading by system administrators cannot be prevented entirely; however, controls can be set up so that improper actions by the system administrator can be detected in audit records.) If integrity is compromised, authentication data can be added or the system can be disrupted. If availability is compromised, the system cannot authenticate users, and the users may not be able to work.

16.4.2 Maintaining Authentication

So far, this chapter has discussed initial authentication only. It is also possible for someone to use a legitimate user's account after log-in, since after a user signs on, the computer treats all commands originating from the user's physical device (such as a PC or terminal) as being from that user. Many computer systems handle this problem by logging a user out or locking their display or session after a certain period of inactivity. However, these methods can affect productivity and can make the computer less user-friendly.

16.4.3 Single Log-in

From an efficiency viewpoint, it is desirable for users to authenticate themselves only once and then to be able to access a wide variety of applications and data available on local and remote systems, even if those systems require users to authenticate themselves. This is known as single log-in. (The term is somewhat of a misnomer: it is currently not feasible to have one sign-on for every computer system a user might wish to access. The types of single log-in described here apply mainly to groups of systems, e.g., within an organization or a consortium.) If the access is within the same host computer, then the use of a modern access control system (such as an access control list) should allow for a single log-in. If the access is across multiple platforms, then the issue is more complicated, as discussed below. There are three main techniques that can provide single log-in across multiple computers: host-to-host authentication, authentication servers, and user-to-host authentication.

Host-to-Host Authentication. Under a host-to-host authentication approach, users authenticate themselves once to a host computer. That computer then authenticates itself to other computers and vouches for the specific user. Host-to-host authentication can be done by passing an identification and password, or by a challenge-response mechanism or other one-time password scheme.
Under this approach, it is necessary for the computers to recognize and trust each other.

Authentication Servers. When using an authentication server, users authenticate themselves to a special host computer (the authentication server). This computer then authenticates the user to other host computers the user wants to access. Under this approach, it is necessary for the computers to trust the authentication server. (The authentication server need not be a separate computer, although in some environments this may be a cost-effective way to increase the security of the server.) Authentication servers can be distributed geographically or logically, as needed, to reduce workload. Kerberos and SPX are examples of network authentication server protocols; both use cryptography to authenticate users to computers on networks.

User-to-Host. A user-to-host authentication approach requires the user to log in to each host computer. However, a smart token (such as a smart card) can contain all authentication data and perform that service for the user. To users, it looks as though they were only authenticated once.

16.5 Interdependencies

There are many interdependencies among I&A and other controls. Several of them have been discussed in this chapter.

Logical Access Controls. Access controls are needed to protect the authentication database. I&A is often the basis for access controls. Dial-back modems and firewalls, discussed in Chapter 17, can help prevent hackers from trying to log in.

Audit. I&A is necessary if an audit log is going to be used for individual accountability.

Cryptography. Cryptography provides two basic services to I&A: it protects the confidentiality of authentication data, and it provides protocols for proving knowledge and/or possession of a token without having to transmit data that could be replayed to gain access to a computer system.

16.6 Cost Considerations

In general, passwords are the least expensive authentication technique and generally the least secure. They are already embedded in many systems. Memory tokens are less expensive than smart tokens but have less functionality. Smart tokens with a human interface do not require readers but are more inconvenient to use. Biometrics tend to be the most expensive.

For I&A systems, the cost of administration is often underestimated. Just because a system comes with a password system does not mean that using it is free; for example, there is significant overhead to administering the I&A system.

References

Alexander, M., ed. "Keeping the Bad Guys Off-Line." Infosecurity News. 4(6), 1993. pp. 54-65.

American Bankers Association. American National Standard for Financial Institution Sign-On Authentication for Wholesale Financial Transactions. ANSI X9.26-1990. Washington, DC, February 28, 1990.

CCITT Recommendation X.509. The Directory - Authentication Framework. November 1988 (developed in collaboration, and technically aligned, with ISO 9594-8).

Department of Defense. Password Management Guideline. CSC-STD-002-85. April 12, 1985.

Feldmeier, David C., and Philip R. Karn. "UNIX Password Security - Ten Years Later." Crypto '89 Abstracts. Santa Barbara, CA: Crypto '89 Conference, August 20-24, 1989.

Haykin, Martha E., and Robert B. J. Warnar. Smart Card Technology: New Methods for Computer Access Control. Special Publication 500-157. Gaithersburg, MD: National Institute of Standards and Technology, September 1988.

Kay, R. "Whatever Happened to Biometrics?" Infosecurity News. 4(5), 1993. pp. 60-62.

National Bureau of Standards. Password Usage. Federal Information Processing Standard Publication 112. May 30, 1985.

National Institute of Standards and Technology. Automated Password Generator. Federal Information Processing Standard Publication 181. October 1993.

Salamone, S. "Internetwork Security: Unsafe at Any Node?" Data Communications. 22(12), 1993. pp. 61-68.

Sherman, R. "Biometric Futures." Computers and Security. 11(2), 1992. pp. 128-133.

Smid, Miles, James Dray, and Robert B. J. Warnar. "A Token-Based Access Control System for Computer Networks." Proceedings of the 12th National Computer Security Conference. National Institute of Standards and Technology, October 1989.

Steiner, J.O., C. Neuman, and J. Schiller. "Kerberos: An Authentication Service for Open Network Systems." Proceedings Winter USENIX. Dallas, Texas, February 1988. pp. 191-202.

Troy, Eugene F. Security for Dial-Up Lines. Special Publication 500-137. Gaithersburg, MD: National Bureau of Standards, May 1986.

Chapter 17
LOGICAL ACCESS CONTROL

On many multiuser systems, requirements for using (and prohibitions against the use of) various computer resources vary considerably. (The term computer resources includes information as well as system resources, such as programs, subroutines, and hardware, e.g., modems and communications lines.) Typically, for example, some information must be accessible to all users, some may be needed by several groups or departments, and some should be accessed by only a few individuals. (Users need not be actual human users; they could include, for example, a program or another computer requesting use of a system resource.) While it is obvious that users must have access to the information they need to do their jobs, it may also be required to deny access to non-job-related information. It may also be important to control the kind of access that is afforded (e.g., the ability for the average user to execute, but not change, system programs). These types of access restrictions enforce policy and help ensure that unauthorized actions are not taken.

Logical access controls provide a technical means of controlling what information users can utilize, the programs they can run, and the modifications they can make.

The term access is often confused with authorization and authentication. Access is the ability to do something with a computer resource; this usually refers to a technical ability (e.g., read, create, modify, or delete a file, execute a program, or use an external connection). Authorization is the permission to use a computer resource; permission is granted, directly or indirectly, by the application or system owner. Authentication is proving (to some reasonable degree) that users are who they claim to be.

Access is the ability to do something with a computer resource (e.g., use, change, or view). Access control is the means by which the ability is explicitly enabled or restricted in some way (usually through physical and system-based controls). Computer-based access controls are called logical access controls.
Logical access controls can prescribe not only who or what (e.g., in the case of a process) is to have access to a specific system resource but also the type of access that is permitted. These controls may be built into the operating system, may be incorporated into application programs or major utilities (e.g., database management systems or communications systems), or may be implemented through add-on security packages. Logical access controls may be implemented internally to the computer system being protected or may be implemented in external devices.

Controlling access is normally thought of as applying to human users (e.g., will technical access be provided for user JSMITH to the file "payroll.dat"), but access can be provided to other computer systems. Also, access controls are often incorrectly thought of as applying only to files. However, they also protect other system resources, such as the ability to place an outgoing long-distance phone call through a system modem (as well as, perhaps, the information that can be sent over such a call). Access controls can also apply to specific functions within an application and to specific fields of a file.

When determining what kind of technical access to allow to specific data, programs, devices, and resources, it is important to consider who will have access and what kind of access they will be allowed. It may be desirable for everyone in the organization to have access to some information on the system, such as the data displayed on an organization's daily calendar of nonconfidential meetings. The program that formats and displays the calendar, however, might be modifiable by only a very few system administrators, while the operating system controlling that program might be directly accessible by still fewer.

Logical access controls can help protect: operating systems and other system software from unauthorized modification or manipulation (and thereby help ensure the system's integrity and availability); the integrity and availability of information, by restricting the number of users and processes with access; and confidential information from being disclosed to unauthorized individuals.

This chapter first discusses basic criteria that can be used to decide whether a particular user should be granted access to a particular system resource. It then reviews the use of these criteria by those who set policy (usually system-specific policy), commonly used technical mechanisms for implementing logical access control, and issues related to administration of access controls.

17.1 Access Criteria

In deciding whether to permit someone to use a system resource, logical access controls examine whether the user is authorized for the type of access requested. (Note that this inquiry is usually distinct from the question of whether the user is authorized to use the system at all, which is usually addressed in an identification and authentication process.) The system uses various criteria to determine if a request for access will be granted; they are typically used in some combination. Many of the advantages and complexities involved in implementing and managing access control are related to the different kinds of user accesses supported.
17.1.1 Identity

It is probably fair to say that the majority of access controls are based upon the identity of the user (either human or process), which is usually obtained through identification and authentication (I&A). (See Chapter 16.) The identity is usually unique, to support individual accountability, but can be a group identification or can even be anonymous. For example, public information dissemination systems may serve a large group called "researchers" in which the individual researchers are not known.

17.1.2 Roles

Access to information may also be controlled by the job assignment or function (i.e., the role) of the user who is seeking access. Examples of roles include data entry clerk, purchase officer, project leader, programmer, and technical editor. Access rights are grouped by role name, and the use of resources is restricted to individuals authorized to assume the associated role. An individual may be authorized for more than one role, but may be required to act in only a single role at a time. Changing roles may require logging out and then in again, or entering a role-changing command. Note that use of roles is not the same as shared-use accounts. An individual may be assigned a standard set of rights of a shipping department data entry clerk, for example, but the account would still be tied to that individual's identity to allow for auditing. (See Chapter 18.)

Many systems already support a small number of special-purpose roles, such as System Administrator or Operator. For example, an individual who is logged on in the role of a System Administrator can perform operations that would be denied to the same individual acting in the role of an ordinary user. Recently, the use of roles has been expanded beyond system tasks to application-oriented activities. For example, a user in a company could have an Order Taking role and would be able to collect and enter customer billing information, check on availability of particular items, request shipment of items, and issue invoices. In addition, there could be an Accounts Receivable role, which would receive payments and credit them to particular invoices. A Shipping role could then be responsible for shipping products and updating the inventory. To provide additional security, constraints could be imposed so that a single user would never be simultaneously authorized to assume all three roles. Constraints of this kind are sometimes referred to as separation of duty constraints.

The use of roles can be a very effective way of providing access control. The process of defining roles should be based on a thorough analysis of how an organization operates and should include input from a wide spectrum of users in an organization.

17.1.3 Location

Access to particular system resources may also be based upon physical or logical location. For example, in a prison, all users in areas to which prisoners are physically permitted may be limited to read-only access. Changing or deleting is limited to areas to which prisoners are denied physical access. The same authorized users (e.g., prison guards) would operate under significantly different logical access controls, depending upon their physical location.
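A location criterion of this kind is straightforward to express in software. The sketch below was added in this edition and is illustrative only; the location codes, role names, and rules are hypothetical, not drawn from the handbook.

    # A location criterion: the same user gets different access depending
    # on where the request originates. All names and rules are hypothetical.
    READ_ONLY_LOCATIONS = {"cell-block-terminal", "visiting-room-kiosk"}

    def access_permitted(role, location, mode):
        # In areas to which prisoners are physically permitted, even
        # authorized staff are limited to read access.
        if location in READ_ONLY_LOCATIONS:
            return mode == "read"
        return role == "guard" and mode in ("read", "write", "delete")

    assert access_permitted("guard", "cell-block-terminal", "read")
    assert not access_permitted("guard", "cell-block-terminal", "write")
    assert access_permitted("guard", "records-office", "write")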
Similarly, users can be restricted based upon network addresses (e.g., users from sites within a given organization may be permitted greater access than those from outside).

17.1.4 Time

Time-of-day or day-of-week restrictions are common limitations on access. For example, use of confidential personnel files may be allowed only during normal working hours, and may be denied before 8:00 a.m. and after 6:00 p.m. and all day during weekends and holidays.

17.1.5 Transaction

Another approach to access control can be used by organizations handling transactions (e.g., account inquiries). Phone calls may first be answered by a computer that requests that callers key in their account number and perhaps a PIN. Some routine transactions can then be made directly, but more complex ones may require human intervention. In such cases, the computer, which already knows the account number, can grant a clerk, for example, access to a particular account for the duration of the transaction. When the transaction is completed, the access authorization is terminated. This means that users have no choice in which accounts they have access to, and it can reduce the potential for mischief. It also eliminates employee browsing of accounts (e.g., those of celebrities or their neighbors) and can thereby heighten privacy.

17.1.6 Service Constraints

Service constraints refer to those restrictions that depend upon the parameters that may arise during use of the application or that are preestablished by the resource owner/manager. For example, a particular software package may only be licensed by the organization for five users at a time. Access would be denied to a sixth user, even if that user were otherwise authorized to use the application. Another type of service constraint is based upon application content or numerical thresholds. For example, an ATM may restrict transfers of money between accounts to certain dollar limits or may limit maximum withdrawals to $500 per day. Access may also be selectively permitted based on the type of service requested. For example, users of computers on a network may be permitted to exchange electronic mail but may not be allowed to log in to each others' computers.

17.1.7 Common Access Modes

In addition to considering criteria for when access should occur, it is also necessary to consider the types of access, or access modes. The concept of access modes is fundamental to access control. Common access modes, which can be used in both operating and application systems, include the following. (These access modes are described generically; exact definitions and capabilities will vary from implementation to implementation. Readers are advised to consult their system and application documentation.)

Read access provides users with the capability to view information in a system resource (such as a file, certain records, certain fields, or some combination thereof), but not to alter it, such as delete from, add to, or modify in any way.
One must assume that information can be copied and printed if it can be read (although perhaps only manually, such as by using a print screen function and retyping the information into another file).

Write access allows users to add to, modify, or delete information in system resources (e.g., files, records, programs). Normally, users have read access to anything they have write access to.

Execute privilege allows users to run programs.

Delete access allows users to erase system resources (e.g., files, records, fields, programs). ("Deleting" information does not necessarily physically remove the data from the storage media, which can have serious implications for information that must be kept confidential. See "Disposition of Sensitive Automated Information," CSL Bulletin, NIST, October 1992.) Note that if users have write access but not delete access, they could overwrite the field or file with gibberish or otherwise inaccurate information and, in effect, delete the information.

Other specialized access modes (more often found in applications) include:

Create access allows users to create new files, records, or fields.

Search access allows users to list the files in a directory.

Of course, these criteria can be used in conjunction with one another. For example, an organization may give authorized individuals write access to an application at any time from within the office but only read access during normal working hours if they dial in.

Depending upon the technical mechanisms available to implement logical access control, a wide variety of access permissions and restrictions are possible. No discussion can present all possibilities.

17.2 Policy: The Impetus for Access Controls

Logical access controls are a technical means of implementing policy decisions. Policy is made by a management official responsible for a particular system, application, subsystem, or group of systems. The development of an access control policy may not be an easy endeavor. It requires balancing the often-competing interests of security, operational requirements, and user-friendliness. In addition, technical constraints have to be considered; some policies may not be technically implementable, because appropriate technical controls may simply not exist.

A few simple examples of specific policy issues are provided below; it is important to recognize, however, that comprehensive system-specific policy is significantly more complex.

1. The director of an organization's personnel office could decide that all clerks can update all files, to increase the efficiency of the office. Or the director could decide that clerks can only view and update specific files, to help prevent information browsing.

2. In a disbursing office, a single individual is usually prohibited from both requesting and authorizing that a particular payment be made. This is a policy decision taken to reduce the likelihood of embezzlement and fraud.

3. Decisions may also be made regarding access to the system itself. In the government, for example, the senior information resources management official may decide that agency systems that process information protected by the Privacy Act may not be used to process public-access database applications.

This chapter discusses issues relating to the technical implementation of logical access controls, not the actual policy decisions as to who should have what type of access. These decisions are typically included in system-specific policy, as discussed in Chapters 5 and 10.

Once these policy decisions have been made, they will be implemented (or enforced) through logical access controls.
In doing so, it is important to realize that the capabilities of the various types of technical mechanisms for logical access control vary greatly.[118]

[118] Some policies may not be technically implementable; appropriate technical controls may simply not exist.

17.3 Technical Implementation Mechanisms

Many mechanisms have been developed to provide internal and external access controls, and they vary significantly in terms of precision, sophistication, and cost. These methods are not mutually exclusive and are often employed in combination. Managers need to analyze their organization's protection requirements to select the most appropriate, cost-effective logical access controls.

17.3.1 Internal Access Controls

Internal access controls are a logical means of separating what defined users (or user groups) can or cannot do with system resources. Five methods of internal access control are discussed in this section: passwords, encryption, access control lists, constrained user interfaces, and labels.

17.3.1.1 Passwords

Passwords are most often associated with user authentication. (See Chapter 16.) However, they are also used to protect data and applications on many systems, including PCs. For instance, an accounting application may require a password to access certain financial data or to invoke a restricted application (or function of an application).[119]

[119] Note that this password is normally in addition to the one supplied initially to log onto the system.

Password-based access control is often inexpensive because it is already included in a large variety of applications. However, users may find it difficult to remember additional application passwords, which, if written down or poorly chosen, can lead to their compromise. Note that using passwords as a means of access control can result in a proliferation of passwords that reduces overall security. Password-based access controls for PC applications are often easy to circumvent if the user has access to the operating system (and knowledge of what to do). As discussed in Chapter 16, there are other disadvantages to using passwords.

17.3.1.2 Encryption

Another mechanism that can be used for logical access control is encryption. Encrypted information can only be decrypted by those possessing the appropriate cryptographic key. This is especially useful if strong physical access controls cannot be provided, such as for laptops or floppy diskettes. Thus, for example, if information is encrypted on a laptop computer and the laptop is stolen, the information cannot be accessed. While encryption can provide strong access control, it is accompanied by the need for strong key management. Use of encryption may also affect availability. For example, lost or stolen keys or read/write errors may prevent the decryption of the information. (See the cryptography chapter.)
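As a concrete illustration of encryption used for access control, the minimal sketch below uses the third-party Python cryptography package as a modern stand-in (the package and its Fernet interface are assumptions of the example; this handbook does not prescribe any particular product). Note that the sketch sidesteps the hard problem identified above: safe management of the key itself.

    # Requires the third-party "cryptography" package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # the cryptographic key -- protecting it is the hard part
    f = Fernet(key)

    token = f.encrypt(b"salary data for the payroll file")

    # Only a holder of the key can recover the plaintext; without it,
    # possession of the laptop or diskette yields only ciphertext.
    print(f.decrypt(token))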
17.3.1.3 Access Control Lists

Access Control Lists (ACLs) refer to a register of (1) users (including groups, machines, and processes) who have been given permission to use a particular system resource and (2) the types of access they have been permitted.

ACLs vary considerably in their capability and flexibility. Some only allow specifications for certain preset groups (e.g., owner, group, and world), while more advanced ACLs allow much more flexibility, such as user-defined groups. Also, more advanced ACLs can be used to explicitly deny access to a particular individual or group. With more advanced ACLs, access can be at the discretion of the policymaker (and implemented by the security administrator) or the individual user, depending upon how the controls are technically implemented.

Since one would presume that no one has access without being granted it, why would it be desirable to explicitly deny access? Consider a situation in which a group name has already been established for 50 employees. If it were desired to exclude five of the individuals from that group, it would be easier for the access control administrator to grant access to the group and explicitly deny it to the five, rather than grant access to 45 people individually. Or consider the case of a complex application in which many groups of users are defined. It may be desired, for some reason, to prohibit Ms. X from generating a particular report (perhaps she is under investigation). In a situation in which group names are used (and perhaps modified by others), this explicit denial serves as a safety check: even if someone were to redefine a group (with access to the report generation function) to include Ms. X, she would still be denied access.

Elementary ACLs. Elementary ACLs (e.g., "permission bits") are a widely available means of providing access control on multiuser systems. In this scheme, a short, predefined list of the access rights to files or other system resources is maintained.

Elementary ACLs are typically based on the concepts of owner, group, and world. For each of these, a set of access modes (typically chosen from read, write, execute, and delete) is specified by the owner (or custodian) of the resource. The owner is usually its creator, though in some cases ownership of resources may be automatically assigned to project administrators, regardless of the identity of the creator. File owners often have all privileges for their resources.

In addition to the privileges assigned to the owner, each resource is associated with a named group of users. Users who are members of the group can be granted modes of access distinct from nonmembers, who belong to the rest of the "world" that includes all of the system's users. User groups may be arranged according to departments, projects, or other ways appropriate for the particular organization. For example, groups may be established for members of the Personnel and Accounting departments. The system administrator is normally responsible for technically maintaining and changing the membership of a group, based upon input from the owners/custodians of the particular resources to which the groups may be granted access.

Example of an elementary ACL for the file "payroll":

    Owner: PAYMANAGER
        Access: Read, Write, Execute, Delete
    Group: COMPENSATION-OFFICE
        Access: Read, Write, Execute, Delete
    World:
        Access: None

As the name implies, however, the technology is not particularly flexible. It may not be possible to explicitly deny access to an individual who is a member of the file's group. Also, it may not be possible for two groups to easily share information (without exposing it to the "world"), since the list is predefined to include only one group. If two groups wish to share information, an owner may make the file available to be read by "world." This may disclose information that should be restricted. Unfortunately, elementary ACLs have no mechanism to easily permit such sharing.
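The owner/group/world scheme is straightforward to model. Below is a minimal sketch (Python; the group membership and user names are invented for illustration) of how an elementary ACL like the "payroll" example resolves a user's access modes:

    # Elementary ACL for the file "payroll", mirroring the example above.
    OWNER, OWNER_MODES = "PAYMANAGER", {"read", "write", "execute", "delete"}
    GROUP_MODES = {"read", "write", "execute", "delete"}
    WORLD_MODES = set()   # "world" gets no access

    # Membership of COMPENSATION-OFFICE, maintained by the system administrator.
    group_members = {"PAYMANAGER", "JONES", "SMITH"}

    def allowed_modes(user: str) -> set:
        """Resolve a user's access modes under owner/group/world rules."""
        if user == OWNER:
            return OWNER_MODES
        if user in group_members:
            return GROUP_MODES
        return WORLD_MODES

    # A clerk outside the compensation office gets no access at all;
    # a group member may read and write.
    assert "read" not in allowed_modes("VISITOR")
    assert "write" in allowed_modes("JONES")

The sketch also makes the inflexibility visible: there is no place to record "everyone in the group except Ms. X," which is exactly what advanced ACLs add.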
Advanced ACLs. Like elementary ACLs, advanced ACLs provide a form of access control based upon a logical registry. They do, however, provide finer precision in control.

Example of an advanced ACL for the file "payroll" (R = read, W = write, E = execute, D = delete):

    PAYMGR:       R, W, E, D
    J. Anderson:  R, W, E, -
    L. Carnahan:  -, -, -, -
    B. Guttman:   R, W, E, -
    E. Roback:    R, W, E, -
    H. Smith:     R, -, -, -
    PAY-OFFICE:   R, -, -, -
    WORLD:        -, -, -, -

Advanced ACLs can be very useful in many complex information sharing situations. They provide a great deal of flexibility in implementing system-specific policy and allow for customization to meet the security requirements of functional managers. Their flexibility also makes them more of a challenge to manage. The rules for determining access in the face of apparently conflicting ACL entries are not uniform across all implementations and can be confusing to security administrators. When such systems are introduced, they should be coupled with training to ensure their correct use.

17.3.1.4 Constrained User Interfaces

Often used in conjunction with ACLs are constrained user interfaces, which restrict users' access to specific functions by never allowing them to request the use of information, functions, or other specific system resources for which they do not have access. Three major types exist: (1) menus, (2) database views, and (3) physically constrained user interfaces.

Constrained user interfaces can provide a form of access control that closely models how an organization operates. Many systems allow administrators to restrict users' ability to use the operating system or application system directly. Users can only execute commands that are provided by the administrator, typically in the form of a menu; menu-driven systems are a common constrained user interface, where different users are provided different menus on the same system. Another means of restricting users is through restricted shells, which limit the system commands the user can invoke. The use of menus and shells can often make the system easier to use and can help reduce errors.

Database views are a mechanism for restricting user access to data contained in a database. It may be necessary to allow a user to access a database, but that user may not need access to all the data in the database (e.g., not all fields of a record nor all records in the database). Views can be used to enforce complex access requirements that are often needed in database situations, such as those based on the content of a field. For example, consider a situation where clerks maintain personnel records in a database, and each clerk is assigned a range of clients based upon last name (e.g., A-C, D-G). Instead of granting a user access to all records, the view can grant access to a record based upon the first letter of the last name field.
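To make the idea concrete, the sketch below builds such a view with Python's built-in sqlite3 module; the table, columns, and clerk range are invented for the example. In a real system, the DBMS would also have to deny the clerk direct access to the underlying table, or the view provides no protection.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE personnel (last_name TEXT, salary INTEGER)")
    conn.executemany("INSERT INTO personnel VALUES (?, ?)",
                     [("Adams", 50000), ("Baker", 52000), ("Davis", 61000)])

    # A view exposing only the records a clerk assigned last names A-C may see,
    # and only the non-salary fields.
    conn.execute("""
        CREATE VIEW clerk_a_to_c AS
        SELECT last_name FROM personnel
        WHERE last_name >= 'A' AND last_name < 'D'
    """)

    # The clerk's application queries the view, never the base table.
    print(conn.execute("SELECT * FROM clerk_a_to_c").fetchall())
    # [('Adams',), ('Baker',)]  -- Davis is outside the clerk's range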
Physically constrained user interfaces can also limit a user's abilities. A common example is an ATM, which provides only a limited number of physical buttons to select options; no alphabetic keyboard is usually present.

17.3.1.5 Security Labels

A security label is a designation assigned to a resource (such as a file). Labels can be used for a variety of purposes, including controlling access, specifying protective measures, or indicating additional handling instructions. In many implementations, once this designator has been set, it cannot be changed (except perhaps under carefully controlled conditions that are subject to auditing). For systems with stringent security requirements (such as those processing national security information), labels may be useful in access control.

One tool used to ease security labelling is categorizing data by similar protection requirements. For example, a label could be developed for "organization proprietary data," marking information that can be disclosed only to the organization's employees. Another label, "public data," could be used to mark information that is available to anyone.

When used for access control, labels are also assigned to user sessions. Users are permitted to initiate sessions with specific labels only. For example, a file bearing the label "Organization Proprietary Information" would not be accessible (readable) except during user sessions with the corresponding label. Moreover, only a restricted set of users would be able to initiate such sessions. The labels of the session and those of the files accessed during the session are used, in turn, to label output from the session. This ensures that information is uniformly protected throughout its life on the system.

Labels are a very strong form of access control; however, they are often inflexible and can be expensive to administer. Unlike permission bits or access control lists, labels cannot ordinarily be changed. Since labels are permanently linked to specific information, data cannot be disclosed by a user copying information and changing the access to that file so that the information is more accessible than the original owner intended. By removing users' ability to arbitrarily designate the accessibility of files they own, opportunities for certain kinds of human errors and malicious software problems are eliminated. In the example above, it would not be possible to copy Organization Proprietary Information into a file with a different label. This prevents inappropriate disclosure but can interfere with legitimate extraction of some information. Labels are well suited for consistently and uniformly enforcing access restrictions, although their administration and inflexibility can be a significant deterrent to their use.
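A minimal sketch of this behavior follows (Python; the equality-based rule is a simplification for illustration only — real label systems often use richer dominance relations):

    # A file marked "Organization Proprietary Information" is readable only in
    # sessions initiated with the same label, and session output inherits that
    # label, so proprietary data cannot flow into a less restricted file.
    def may_read(session_label: str, file_label: str) -> bool:
        return session_label == file_label

    def output_label(session_label: str) -> str:
        # Anything written by the session carries the session's own label.
        return session_label

    assert may_read("Organization Proprietary Information",
                    "Organization Proprietary Information")
    assert not may_read("Public Data", "Organization Proprietary Information")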
17.3.2 External Access Controls

External access controls are a means of controlling interactions between the system and outside people, systems, and services. External access controls use a wide variety of methods, often including a separate physical device (e.g., a computer) that sits between the system being protected and a network.

17.3.2.1 Port Protection Devices

Fitted to a communications port of a host computer, a port protection device (PPD) authorizes access to the port itself, prior to and independent of the computer's own access control functions. A PPD can be a separate device in the communications stream, or it may be incorporated into a communications device (e.g., a modem).[120] PPDs typically require a separate authenticator, such as a password, in order to access the communications port.

[120] Typically PPDs are found only in serial communications streams.

One of the most common PPDs is the dial-back modem. A typical dial-back modem sequence follows: a user calls the dial-back modem and enters a password. The modem hangs up on the user and performs a table lookup for the password provided. If the password is found, the modem places a return call to the user (at a previously specified number) to initiate the session. The return call itself also helps to protect against the use of lost or compromised accounts. This is, however, not always the case: malicious hackers can use advanced features such as call forwarding to reroute calls.

17.3.2.2 Secure Gateways/Firewalls

Often called firewalls, secure gateways block or filter access between two networks, often between a private network[121] and a larger, more public network such as the Internet, which attracts malicious hackers. Secure gateways allow internal users to connect to external networks while preventing malicious hackers from compromising the internal systems.[122]

[121] "Private network" is somewhat of a misnomer. Private does not mean that the organization's network is totally inaccessible to outsiders or that insiders are prohibited from using the outside network (or the network would be disconnected). It also does not mean that all the information on the network requires confidentiality protection. It does mean that a network (or part of a network) is, in some way, separated from another network.

[122] Questions frequently arise as to whether secure gateways help prevent the spread of viruses. In general, having a gateway scan transmitted files for viruses requires more system overhead than is practical, especially since the scanning would have to handle many different file formats. However, secure gateways may reduce the spread of network worms.

There are many types of secure gateways. Some of the most common are packet filtering (or screening) routers, proxy hosts, bastion hosts, dual-homed gateways, and screened-host gateways.

Some secure gateways are set up to allow all traffic to pass through except for specific traffic with known or suspected vulnerabilities or security problems, such as remote log-in services. Other secure gateways are set up to disallow all traffic except for specific types, such as e-mail. Some secure gateways can make access-control decisions based on the location of the requester. There are several technical approaches and mechanisms used to support secure gateways.
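A packet filtering router implements this kind of policy as an ordered rule table. The sketch below (Python; the ports, addresses, and first-match semantics are illustrative assumptions) models both styles at once: denying known-risky inbound services while allowing e-mail, and denying everything else inbound by default:

    # First-match rule table. A rule_port of None matches any port.
    RULES = [
        # (source,    port, action)
        ("external",  513,  "deny"),    # remote log-in (rlogin) -- known problems
        ("external",  23,   "deny"),    # telnet
        ("external",  25,   "allow"),   # inbound e-mail (SMTP)
        ("internal",  None, "allow"),   # internal users may connect outward
        ("external",  None, "deny"),    # default: block all other inbound traffic
    ]

    def decide(source: str, port: int) -> str:
        for rule_source, rule_port, action in RULES:
            if source == rule_source and rule_port in (None, port):
                return action
        return "deny"   # fail closed if no rule matches

    assert decide("external", 25) == "allow"
    assert decide("external", 513) == "deny"
    assert decide("internal", 80) == "allow"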
Because gateways provide security by restricting services or traffic, they can affect a system's usage. For this reason, firewall experts always emphasize the need for policy, so that appropriate officials decide how the organization will balance operational needs and security.

In addition to reducing the risks from malicious hackers, secure gateways have several other benefits. They can reduce internal system security overhead, since they allow an organization to concentrate security efforts on a limited number of machines. (This is similar to putting a guard on the first floor of a building instead of needing a guard on every floor.)

A second benefit is the centralization of services. A secure gateway can be used to provide a central management point for various services, such as advanced authentication (discussed in Chapter 16), e-mail, or public dissemination of information. Having a central management point can reduce system overhead and improve service.

17.3.2.3 Host-Based Authentication

Host-based authentication grants access based upon the identity of the host originating the request, instead of the identity of the user making the request. Many network applications in use today use host-based authentication to determine whether access is allowed. An example is the Network File System (NFS), which allows a server to make file systems and directories available to specific machines. Under certain circumstances it is fairly easy to masquerade as the legitimate host, especially if the masquerading host is physically located close to the host being impersonated. Security measures to protect against misuse of some host-based authentication systems are available (e.g., Secure RPC uses DES to provide a more secure identification of the client host).[123]

[123] RPC, or Remote Procedure Call, is the service used to implement NFS.

17.4 Administration of Access Controls

One of the most complex and challenging aspects of access control, administration involves implementing, monitoring, modifying, testing, and terminating user accesses on the system. These can be demanding tasks, even though they typically do not include making the actual decisions as to the type of access each user may have.[124] Decisions regarding accesses should be guided by organizational policy, employee job descriptions and tasks, information sensitivity, user "need-to-know" determinations, and many other factors.

[124] As discussed in the policy section earlier in this chapter, those decisions are usually the responsibility of the applicable application manager or cognizant management official. See also the discussion of system-specific policy in Chapters 5 and 10.

The administration of systems and security requires access to advanced functions (such as setting up a user account). The individuals who technically set up and modify who has access to what are very powerful users on the system; they are often called system or security administrators, and on some systems they are referred to as having privileged accounts. The type of access of these accounts varies considerably. Some administrator privileges, for example, may allow an individual to administer only one application or subsystem, while a higher level of privileges may allow for oversight and establishment of subsystem administrators. Normally, users who are security administrators have two accounts: one for regular use and one for security use. This can help protect the security account from compromise. Furthermore, additional I&A precautions, such as ensuring that administrator passwords are robust and changed regularly, are important to minimize opportunities for unauthorized individuals to gain access to these functions.

There are three basic approaches to administering access controls: centralized, decentralized, or a combination of the two. Each has relative advantages and disadvantages. Which is most appropriate in a given situation will depend upon the particular organization and its circumstances.

17.4.1 Centralized Administration

Using centralized administration, one office or individual is responsible for configuring access controls.
As users' information processing needs change, their accesses can be modified only through the central office, usually after requests have been approved by the appropriate official. This allows very strict control over information, because the ability to make changes resides with very few individuals. Each user's account can be centrally monitored, and closing all accesses for any user can be easily accomplished if that individual leaves the organization. Since relatively few individuals oversee the process, consistent and uniform procedures and criteria are usually not difficult to enforce. However, when changes are needed quickly, going through a central administration office can be frustrating and time-consuming.

17.4.2 Decentralized Administration

In decentralized administration, access is directly controlled by the owners or creators of the files, often the functional manager. This keeps control in the hands of those most accountable for the information, most familiar with it and its uses, and best able to judge who needs what kind of access. This may lead, however, to a lack of consistency among owners/creators as to procedures and criteria for granting user accesses and capabilities. Also, when requests are not processed centrally, it may be much more difficult to form a systemwide composite view of all user accesses on the system at any given time. Different application or data owners may inadvertently implement combinations of accesses that introduce conflicts of interest or that are in some other way not in the organization's best interest.[125] It may also be difficult to ensure that all accesses are properly terminated when an employee transfers internally or leaves an organization.

[125] Without the necessary review mechanisms, central administration does not a priori preclude this either.

17.4.3 Hybrid Approach

A hybrid approach combines centralized and decentralized administration. One typical arrangement is that central administration is responsible for the broadest and most basic accesses, and the owners/creators of files control the types of accesses or changes in users' abilities for the files under their control. The main disadvantage of a hybrid approach is the difficulty of adequately defining which accesses should be assignable locally and which should be assignable centrally.

17.5 Coordinating Access Controls

It is vital that the access controls protecting a system work together. At a minimum, three basic types of access controls should be considered: physical, operating system, and application. In general, access controls within an application are the most specific. However, for application access controls to be fully effective, they need to be supported by operating system access controls; otherwise, access can be made to application resources without going through the application.[126] Operating system and application access controls, in turn, need to be supported by physical access controls.

[126] For example, suppose logical access controls within an application block User A from viewing File F. If operating system access controls do not also block User A from viewing File F, User A can use a utility program (or another application) to view the file.

17.6 Interdependencies

Logical access controls are closely related to many other controls. Several of the most important relationships are discussed below.
Policy and Personnel. The most fundamental interdependencies of logical access control are with policy and personnel. Logical access controls are the technical implementation of system-specific and organizational policy, which stipulates who should be able to access what kinds of information, applications, and functions. These decisions are normally based on the principles of separation of duties and least privilege.

Audit Trails. As discussed earlier, logical access controls can be difficult to implement correctly. Also, it is sometimes not possible to make logical access control as precise, or fine-grained, as would be ideal for an organization. In such situations, users may either deliberately or inadvertently abuse their access. For example, access controls cannot prevent a user from modifying data the user is authorized to modify, even if the modification is incorrect. Auditing provides a way to identify abuse of access permissions. It also provides a means to review the actions of system or security administrators.

Identification and Authentication. In most logical access control scenarios, the identity of the user must be established before an access control decision can be made. The access control process then associates the permissible forms of access with that identity. This means that access control can only be as effective as the I&A process employed for the system.

Physical Access Control. Most systems can be compromised if someone can physically access the machine (i.e., the CPU or other major components) by, for example, restarting the system with different software. Logical access controls are, therefore, dependent on physical access controls (with the exception of encryption, which can depend solely on the strength of the algorithm and the secrecy of the key).

17.7 Cost Considerations

Incorporating logical access controls into a computer system involves the purchase or use of access control mechanisms, their implementation, and changes in user behavior.

Direct Costs. Among the direct costs associated with the use of logical access controls are the purchase and support of hardware, operating systems, and applications that provide the controls, as well as any add-on security packages. The most significant personnel cost in relation to logical access control is usually for administration (e.g., initially determining, assigning, and keeping access rights up to date). Label-based access control is available in a limited number of commercial products, but at greater cost and with less variety of selection. Role-based systems are becoming more available, but there are significant costs involved in customizing these systems for a particular organization. Training users to understand and use an access control system is another necessary cost.
Indirect Costs. The primary indirect cost associated with introducing logical access controls into a computer system is the effect on user productivity. There may be additional overhead involved in having individual users properly determine (when under their control) the protection attributes of information. Another indirect cost may arise when users cannot immediately access information necessary to accomplish their jobs because the permissions were incorrectly assigned (or have changed). This situation is familiar to most organizations that place strong emphasis on logical access controls.

References

Abrams, M.D., et al. A Generalized Framework for Access Control: An Informal Description. McLean, VA: Mitre Corporation, 1990.

Baldwin, R.W. "Naming and Grouping Privileges to Simplify Security Management in Large Databases." 1990 IEEE Symposium on Security and Privacy Proceedings. Oakland, CA: IEEE Computer Society Press, May 1990. pp. 116-132.

Caelli, William, Dennis Longley, and Michael Shain. Information Security Handbook. New York, NY: Stockton Press, 1991.

Cheswick, William, and Steven Bellovin. Firewalls and Internet Security. Reading, MA: Addison-Wesley Publishing Company, 1994.

Curry, D. Improving the Security of Your UNIX System. ITSTD-721-FR-90-21. Menlo Park, CA: SRI International, 1990.

Dinkel, Charles. Secure Data Network System Access Control Documents. NISTIR 90-4259. Gaithersburg, MD: National Institute of Standards and Technology, 1990.

Fites, P., and M. Kratz. Information Systems Security: A Practitioner's Reference. New York, NY: Van Nostrand Reinhold, 1993. Especially Chapters 1, 9, and 12.

Garfinkel, S., and G. Spafford. "UNIX Security Checklist." Practical UNIX Security. Sebastopol, CA: O'Reilly & Associates, Inc., 1991. pp. 401-413.

Gasser, Morrie. Building a Secure Computer System. New York, NY: Van Nostrand Reinhold, 1988.

Haykin, M., and R. Warner. Smart Card Technology: New Methods for Computer Access Control. Spec Pub 500-157. Gaithersburg, MD: National Institute of Standards and Technology, 1988.

Landwehr, C., C. Heitmeyer, and J. McLean. "A Security Model for Military Message Systems." ACM Transactions on Computer Systems, Vol. 2, No. 3, August 1984.

National Bureau of Standards. Guidelines for Security of Computer Applications. Federal Information Processing Standard Publication 73. June 1980.

Pfleeger, Charles. Security in Computing. Englewood Cliffs, NJ: Prentice-Hall, Inc., 1989.

President's Council on Integrity and Efficiency. Review of General Controls in Federal Computer Systems. Washington, DC: President's Council on Integrity and Efficiency, October 1988.

Salamone, S. "Internetwork Security: Unsafe at Any Node?" Data Communications. 22(12), 1993. pp. 61-68.

Sandhu, R. "Transaction Control Expressions for Separation of Duty." Fourth Annual Computer Security Applications Conference Proceedings. Orlando, FL, December 1988. pp. 282-286.

Thomsen, D.J. "Role-based Application Design and Enforcement." Fourth IFIP Workshop on Database Security Proceedings. International Federation for Information Processing, Halifax, England, September 1990.

Whiting, T. "Understanding VAX/VMS Security." Computers and Security. 11(8), 1992. pp. 695-698.
Chapter 18

AUDIT TRAILS

Audit trails maintain a record of system activity both by system and application processes and by user activity of systems and applications.[127] In conjunction with appropriate tools and procedures, audit trails can assist in detecting security violations, performance problems, and flaws in applications.[128]

[127] Some security experts distinguish between an audit trail and an audit log as follows: a log is a record of events made by a particular software package, and an audit trail is an entire history of an event, possibly drawing on several logs. However, common usage within the security community does not observe this distinction; therefore, this document does not distinguish between trails and logs.

[128] The type and amount of detail recorded by audit trails vary with both the technical capability of the logging application and managerial decisions. Therefore, when this chapter states that "audit trails can...," the reader should be aware that capabilities vary widely.

The Difference Between Audit Trails and Auditing. An audit trail is a series of records of computer events, about an operating system, an application, or user activities. A computer system may have several audit trails, each devoted to a particular type of activity. An event is any action that happens on a computer system; examples include logging into a system, executing a program, and opening a file. Auditing, by contrast, is the review and analysis of management, operational, and technical controls. The auditor can obtain valuable information about activity on a computer system from the audit trail. Audit trails improve the auditability of the computer system. Auditing is discussed in the assurance chapter.

Audit trails may be used as a support for regular system operations, as a kind of insurance policy, or as both. As insurance, audit trails are maintained but are not used unless needed, such as after a system outage. As a support for operations, audit trails are used to help system administrators ensure that the system or resources have not been harmed by hackers, insiders, or technical problems.

This chapter focuses on audit trails as a technical control, rather than on the process of security auditing, which is a review and analysis of the security of a system, as discussed in Chapter 9. This chapter discusses the benefits and objectives of audit trails, the types of audit trails, and some common implementation issues.

18.1 Benefits and Objectives

Audit trails can provide a means to help accomplish several security-related objectives, including individual accountability, reconstruction of events, intrusion detection, and problem analysis.

18.1.1 Individual Accountability

Audit trails are a technical mechanism that helps managers maintain individual accountability. By advising users that they are personally accountable for their actions, which are tracked by an audit trail that logs user activities, managers can help promote proper user behavior.[129] Users are less likely to attempt to circumvent security policy if they know that their actions will be recorded in an audit log.

[129] For a fuller discussion of changing employee behavior, see Chapter 13.

For example, audit trails can be used in concert with access controls to identify and provide information about users suspected of improper modification of data (e.g., introducing errors into a database). An audit trail may record "before" and "after" versions of records. (Depending upon the size of the file and the capabilities of the audit logging tools, this may be very resource-intensive.) Comparisons can then be made between the actual changes made to records and what was expected. This can help management determine whether errors were made by the user, by the system or application software, or by some other source.

Audit trails work in concert with logical access controls, which restrict use of system resources.
Granting users access to particular resources usually means that they need that access to accomplish their job. Authorized access, of course, can be misused, and this is where audit trail analysis is useful. While users cannot be prevented from using resources to which they have legitimate access authorization, audit trail analysis can be used to examine their actions. For example, consider a personnel office in which users have access to those personnel records for which they are responsible. Audit trails can reveal that an individual is printing far more records than the average user, which could indicate the selling of personal data. Another example might be an engineer who is using a computer for the design of a new product. Audit trail analysis could reveal that an outgoing modem was used extensively in the week before the engineer quit. This could be used to investigate whether proprietary data files were sent to an unauthorized party.

18.1.2 Reconstruction of Events

Audit trails can also be used to reconstruct events after a problem has occurred. Damage can be more easily assessed by reviewing audit trails of system activity to pinpoint how, when, and why normal operations ceased. Audit trail analysis can often distinguish between operator-induced errors (during which the system may have performed exactly as instructed) and system-created errors (e.g., arising from a poorly tested piece of replacement code). If, for example, a system fails or the integrity of a file (either program or data) is questioned, an analysis of the audit trail can reconstruct the series of steps taken by the system, the users, and the application. Knowledge of the conditions that existed at the time of, for example, a system crash can be useful in avoiding future outages. Additionally, if a technical problem occurs (e.g., the corruption of a data file), audit trails can aid in the recovery process (e.g., by using the record of changes made to reconstruct the file).

18.1.3 Intrusion Detection

Intrusion detection refers to the process of identifying attempts to penetrate a system and gain unauthorized access. If audit trails have been designed and implemented to record appropriate information, they can assist in intrusion detection. Although intrusion detection is normally thought of as a real-time effort, intrusions can be detected either in real time, by examining audit records as they are created (or through the use of other kinds of warning flags/notices), or after the fact (e.g., by examining audit records in a batch process).

Real-time intrusion detection is primarily aimed at outsiders attempting to gain unauthorized access to the system. It may also be used to detect changes in the system's performance indicative of, for example, a virus or worm attack.[130] There may be difficulties in implementing real-time auditing, including unacceptable system performance.

[130] Viruses and worms are forms of malicious code. A virus is a code segment that replicates by attaching copies of itself to existing executables. A worm is a self-replicating program.

After-the-fact identification may indicate that unauthorized access was attempted (or was successful). Attention can then be given to damage assessment or to reviewing the controls that were attacked.
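After-the-fact analysis is often a simple batch computation over the records. The sketch below (Python; the record format, counts, and the two-times-average threshold are invented for illustration) flags a user printing far more records than the average, as in the personnel-office example earlier in this chapter:

    # Each audit record is assumed to be (user_id, event, count).
    records = [
        ("clerk1", "print", 40),
        ("clerk2", "print", 55),
        ("clerk3", "print", 400),   # suspiciously high
    ]

    totals = {}
    for user, event, count in records:
        if event == "print":
            totals[user] = totals.get(user, 0) + count

    # Flag anyone printing more than twice the average volume.
    average = sum(totals.values()) / len(totals)
    flagged = [user for user, total in totals.items() if total > 2 * average]
    print(flagged)   # ['clerk3']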
18.1.4 Problem Analysis

Audit trails may also be used as on-line tools to help identify problems other than intrusions as they occur. This is often referred to as real-time auditing or monitoring. If a system or application is deemed critical to an organization's business or mission, real-time auditing may be implemented to monitor the status of these processes (although, as noted above, there can be difficulties with real-time analysis). An analysis of the audit trails may be able to verify that the system operated normally (i.e., that an error may have resulted from operator error, as opposed to a system-originated error). Such use of audit trails may be complemented by system performance logs. For example, a significant increase in the use of system resources (e.g., disk file space or outgoing modem use) could indicate a security problem.

18.2 Audit Trails and Logs

A system can maintain several different audit trails concurrently. There are typically two kinds of audit records: (1) an event-oriented log and (2) a record of every keystroke, often called keystroke monitoring. Event-based logs usually contain records describing system events, application events, or user events.

An audit trail should include sufficient information to establish what events occurred and who (or what) caused them. In general, an event record should specify when the event occurred, the user ID associated with the event, the program or command used to initiate the event, and the result. Date and time can help determine whether the user was a masquerader or the actual person specified.
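Those minimum fields translate directly into a record structure. A minimal sketch follows (Python; the specific field types and the sample values are illustrative assumptions):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class AuditEvent:
        """The minimum fields suggested above for an event-oriented audit record."""
        timestamp: datetime     # when the event occurred
        user_id: str            # who (or what) caused it
        program: str            # program or command used to initiate the event
        result: str             # outcome, e.g. "success" or "failure"

    event = AuditEvent(datetime.now(), "user1", "login", "failure")
    print(event)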
18.2.1 Keystroke Monitoring

Keystroke monitoring is the process used to view or record both the keystrokes entered by a computer user and the computer's response during an interactive session. Keystroke monitoring is usually considered a special case of audit trails. Examples of keystroke monitoring would include viewing characters as they are typed by users, reading users' electronic mail, and viewing other recorded information typed by users.[131]

[131] The Department of Justice has advised that an ambiguity in U.S. law makes it unclear whether keystroke monitoring is considered equivalent to an unauthorized telephone wiretap. The ambiguity results from the fact that current laws were written years before such concerns as keystroke monitoring or system intruders became prevalent. Additionally, no legal precedent has been set to determine whether keystroke monitoring is legal or illegal. System administrators conducting such monitoring might be subject to criminal and civil liabilities. The Department of Justice advises system administrators to protect themselves by giving notice to system users if keystroke monitoring is being conducted. Notice should include agency/organization policy statements, training on the subject, and a banner notice on each system being monitored. [NIST, CSL Bulletin, March 1993]

Some forms of routine system maintenance may record user keystrokes. This could constitute keystroke monitoring if the keystrokes are preserved along with the user identification so that an administrator could determine the keystrokes entered by specific users. Keystroke monitoring is conducted in an effort to protect systems and data from intruders who access the systems without authority or in excess of their assigned authority. Monitoring keystrokes typed by intruders can help administrators assess and repair damage caused by intruders.

18.2.2 Audit Events

System audit records are generally used to monitor and fine-tune system performance. Application audit trails may be used to discern flaws in applications or violations of security policy committed within an application. User audit records are generally used to hold individuals accountable for their actions. An analysis of user audit records may expose a variety of security violations, which might range from simple browsing to attempts to plant Trojan horses or gain unauthorized privileges.

The system itself enforces certain aspects of policy (particularly system-specific policy), such as access to files and access to the system itself. Monitoring the alteration of the system configuration files that implement the policy is important. If special accesses (e.g., security administrator access) have to be used to alter configuration files, the system should generate audit records whenever these accesses are used.

Sometimes a finer level of detail than system audit trails provide is required. Application audit trails can provide this greater level of recorded detail. If an application is critical, it can be desirable to record not only who invoked the application, but certain details specific to each use. For example, consider an e-mail application. It may be desirable to record who sent mail, as well as to whom they sent mail and the length of messages. Another example would be a database application. It may be useful to record who accessed what database, as well as the individual rows or columns of a table that were read (or changed or deleted), instead of just recording the execution of the database program.

Application-level audit record for a mail delivery system (sender and recipient addresses omitted in this sample):

    Apr 9 11:20:22 host1 AA06370: from=, size=3355, class=0
    Apr 9 11:20:23 host1 AA06370: to=, delay=00:00:02, stat=Sent
    Apr 9 11:59:51 host1 AA06436: from=, size=1424, class=0
    Apr 9 11:59:52 host1 AA06436: to=, delay=00:00:02, stat=Sent
    Apr 9 12:43:52 host1 AA06441: from=, size=2077, class=0
    Apr 9 12:43:53 host1 AA06441: to=, delay=00:00:01, stat=Sent
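At the application level, this kind of record is typically written by the application itself. The sketch below (Python; the mail function and log format are hypothetical, loosely echoing the sample above) logs who sent mail, to whom, and the message length:

    import logging

    # An application-level audit log for a hypothetical mail function.
    audit = logging.getLogger("mail.audit")
    logging.basicConfig(format="%(asctime)s %(name)s %(message)s",
                        level=logging.INFO)

    def send_mail(sender: str, recipient: str, body: str) -> None:
        # ... deliver the message here ...
        # Record the level of detail suggested above, which system-level
        # logs usually cannot provide.
        audit.info("from=%s to=%s size=%d", sender, recipient, len(body))

    send_mail("user1@host1", "user2@host1", "Quarterly figures attached.")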
A user audit trail monitors and logs user activity in a system or application by recording events initiated by the user (e.g., access of a file, record, or field; use of a modem).

User log showing a chronological list of commands executed by users:

    rcp      user1  ttyp0  0.02 secs  Fri Apr 8 16:02
    ls       user1  ttyp0  0.14 secs  Fri Apr 8 16:01
    clear    user1  ttyp0  0.05 secs  Fri Apr 8 16:01
    rpcinfo  user1  ttyp0  0.20 secs  Fri Apr 8 16:01
    nroff    user2  ttyp2  0.75 secs  Fri Apr 8 16:00
    sh       user2  ttyp2  0.02 secs  Fri Apr 8 16:00
    mv       user2  ttyp2  0.02 secs  Fri Apr 8 16:00
    sh       user2  ttyp2  0.03 secs  Fri Apr 8 16:00
    col      user2  ttyp2  0.09 secs  Fri Apr 8 16:00
    man      user2  ttyp2  0.14 secs  Fri Apr 8 15:57

Flexibility is a critical feature of audit trails. Ideally (from a security point of view), a system administrator would have the ability to monitor all system and user activity but could choose to log only certain functions at the system level and only certain functions within certain applications. The decision of how much to log and how much to review should be a function of application/data sensitivity and should be made by each functional manager/application owner, with guidance from the system administrator and the computer security manager/officer, weighing the costs and benefits of the logging.[132]

[132] In general, audit logging can have privacy implications. Users should be aware of applicable privacy laws, regulations, and policies that may apply in such situations.

18.2.2.1 System-Level Audit Trails

If a system-level audit capability exists, the audit trail should capture, at a minimum, any attempt to log on (successful or unsuccessful), the log-on ID, the date and time of each log-on attempt, the date and time of each log-off, the devices used, and the function(s) performed once logged on (e.g., the applications that the user tried, successfully or unsuccessfully, to invoke). System-level logging also typically includes information that is not specifically security-related, such as system operations, cost-accounting charges, and network performance.

A system audit trail should be able to identify failed log-on attempts, especially if the system does not limit the number of failed log-on attempts. Unfortunately, some system-level audit trails cannot detect attempted log-ons and therefore cannot log them for later review; they can only monitor and log successful log-ons and subsequent activity. To effectively detect intrusion, a record of failed log-on attempts is required.

Sample system log file showing authentication messages:

    Jan 27 17:14:04 host1 login: ROOT LOGIN console
    Jan 27 17:15:04 host1 shutdown: reboot by root
    Jan 27 17:18:38 host1 login: ROOT LOGIN console
    Jan 27 17:19:37 host1 reboot: rebooted by root
    Jan 28 09:46:53 host1 su: 'su root' succeeded for user1 on /dev/ttyp0
    Jan 28 09:47:35 host1 shutdown: reboot by user1
    Jan 28 09:53:24 host1 su: 'su root' succeeded for user1 on /dev/ttyp1
    Feb 12 08:53:22 host1 su: 'su root' succeeded for user1 on /dev/ttyp1
    Feb 17 08:57:50 host1 date: set by user1
    Feb 17 13:22:52 host1 su: 'su root' succeeded for user1 on /dev/ttyp0
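Reviewing such a log can be partially automated. The sketch below (Python; the string-matching rules are tailored to the sample format above and are illustrative only) summarizes privileged activity -- root console log-ons and successful uses of 'su root':

    sample_log = """\
    Jan 27 17:14:04 host1 login: ROOT LOGIN console
    Jan 28 09:46:53 host1 su: 'su root' succeeded for user1 on /dev/ttyp0
    Feb 17 13:22:52 host1 su: 'su root' succeeded for user1 on /dev/ttyp0
    """

    for line in sample_log.splitlines():
        line = line.strip()
        if "ROOT LOGIN" in line:
            print("root console log-on at", line[:15])
        elif "'su root' succeeded" in line:
            # Pull the user ID out of "... succeeded for <user> on <tty>".
            user = line.split(" for ")[1].split(" on ")[0]
            print("su to root by", user, "at", line[:15])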
18.2.2.2 Application-Level Audit Trails

System-level audit trails may not be able to track and log events within applications, or may not be able to provide the level of detail needed by application or data owners, the system administrator, or the computer security manager. In general, application-level audit trails monitor and log user activities, including data files opened and closed; specific actions such as reading, editing, and deleting records or fields; and printing reports. Some applications may be sensitive enough from a data availability, confidentiality, and/or integrity perspective that a "before" and "after" picture of each modified record (or of the data element(s) changed within a record) should be captured by the audit trail.

18.2.2.3 User Audit Trails

User audit trails can usually log:

    all commands directly initiated by the user;
    all identification and authentication attempts; and
    files and resources accessed.

It is most useful if the options and parameters of commands are also recorded. It is much more useful to know that a user tried to delete a log file (e.g., to hide unauthorized actions) than to know merely that the user issued the delete command, possibly for a personal data file.

Audit Logs for Physical Access

Physical access control systems (e.g., a card/key entry system or an alarm system) use software and audit trails similar to those of general-purpose computers. The following are examples of criteria that may be used in selecting which events to log:

The date and time the access was attempted or made should be logged, as should the gate or door through which the access was attempted or made, and the individual (or user ID) making the attempt.

Invalid attempts should be monitored and logged by noncomputer audit trails just as they are by computer-system audit trails. Management should be made aware if someone attempts to gain access during unauthorized hours.

Logged information should also include attempts to add, modify, or delete physical access privileges (e.g., granting a new employee access to the building or granting transferred employees access to their new office [and, of course, deleting their old access, as applicable]).

As with system and application audit trails, auditing of noncomputer functions can be implemented to send messages to security personnel indicating valid or invalid attempts to gain access to controlled spaces. In order not to desensitize a guard or monitor, not every access should result in a message being sent to a screen; only exceptions, such as failed access attempts, should be highlighted for those monitoring access.

18.3 Implementation Issues

Audit trail data requires protection: it must be available for use when needed, and it is not useful if it is not accurate. Also, the best planned and implemented audit trail is of limited value without timely review of the logged data. Audit trails may be reviewed periodically, as needed (often triggered by the occurrence of a security event), automatically in real time, or in some combination of these. System managers and administrators, with guidance from computer security personnel, should determine how long audit trail data will be maintained, either on the system or in archive files. Following are examples of implementation issues that may have to be addressed when using audit trails.

18.3.1 Protecting Audit Trail Data

Access to on-line audit logs should be strictly controlled. Computer security managers and system administrators or managers should have access for review purposes; however, security and/or administration personnel who maintain logical access functions may have no need for access to audit logs.

It is particularly important to ensure the integrity of audit trail data against modification, since intruders may try to "cover their tracks" by modifying audit trail records. Audit trail records should be protected by strong access controls to help prevent unauthorized access. One way to ensure integrity is to use digital signatures (see Chapter 19); another is to use write-once devices. The integrity of audit trail information may be particularly important when legal issues arise, such as when audit trails are used as legal evidence. (This may, for example, require daily printing and signing of the logs.) Questions of such legal issues should be directed to the cognizant legal counsel.
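One way to detect modification is to chain a keyed hash (a MAC) through the records -- a lightweight stand-in, for illustration, for the digital signatures mentioned above. In the sketch below (Python; the key and records are invented), the key would have to be kept away from the logged system, and the final MAC stored separately so that truncation of the log is also detectable:

    import hashlib
    import hmac

    KEY = b"audit-signing-key"   # in practice, kept off the logged system

    def append_record(log: list, record: str) -> None:
        """Chain each record to the previous MAC so edits are detectable."""
        previous = log[-1][1] if log else b""
        mac = hmac.new(KEY, previous + record.encode(), hashlib.sha256).digest()
        log.append((record, mac))

    def verify(log: list) -> bool:
        previous = b""
        for record, mac in log:
            expected = hmac.new(KEY, previous + record.encode(),
                                hashlib.sha256).digest()
            if not hmac.compare_digest(mac, expected):
                return False
            previous = mac
        return True

    log = []
    append_record(log, "Jan 27 17:14:04 host1 login: ROOT LOGIN console")
    append_record(log, "Jan 28 09:46:53 host1 su: 'su root' succeeded for user1")
    assert verify(log)

    # An intruder rewriting a record (keeping the old MAC) is detected.
    log[0] = ("Jan 27 17:14:04 host1 login: user9 LOGIN console", log[0][1])
    assert not verify(log)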
The confidentiality of audit trail information may also need protection, for example, if the audit trail records disclosure-sensitive information about users, such as transaction data containing personal information (e.g., "before" and "after" records of modifications to income tax data). Strong access controls and encryption can be particularly effective in preserving confidentiality.

18.3.2 Review of Audit Trails

Audit trails can be used to review what occurred after an event, for periodic reviews, and for real-time analysis. Reviewers should know what to look for to be effective in spotting unusual activity; they need to understand what normal activity looks like. Audit trail review is easier if the audit trail function can be queried by user ID, terminal ID, application name, date and time, or some other set of parameters to run reports of selected information.

Audit Trail Review After an Event. Following a known system or application software problem, a known violation of existing requirements by a user, or some unexplained system or user problem, the appropriate system-level or application-level administrator should review the audit trails. Review by the application/data owner would normally involve a separate report, based upon audit trail data, to determine whether their resources are being misused.

Periodic Review of Audit Trail Data. Application owners, data owners, system administrators, data processing function managers, and computer security managers should determine how much review of audit trail records is necessary, based on the importance of identifying unauthorized activities. This determination should have a direct correlation to the frequency of periodic reviews of audit trail data.

Real-Time Audit Analysis. Traditionally, audit trails are analyzed in a batch mode at regular intervals (e.g., daily). Audit records are archived during that interval for later analysis. Audit analysis tools can also be used in a real-time, or near real-time, fashion. Such intrusion detection tools are based on audit reduction, attack signature, and variance techniques. Manual review of audit records in real time is almost never feasible on large multiuser systems due to the volume of records generated. However, it might be possible to view all records associated with a particular user or application, and to view them in real time.[133]

[133] This is similar to keystroke monitoring, though, and may be legally restricted.
18.3.3 Tools for Audit Trail Analysis

Many types of tools have been developed to help reduce the amount of information contained in audit records, as well as to distill useful information from the raw data. Especially on larger systems, audit trail software can create very large files, which can be extremely difficult to analyze manually. The use of automated tools is likely to be the difference between unused audit trail data and a robust program. Some of the types of tools include:

Audit reduction tools are preprocessors designed to reduce the volume of audit records to facilitate manual review. Before a security review, these tools can remove many audit records known to have little security significance. (This alone may cut the number of records in the audit trail in half.) These tools generally remove records generated by specified classes of events, such as the records generated by nightly backups.

Trends/variance-detection tools look for anomalies in user or system behavior. It is possible to construct more sophisticated processors that monitor usage trends and detect major variations. For example, if a user typically logs in at 9 a.m. but appears at 4:30 a.m. one morning, this may indicate a security problem that needs to be investigated.

Attack signature-detection tools look for an attack signature, which is a specific sequence of events indicative of an unauthorized access attempt. A simple example would be repeated failed log-in attempts.
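That simple signature is easy to check mechanically. The sketch below (Python; the three-failures-in-five-minutes threshold is an invented example) flags any user with repeated failed log-ins inside a sliding time window:

    from datetime import datetime, timedelta

    THRESHOLD, WINDOW = 3, timedelta(minutes=5)

    def failed_login_bursts(events):
        """Flag users with THRESHOLD or more failed log-ins inside WINDOW.
        Each event is assumed to be (timestamp, user_id, succeeded)."""
        by_user = {}
        flagged = set()
        for when, user, succeeded in sorted(events):
            if succeeded:
                continue
            times = by_user.setdefault(user, [])
            times.append(when)
            # Keep only the failures still inside the sliding window.
            times[:] = [t for t in times if when - t <= WINDOW]
            if len(times) >= THRESHOLD:
                flagged.add(user)
        return flagged

    events = [
        (datetime(1995, 1, 27, 17, 0), "user1", False),
        (datetime(1995, 1, 27, 17, 1), "user1", False),
        (datetime(1995, 1, 27, 17, 2), "user1", False),
        (datetime(1995, 1, 27, 17, 3), "user2", True),
    ]
    print(failed_login_bursts(events))   # {'user1'}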
18.4 Interdependencies

The ability to audit supports many of the controls presented in this handbook. The following paragraphs describe some of the most important interdependencies.

Policy. The most fundamental interdependency of audit trails is with policy. Policy dictates who is authorized access to what system resources. Therefore it specifies, directly or indirectly, what violations of policy should be identified through audit trails.

Assurance. System auditing is an important aspect of operational assurance. The data recorded into an audit trail is used to support a system audit. The analysis of audit trail data and the process of auditing systems are closely linked; in some cases, they may even be the same thing. In most cases, the analysis of audit trail data is a critical part of maintaining operational assurance.

Identification and Authentication. Audit trails are tools often used to help hold users accountable for their actions. To be held accountable, the users must be known to the system (usually accomplished through the identification and authentication process). However, as mentioned earlier, audit trails record events and associate them with the perceived user (i.e., the user ID). If a user is impersonated, the audit trail will establish events but not the identity of the user.

Logical Access Control. Logical access controls restrict the use of system resources to authorized users. Audit trails complement this activity in two ways. First, they may be used to identify breakdowns in logical access controls or to verify that access control restrictions are behaving as expected, for example, if a particular user is erroneously included in a group permitted access to a file. Second, audit trails are used to audit use of resources by those who have legitimate access. Additionally, to protect audit trail files, access controls are used to ensure that audit trails are not modified.

Contingency Planning. Audit trails assist in contingency planning by leaving a record of activities performed on the system or within a specific application. In the event of a technical malfunction, this log can be used to help reconstruct the state of the system (or specific files).

Incident Response. If a security incident occurs, such as hacking, audit records and other intrusion detection methods can be used to help determine the extent of the incident. For example, was just one file browsed, or was a Trojan horse planted to collect passwords?

Cryptography. Digital signatures can be used to protect audit trails from undetected modification. (This does not prevent deletion or modification of the audit trail, but will provide an alert that the audit trail has been altered.) Digital signatures can also be used in conjunction with secure time stamps added to audit records. Encryption can be used if confidentiality of audit trail information is important.

18.5 Cost Considerations

Audit trails involve many costs. First, some system overhead is incurred recording the audit trail. Additional system overhead will be incurred storing and processing the records. The more detailed the records, the more overhead is required. Another cost involves the human and machine time required to do the analysis. This can be minimized by using tools to perform most of the analysis. Many simple analyzers can be constructed quickly (and cheaply) from system utilities, but they are limited to audit reduction and to identifying particularly sensitive events. More complex tools that identify trends or sequences of events are slowly becoming available as off-the-shelf software. (If complex tools are not available for a system, development may be prohibitively expensive. Some intrusion detection systems, for example, have taken years to develop.)

The final cost of audit trails is the cost of investigating anomalous events. If the system is identifying too many events as suspicious, administrators may spend undue time reconstructing events and questioning personnel.

References

Fites, P., and M. Kratz. Information Systems Security: A Practitioner's Reference. New York: Van Nostrand Reinhold, 1993 (especially Chapter 12, pp. 331-350).

Kim, G., and E. Spafford. "Monitoring File System Integrity on UNIX Platforms." Infosecurity News. 4(4), 1993. pp. 21-22.

Lunt, T. "Automated Audit Trail Analysis for Intrusion Detection." Computer Audit Update, April 1992. pp. 2-8.

National Computer Security Center. A Guide to Understanding Audit in Trusted Systems. NCSC-TG-001, Version 2. Ft. Meade, MD, 1988.

National Institute of Standards and Technology. "Guidance on the Legality of Keystroke Monitoring." CSL Bulletin. March 1993.

Phillips, P.W. "New Approach Identifies Malicious System Activity." Signal. 46(7), 1992. pp. 65-66.

Ruthberg, Z., et al. Guide to Auditing for Controls and Security: A System Development Life Cycle Approach. Special Publication 500-153. Gaithersburg, MD: National Bureau of Standards, 1988.

Stoll, Clifford. The Cuckoo's Egg. New York, NY: Doubleday, 1989.

Chapter 19

CRYPTOGRAPHY

Cryptography is a branch of mathematics based on the transformation of data. It provides an important tool for protecting information and is used in many aspects of computer security. For example, cryptography can help provide data confidentiality, integrity, electronic signatures, and advanced user authentication. Although cryptography is traditionally associated only with keeping data secret, modern cryptography provides many other security services as well, and although it relies upon advanced mathematics, users can reap its benefits without understanding its mathematical underpinnings.

This chapter describes cryptography as a tool for satisfying a wide spectrum of computer security needs and requirements.
It describes fundamental aspects of the basic cryptographic technologies and some specific ways cryptography can be applied to improve security. The chapter also explores some of the important issues that should be considered when incorporating cryptography into computer systems.

19.1 Basic Cryptographic Technologies

Cryptography relies upon two basic components: an algorithm (or cryptographic methodology) and a key. In modern cryptographic systems, algorithms are complex mathematical formulae and keys are strings of bits. For two parties to communicate, they must use the same algorithm (or algorithms that are designed to work together). In some cases, they must also use the same key. Many cryptographic keys must be kept secret; sometimes algorithms are also kept secret.

There are two basic types of cryptography: secret key systems (also called symmetric systems) and public key systems (also called asymmetric systems). Table 19.1 compares some of the distinct features of secret and public key systems. Both types of systems offer advantages and disadvantages. Often, the two are combined to form a hybrid system that exploits the strengths of each type. To determine which type of cryptography best meets its needs, an organization first has to identify its security requirements and operating environment.

Table 19.1: Distinct Features of Secret Key and Public Key Cryptography

  DISTINCT FEATURE      SECRET KEY CRYPTOGRAPHY        PUBLIC KEY CRYPTOGRAPHY
  Number of keys        Single key.                    Pair of keys.
  Types of keys         Key is secret.                 One key is private; one key is public.
  Protection of keys    Disclosure and modification.   Disclosure and modification for private
                                                       keys; modification for public keys.
  Relative speeds       Faster.                        Slower.

19.1.1 Secret Key Cryptography

In secret key cryptography, two (or more) parties share the same key, and that key is used to encrypt and decrypt data. As the name implies, secret key cryptography relies on keeping the key secret. If the key is compromised, the security offered by cryptography is severely reduced or eliminated. Secret key cryptography assumes that the parties who share a key rely upon each other not to disclose the key and to protect it against modification. Secret key cryptography has been in use for centuries; early forms merely transposed the written characters to hide the message.

The best known secret key system is the Data Encryption Standard (DES), published by NIST as Federal Information Processing Standard (FIPS) 46-2. Although the adequacy of DES has at times been questioned, these claims remain unsubstantiated, and DES remains strong. It is the most widely accepted, publicly available cryptographic system today. The American National Standards Institute (ANSI) has adopted DES as the basis for encryption, integrity, access control, and key management standards.

The Escrowed Encryption Standard, published as FIPS 185, also makes use of a secret key system. (See the discussion of key escrow encryption in this chapter.)
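To illustrate the defining property of a secret key system, that the same key both encrypts and decrypts, here is a toy Python sketch. It derives a keystream from the shared key with SHA-256 and XORs it with the data. This is purely illustrative: it is not DES, and it should never be used to protect real data.

    import hashlib

    def keystream(key: bytes, length: int) -> bytes:
        """Expand a shared secret key into a keystream (toy construction)."""
        stream = b""
        counter = 0
        while len(stream) < length:
            stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return stream[:length]

    def crypt(key: bytes, data: bytes) -> bytes:
        """XOR data with the keystream; applying it twice restores the data."""
        return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

    shared_key = b"both parties must protect this key"
    ciphertext = crypt(shared_key, b"Attack at dawn")
    plaintext = crypt(shared_key, ciphertext)   # the same key decrypts
    print(ciphertext.hex())
    print(plaintext)                            # b'Attack at dawn'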
19.1.2 Public Key Cryptography

Public key cryptography is a modern invention and requires the use of advanced mathematics. Whereas secret key cryptography uses a single key shared by two (or more) parties, public key cryptography uses a pair of keys for each party. One of the keys of the pair is "public" and the other is "private." The public key can be made known to other parties; the private key must be kept confidential and must be known only to its owner. Both keys, however, need to be protected against modification.

Public key cryptography is particularly useful when the parties wishing to communicate cannot rely upon each other or do not share a common key. There are several public key cryptographic systems. One of the first public key systems is RSA, which can provide many different security services. The Digital Signature Standard (DSS), described later in the chapter, is another example of a public key system.

19.1.3 Hybrid Cryptographic Systems

Public and secret key cryptography have relative advantages and disadvantages. Although public key cryptography does not require users to share a common key, secret key cryptography is much faster: equivalent implementations of secret key cryptography can run 1,000 to 10,000 times faster than public key cryptography.

To maximize the advantages and minimize the disadvantages of both secret and public key cryptography, a computer system can use both types in a complementary manner, with each performing different functions. Typically, the speed advantage of secret key cryptography means that it is used for encrypting bulk data. Public key cryptography is used for applications that are less demanding of a computer system's resources, such as encrypting the keys used by secret key cryptography (for automated key distribution) or signing messages.
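The following Python sketch illustrates a hybrid system of the kind just described, under heavy simplifying assumptions: it reuses the toy XOR cipher from the earlier sketch as the secret key component, and uses textbook-sized RSA numbers (real keys are hundreds of digits long) as the public key component. It is an illustration of the pattern only, not a usable cryptosystem.

    import hashlib
    import secrets

    # Toy RSA key pair (textbook numbers, illustration only).
    N, PUBLIC_E, PRIVATE_D = 3233, 17, 2753

    def keystream(key: bytes, length: int) -> bytes:
        # Toy secret key cipher: expand the key with SHA-256, XOR with the data.
        stream = b""
        counter = 0
        while len(stream) < length:
            stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return stream[:length]

    def xor_crypt(key: bytes, data: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

    # Sender: encrypt the bulk data with a fresh secret session key (fast),
    # then encrypt the session key with the recipient's public key (slow, but
    # the session key is small).
    session_key = secrets.randbelow(N - 2) + 2           # toy-sized session key
    ciphertext = xor_crypt(str(session_key).encode(), b"The payroll file...")
    wrapped_key = pow(session_key, PUBLIC_E, N)          # public key operation

    # Recipient: recover the session key with the private key, then decrypt.
    recovered = pow(wrapped_key, PRIVATE_D, N)           # private key operation
    plaintext = xor_crypt(str(recovered).encode(), ciphertext)
    print(plaintext)   # b'The payroll file...'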
19.1.4 Key Escrow

Because cryptography can provide extremely strong encryption, it can thwart the government's efforts to lawfully perform electronic surveillance. For example, if strong cryptography is used to encrypt a phone conversation, a court-authorized wiretap will not be effective. To meet the needs of the government and to provide privacy, the federal government has adopted voluntary key escrow cryptography. This technology allows the use of strong encryption, but also allows the government, when legally authorized, to obtain decryption keys held by escrow agents. NIST has published the Escrowed Encryption Standard as FIPS 185. Under the federal government's voluntary key escrow initiative, the decryption keys are split into parts and given to separate escrow authorities. Access to one part of the key does not help decrypt the data; both parts must be obtained.

19.2 Uses of Cryptography

Cryptography is used to protect data both inside and outside the boundaries of a computer system. Outside the computer system, cryptography is sometimes the only way to protect data. While in a computer system, data is normally protected with logical and physical access controls (perhaps supplemented by cryptography). However, when in transit across communications lines or resident on someone else's computer, data cannot be protected by the originator's logical or physical access controls. (The originator does not have to be the original creator of the data; it can also be a guardian or custodian of the data.) Cryptography provides a solution by protecting data even when the data is no longer in the control of the originator.

19.2.1 Data Encryption

One of the best ways to obtain cost-effective data confidentiality is through the use of encryption. Encryption transforms intelligible data, called plaintext, into an unintelligible form, called ciphertext. (Plaintext can be intelligible to a human, e.g., a novel, or to a machine, e.g., executable code.) This process is reversed through decryption. Once data is encrypted, the ciphertext does not have to be protected against disclosure. However, if ciphertext is modified, it will not decrypt correctly.

Both secret key and public key cryptography can be used for data encryption, although not all public key algorithms provide for data encryption.

To use a secret key algorithm, data is encrypted using a key. The same key must be used to decrypt the data. When public key cryptography is used for encryption, any party may use any other party's public key to encrypt a message; however, only the party with the corresponding private key can decrypt, and thus read, the message. Since secret key encryption is typically much faster, it is normally used for encrypting larger amounts of data.

19.2.2 Integrity

In computer systems, it is not always possible for humans to scan information to determine if data has been erased, added, or modified. Even if scanning were possible, the individual may have no way of knowing what the correct data should be. For example, "do" may be changed to "do not," or $1,000 may be changed to $10,000. It is therefore desirable to have an automated means of detecting both intentional and unintentional modifications of data.

While error-detecting codes have long been used in communications protocols (e.g., parity bits), these are more effective in detecting (and correcting) unintentional modifications; they can be defeated by adversaries. Cryptography can effectively detect both intentional and unintentional modification; however, cryptography does not protect files from being modified. Both secret key and public key cryptography can be used to ensure integrity. Although newer public key methods may offer more flexibility than the older secret key method, secret key integrity verification systems have been successfully integrated into many applications.

When secret key cryptography is used, a message authentication code (MAC) is calculated from and appended to the data. To verify that the data has not been modified at a later time, any party with access to the correct secret key can recalculate the MAC. The new MAC is compared with the original MAC, and if they are identical, the verifier has confidence that the data has not been modified by an unauthorized party. FIPS 113, Computer Data Authentication, specifies a standard technique for calculating a MAC for integrity verification.

Public key cryptography verifies integrity by using public key signatures and secure hashes. A secure hash algorithm is used to create a message digest. The message digest, called a hash, is a short form of the message that changes if the message is modified. The hash is then signed with a private key. Anyone can recalculate the hash and use the corresponding public key to verify the integrity of the message. (A secure hash alone is sometimes used for integrity verification. However, this can be defeated if the hash is not stored in a secure location, since it may be possible for someone to change the message and then replace the old hash with a new one based on the modified message.)
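A minimal sketch of secret key integrity verification follows, in Python. Note that the MAC technique standardized in FIPS 113 is DES-based; the HMAC-SHA-256 construction used here is a different algorithm, substituted purely so the sketch can rely on Python's standard library.

    import hmac
    import hashlib

    def compute_mac(secret_key: bytes, data: bytes) -> bytes:
        """Calculate a MAC over the data using the shared secret key."""
        return hmac.new(secret_key, data, hashlib.sha256).digest()

    key = b"shared secret between the two parties"
    record = b"Pay to J. Smith: $1,000"
    mac = compute_mac(key, record)          # stored or sent along with the data

    # Later, any holder of the key recalculates and compares the MAC.
    tampered = b"Pay to J. Smith: $10,000"
    print(hmac.compare_digest(mac, compute_mac(key, record)))    # True
    print(hmac.compare_digest(mac, compute_mac(key, tampered)))  # False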
19.2.3 Electronic Signatures

Today's computer systems store and process increasing numbers of paper-based documents in electronic form. Having documents in electronic form permits rapid processing and transmission and improves overall efficiency. However, approval of a paper document has traditionally been indicated by a written signature. What is needed, therefore, is the electronic equivalent of a written signature that can be recognized as having the same legal status as a written signature. In addition to the integrity protections discussed above, cryptography can provide a means of linking a document with a particular person, as is done with a written signature. Electronic signatures can use either secret key or public key cryptography; however, public key methods are generally easier to use.

What Is an Electronic Signature?

An electronic signature is a cryptographic mechanism that performs a function similar to a written signature. It is used to verify the origin and contents of a message. For example, a recipient of data (e.g., an e-mail message) can verify who signed the data and that the data was not modified after being signed. This also means that the originator (e.g., the sender of an e-mail message) cannot falsely deny having signed the data.

Cryptographic signatures provide extremely strong proof that a message has not been altered and was signed by a specific key. (Electronic signatures rely on the secrecy of the keys and the link, or binding, between the owner of a key and the key itself. If a key is compromised, by theft, coercion, or trickery, then the electronic originator of a message may not be the same as the owner of the key. Although the binding of cryptographic keys to actual people is a significant problem, it does not necessarily make electronic signatures less secure than written signatures: trickery and coercion are problems for written signatures as well, and written signatures are easily forged.) However, there are other mechanisms besides cryptographic-based electronic signatures that perform a similar function. These mechanisms provide some assurance of the origin of a message, some verification of the message's integrity, or both. (Their strength relative to electronic signatures varies depending on the specific implementation; in general, electronic signatures are stronger and more flexible. They may be used in conjunction with electronic signatures or separately, depending upon the system's specific needs and limitations.)

Examination of the transmission path of a message. When messages are sent across a network, such as the Internet, the message source and the physical path of the message are recorded as a part of the message. These can be examined electronically or manually to help ascertain the origin of a message.

Use of a value-added network provider. If two or more parties are communicating via a third-party network, the network provider may be able to provide assurance that messages originate from a given source and have not been modified.

Acknowledgment statements. The recipient of an electronic message may confirm the message's origin and contents by sending back an acknowledgment statement.
Use of audit trails. Audit trails can track the sending of messages and their contents for later reference.

Simply taking a digital picture of a written signature does not provide adequate security. Such a digitized written signature could easily be copied from one electronic document to another, with no way to determine whether it is legitimate. Electronic signatures, on the other hand, are unique to the message being signed and will not verify if they are copied to another document.

19.2.3.1 Secret Key Electronic Signatures

An electronic signature can be implemented using secret key message authentication codes (MACs). For example, if two parties share a secret key, and one party receives data with a MAC that is correctly verified using the shared key, that party may assume that the other party signed the data. This assumes, however, that the two parties trust each other. Thus, through the use of a MAC, a form of electronic signature is obtained in addition to data integrity. Using additional controls, such as key notarization and key attributes, it is possible to provide an electronic signature even if the two parties do not trust each other. Systems incorporating message authentication technology have been approved for use by the federal government as a replacement for written signatures on electronic documents.

19.2.3.2 Public Key Electronic Signatures

Another type of electronic signature, called a digital signature, is implemented using public key cryptography. Data is electronically signed by applying the originator's private key to the data. (The exact mathematical process for doing this is not important for this discussion.) To increase the speed of the process, the private key is applied to a shorter form of the data, called a "hash" or "message digest," rather than to the entire set of data. The resulting digital signature can be stored or transmitted along with the data. The signature can be verified by any party using the public key of the signer. This feature is very useful, for example, when distributing signed copies of virus-free software: any recipient can verify that the program remains virus-free. If the signature verifies properly, then the verifier has confidence that the data was not modified after being signed and that the owner of the public key was the signer.

NIST has published standards for a digital signature and a secure hash for use by the federal government in FIPS 186, Digital Signature Standard, and FIPS 180, Secure Hash Standard.
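The following Python sketch illustrates the hash-then-sign pattern just described, reusing textbook-sized toy RSA numbers. It is not the Digital Signature Standard; real signatures use far larger keys and standardized algorithms.

    import hashlib

    # Toy RSA pair (illustration only): signing uses the private key,
    # verification uses the public key.
    N, PUBLIC_E, PRIVATE_D = 3233, 17, 2753

    def digest(message: bytes) -> int:
        """Hash the message and reduce it to the toy modulus range."""
        return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

    def sign(message: bytes) -> int:
        return pow(digest(message), PRIVATE_D, N)            # private key operation

    def verify(message: bytes, signature: int) -> bool:
        return pow(signature, PUBLIC_E, N) == digest(message)  # public key operation

    program = b"contents of a virus-free program"
    sig = sign(program)                 # distributed along with the program
    print(verify(program, sig))         # True
    print(verify(program + b"!", sig))  # False with overwhelming probability
                                        # (the toy modulus truncates the hash)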
19.2.4 User Authentication

Cryptography can increase security in user authentication techniques. As discussed in Chapter 16, cryptography is the basis for several advanced authentication methods. Instead of communicating passwords over an open network, authentication can be performed by demonstrating knowledge of a cryptographic key. Using these methods, a one-time password, which is not susceptible to eavesdropping, can be used. User authentication can use either secret or public key cryptography.

19.3 Implementation Issues

This section explores several important issues that should be considered when using (e.g., designing, implementing, integrating) cryptography in a computer system.

19.3.1 Selecting Design and Implementation Standards

NIST and other organizations have developed numerous standards for designing, implementing, and using cryptography and for integrating it into automated systems. By using these standards, organizations can reduce costs and protect their investments in technology. Standards provide solutions that have been accepted by a wide community and that have been reviewed by experts in relevant areas. Standards help ensure interoperability among different vendors' equipment, thus allowing an organization to select from among various products in order to find cost-effective equipment. Applicable security standards provide a common level of security and interoperability among users.

Managers and users of computer systems will have to select among various standards when deciding to use cryptography. Their selection should be based on cost-effectiveness analysis, trends in the standard's acceptance, and interoperability requirements. In addition, each standard should be carefully analyzed to determine if it is applicable to the organization and the desired application. For example, the Data Encryption Standard and the Escrowed Encryption Standard are both applicable to certain applications involving communications of data over commercial modems. Some federal standards are mandatory for federal computer systems, including DES (FIPS 46-2) and the DSS (FIPS 186).

19.3.2 Deciding on Hardware vs. Software Implementations

The trade-offs among security, cost, simplicity, efficiency, and ease of implementation need to be studied by managers acquiring various security products meeting a standard. Cryptography can be implemented in either hardware or software. Each has its related costs and benefits.

In general, software is less expensive and slower than hardware, although for large applications, hardware may be less expensive. In addition, software may be less secure, since it is more easily modified or bypassed than equivalent hardware products. Tamper resistance is usually considered better in hardware.

In many cases, cryptography is implemented in a hardware device (e.g., an electronic chip or ROM-protected processor) but is controlled by software. This software requires integrity protection to ensure that the hardware device is provided with correct information (i.e., controls, data) and is not bypassed. Thus, a hybrid solution is generally provided, even when the basic cryptography is implemented in hardware. Effective security requires the correct management of the entire hybrid solution.

19.3.3 Managing Keys

The proper management of cryptographic keys is essential to the effective use of cryptography for security. Ultimately, the security of information protected by cryptography directly depends upon the protection afforded to keys.

All keys need to be protected against modification, and secret keys and private keys need protection against unauthorized disclosure. Key management involves the procedures and protocols, both manual and automated, used throughout the entire life cycle of the keys. This includes the generation, distribution, storage, entry, use, destruction, and archiving of cryptographic keys.

With secret key cryptography, the secret key(s) should be securely distributed (i.e., safeguarded against unauthorized replacement, modification, and disclosure) to the parties wishing to communicate. Depending upon the number and location of users, this task may not be trivial. Automated techniques for generating and distributing cryptographic keys can ease the overhead costs of key management, but some resources have to be devoted to this task. FIPS 171, Key Management Using ANSI X9.17, provides key management solutions for a variety of operational environments.
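As a small aside on why secret key distribution "may not be trivial": every communicating pair needs its own shared key, so the number of keys grows quadratically with the number of users. A short Python sketch, assuming a hypothetical roster of users:

    import secrets
    from itertools import combinations

    users = ["alice", "bob", "carol", "dave"]   # hypothetical user community

    # Each communicating pair needs its own shared secret key.
    pairwise_keys = {pair: secrets.token_bytes(16)
                     for pair in combinations(users, 2)}

    print(len(pairwise_keys))   # n*(n-1)/2 keys: 6 keys for 4 users
    # For 1,000 users this is already 499,500 keys to generate and distribute,
    # one reason automated key management techniques (such as those in
    # FIPS 171 / ANSI X9.17) or public key distribution are attractive.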
Public key cryptography users also have to satisfy certain key management requirements. For example, since a private-public key pair is associated with (i.e., generated or held by) a specific user, it is necessary to bind the public part of the key pair to the user. (In some cases, the key may be bound to a position or an organization, rather than to an individual user.)

In a small community of users, public keys and their "owners" can be strongly bound by simply exchanging public keys (e.g., putting them on a CD-ROM or other media). However, conducting electronic business on a larger scale, potentially involving geographically and organizationally distributed users, necessitates a means for obtaining public keys electronically with a high degree of confidence in their integrity and binding to individuals. The support for the binding between a key and its owner is generally referred to as a public key infrastructure.

Users also need to be able to enter the community of key holders, generate keys (or have them generated on their behalf), disseminate public keys, revoke keys (for example, in case of compromise of the private key), and change keys. In addition, it may be necessary to build in time/date stamping and to archive keys for verification of old signatures.

19.3.4 Security of Cryptographic Modules

Cryptography is typically implemented in a module of software, firmware, hardware, or some combination thereof. This module contains the cryptographic algorithm(s), certain control parameters, and temporary storage facilities for the key(s) being used by the algorithm(s). The proper functioning of the cryptography requires the secure design, implementation, and use of the cryptographic module. This includes protecting the module against tampering.

FIPS 140-1, Security Requirements for Cryptographic Modules, specifies the physical and logical security requirements for cryptographic modules. The standard defines four security levels for cryptographic modules, with each level providing a significant increase in security over the preceding level. The four levels allow for cost-effective solutions that are appropriate for different degrees of data sensitivity and different application environments. The user can select the best module for any given application or system, avoiding the cost of unnecessary security features.

19.3.5 Applying Cryptography to Networks

The use of cryptography within networking applications often requires special considerations. In these applications, the suitability of a cryptographic module may depend on its capability for handling the special requirements imposed by locally attached communications equipment or by the network protocols and software.

Encrypted information, MACs, or digital signatures may require transparent communications protocols or equipment to avoid being misinterpreted by the communications equipment or software as control information. It may be necessary to format the encrypted information, MAC, or digital signature to ensure that it does not confuse the communications equipment or software. It is essential that cryptography satisfy the requirements imposed by the communications equipment and not interfere with the proper and efficient operation of the network.

Data is encrypted on a network using either link or end-to-end encryption. In general, link encryption is performed by service providers, such as a data communications provider. Link encryption encrypts all of the data along a communications path (e.g., a satellite link, telephone circuit, or T1 line). Since link encryption also encrypts routing data, communications nodes need to decrypt the data to continue routing. End-to-end encryption is generally performed by the end-user organization. Although data remains encrypted when being passed through a network, routing information remains visible. It is possible to combine both types of encryption.
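A toy Python sketch of the distinction, representing a packet as a routing header plus payload. The encrypt function is a stand-in for any real cipher, and the field names are hypothetical simplifications.

    def encrypt(data: str) -> str:
        """Stand-in for a real cipher; only marks the data as encrypted."""
        return f"<encrypted:{data}>"

    packet = {"routing_header": "from LAN-A to host-B", "payload": "payroll data"}

    # Link encryption: the whole packet, header included, is encrypted along
    # the communications path, so each intermediate node must decrypt it to
    # read the routing header and continue routing.
    link_encrypted = encrypt(str(packet))

    # End-to-end encryption: only the payload is encrypted; routing
    # information stays visible, so nodes can forward without decrypting.
    end_to_end = {"routing_header": packet["routing_header"],
                  "payload": encrypt(packet["payload"])}

    print(link_encrypted)
    print(end_to_end)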
19.3.6 Complying with Export Rules

The U.S. Government controls the export of cryptographic implementations. The rules governing export can be quite complex, since they consider multiple factors. In addition, cryptography is a rapidly changing field, and rules may change from time to time. Questions concerning the export of a particular implementation should be addressed to appropriate legal counsel.

19.4 Interdependencies

There are many interdependencies among cryptography and other security controls highlighted in this handbook. Cryptography both depends on other security safeguards and assists in providing them.

Physical Security. Physical protection of a cryptographic module is required to prevent, or at least detect, physical replacement or modification of the cryptographic system and the keys within it. In many environments (e.g., open offices, portable computers), the cryptographic module itself has to provide the desired levels of physical security. In other environments (e.g., closed communications facilities, steel-encased Cash-Issuing Terminals), a cryptographic module may be safely employed within a secured facility.

User Authentication. Cryptography can be used both to protect passwords that are stored in computer systems and to protect passwords that are communicated between computers. Furthermore, cryptographic-based authentication techniques may be used in conjunction with, or in place of, password-based techniques to provide stronger authentication of users.

Logical Access Control. In many cases, cryptographic software may be embedded within a host system, and it may not be feasible to provide extensive physical protection to the host system. In these cases, logical access control may provide a means of isolating the cryptographic software from other parts of the host system and for protecting the cryptographic software from tampering and the keys from replacement or disclosure. The use of such controls should provide the equivalent of physical protection.

Audit Trails. Cryptography may play a useful role in audit trails. For example, audit records may need to be signed. Cryptography may also be needed to protect audit records stored on computer systems from disclosure or modification. Audit trails are also used to help support electronic signatures.

Assurance. Assurance that a cryptographic module is properly and securely implemented is essential to the effective use of cryptography. NIST maintains validation programs for several of its standards for cryptography. Vendors can have their products validated for conformance to the standard through a rigorous set of tests.
Such testing provides increased assurance that a module meets stated standards, and system designers, integrators, and users can have greater confidence that validated products conform to accepted standards.

A cryptographic system should be monitored and periodically audited to ensure that it is satisfying its security objectives. All parameters associated with correct operation of the cryptographic system should be reviewed, and operation of the system itself should be periodically tested and the results audited. Certain information, such as secret keys or private keys in public key systems, should not be subject to audit. However, nonsecret or nonprivate keys could be used in a simulated audit procedure.

19.5 Cost Considerations

Using cryptography to protect information has both direct and indirect costs. Cost is determined in part by product availability; a wide variety of products exist for implementing cryptography in integrated circuits, add-on boards or adapters, and stand-alone units.

19.5.1 Direct Costs

The direct costs of cryptography include:

Acquiring or implementing the cryptographic module and integrating it into the computer system. The medium (i.e., hardware, software, firmware, or a combination) and various other issues, such as level of security, logical and physical configuration, and special processing requirements, will have an impact on cost.

Managing the cryptography and, in particular, managing the cryptographic keys, which includes key generation, distribution, archiving, and disposition, as well as security measures to protect the keys, as appropriate.

19.5.2 Indirect Costs

The indirect costs of cryptography include:

A decrease in system or network performance, resulting from the additional overhead of applying cryptographic protection to stored or communicated data.

Changes in the way users interact with the system, resulting from more stringent security enforcement. However, cryptography can be made nearly transparent to the users so that the impact is minimal.

References

Alexander, M., ed. "Protecting Data With Secret Codes." Infosecurity News. 4(6), 1993. pp. 72-78.

American Bankers Association. American National Standard for Financial Institution Key Management (Wholesale). ANSI X9.17-1985. Washington, DC, 1985.

Denning, P., and D. Denning. "The Clipper and Capstone Encryption Systems." American Scientist. 81(4), 1993. pp. 319-323.

Diffie, W., and M. Hellman. "New Directions in Cryptography." IEEE Transactions on Information Theory. Vol. IT-22, No. 6, November 1976. pp. 644-654.

Duncan, R. "Encryption ABCs." Infosecurity News. 5(2), 1994. pp. 36-41.

International Organization for Standardization. Information Processing Systems - Open Systems Interconnection Reference Model - Part 2: Security Architecture. ISO 7498/2. 1988.

Meyer, C.H., and S.M. Matyas. Cryptography: A New Dimension in Computer Data Security. New York, NY: John Wiley & Sons, 1982.

Nechvatal, James. Public-Key Cryptography. Special Publication 800-2. Gaithersburg, MD: National Institute of Standards and Technology, April 1991.

National Bureau of Standards. Computer Data Authentication. Federal Information Processing Standard Publication 113. May 30, 1985.

National Institute of Standards and Technology. "Advanced Authentication Technology." Computer Systems Laboratory Bulletin. November 1991.
National Institute of Standards and Technology. Data Encryption Standard. Federal Information Processing Standard Publication 46-2. December 30, 1993.

National Institute of Standards and Technology. "Digital Signature Standard." Computer Systems Laboratory Bulletin. January 1993.

National Institute of Standards and Technology. Digital Signature Standard. Federal Information Processing Standard Publication 186. May 1994.

National Institute of Standards and Technology. Escrowed Encryption Standard. Federal Information Processing Standard Publication 185. 1994.

National Institute of Standards and Technology. Key Management Using ANSI X9.17. Federal Information Processing Standard Publication 171. April 27, 1992.

National Institute of Standards and Technology. Secure Hash Standard. Federal Information Processing Standard Publication 180. May 11, 1993.

National Institute of Standards and Technology. Security Requirements for Cryptographic Modules. Federal Information Processing Standard Publication 140-1. January 11, 1994.

Rivest, R., A. Shamir, and L. Adleman. "A Method for Obtaining Digital Signatures and Public-Key Cryptosystems." Communications of the ACM. Vol. 21, No. 2, 1978. pp. 120-126.

Saltman, Roy G., ed. Good Security Practices for Electronic Commerce, Including Electronic Data Interchange. Special Publication 800-9. Gaithersburg, MD: National Institute of Standards and Technology, December 1993.

Schneier, B. "A Taxonomy of Encryption Algorithms." Computer Security Journal. 9(1), 1993. pp. 39-60.

Schneier, B. "Four Crypto Standards." Infosecurity News. 4(2), 1993. pp. 38-39.

Schneier, B. Applied Cryptography: Protocols, Algorithms, and Source Code in C. New York, NY: John Wiley & Sons, Inc., 1994.

U.S. Congress, Office of Technology Assessment. "Security Safeguards and Practices." Defending Secrets, Sharing Data: New Locks and Keys for Electronic Information. Washington, DC, 1987. pp. 54-72.

V. EXAMPLE

Chapter 20

ASSESSING AND MITIGATING THE RISKS TO A HYPOTHETICAL COMPUTER SYSTEM

This chapter illustrates how a hypothetical government agency (HGA) deals with computer security issues in its operating environment. (While this chapter draws upon many actual systems, details and characteristics were changed and merged. Although the chapter is arranged around an agency, the case study could also apply to a large division or office within an agency.) It follows the evolution of HGA's initiation of an assessment of the threats to its computer security all the way through to HGA's recommendations for mitigating those risks. The example can be used to help understand how security issues are examined, how some potential solutions are analyzed, how their costs and benefits are weighed, and ultimately how management accepts responsibility for risks. In the real world, many solutions exist for computer security problems. No single solution can solve similar security problems in all environments. Likewise, the solutions presented in this example may not be appropriate for all environments.

This case study is provided for illustrative purposes only, and should not be construed as guidance or specific recommendations for solving specific security issues.
Because a comprehensive example attempting to illustrate all handbook topics would be inordinately long, this example necessarily simplifies the issues presented and omits many details. For instance, to highlight the similarities and differences among controls in the different processing environments, it addresses some of the major types of processing platforms linked together in a distributed system (personal computers, local-area networks, wide-area networks, and mainframes); it does not show how to secure these platforms.

This section also highlights the importance of management's acceptance of a particular level of risk; this will, of course, vary from organization to organization. It is management's prerogative to decide what level of risk is appropriate, given operating and budget environments and other applicable factors.

20.1 Initiating the Risk Assessment

HGA has information systems that comprise and are intertwined with several different kinds of assets valuable enough to merit protection. HGA's systems play a key role in transferring U.S. Government funds to individuals in the form of paychecks; hence, financial resources are among the assets associated with HGA's systems. The system components owned and operated by HGA are also assets, as are personnel information, contracting and procurement documents, draft regulations, internal correspondence, and a variety of other day-to-day business documents, memos, and reports. HGA's assets include intangible elements as well, such as the reputation of the agency and the confidence of its employees that personal information will be handled properly and that wages will be paid on time.

A recent change in the directorship of HGA has brought in a new management team. Among the new Chief Information Officer's first actions was appointing a Computer Security Program Manager, who immediately initiated a comprehensive risk analysis to assess the soundness of HGA's computer security program in protecting the agency's assets and its compliance with federal directives. This analysis drew upon prior risk assessments, threat studies, and applicable internal control reports. The Computer Security Program Manager also established a timetable for periodic reassessments.

Since the wide-area network and mainframe used by HGA are owned and operated by other organizations, they were not treated in the risk assessment as HGA's assets. And although HGA's personnel, buildings, and facilities are essential assets, the Computer Security Program Manager considered them to be outside the scope of the risk analysis.

After examining HGA's computer system, the risk assessment team identified specific threats to HGA's assets, reviewed HGA's and national safeguards against those threats, identified the vulnerabilities of those policies, and recommended specific actions for mitigating the remaining risks to HGA's computer security. The following sections provide highlights from the risk assessment. The assessment addressed many other issues at the programmatic and system levels; however, this chapter focuses on security issues related to the time and attendance application. (Other issues are discussed in Chapter 6.)

20.2 HGA's Computer System

HGA relies on the distributed computer systems and networks shown in Figure 20.1. They consist of a collection of components, some of which are systems in their own right.
Some belong to HGA, but others are owned and operated by other organizations. This section describes these components, their role in the overall distributed system architecture, and how they are used by HGA.

20.2.1 System Architecture

Most of HGA's staff (a mix of clerical, technical, and managerial staff) are provided with personal computers (PCs) located in their offices. Each PC includes hard-disk and floppy-disk drives.

The PCs are connected to a local area network (LAN) so that users can exchange and share information. The central component of the LAN is a LAN server, a more powerful computer that acts as an intermediary between PCs on the network and provides a large volume of disk storage for shared information, including shared application programs. The server provides logical access controls on potentially sharable information via elementary access control lists. These access controls can be used to limit user access to various files and programs stored on the server. Some programs stored on the server can be retrieved via the LAN and executed on a PC; others can only be executed on the server.

[Figure 20.1, a diagram of HGA's distributed computer systems and networks, is not reproduced here.]

To initiate a session on the network or execute programs on the server, users at a PC must log into the server and provide a user identifier and password known to the server. Then they may use files to which they have access.

One of the applications supported by the server is electronic mail (e-mail), which can be used by all PC users. Other programs that run on the server can only be executed by a limited set of PC users.

Several printers, distributed throughout HGA's building complex, are connected to the LAN. Users at PCs may direct printouts to whichever printer is most convenient for their use.

Since HGA must frequently communicate with industry, the LAN also provides a connection to the Internet via a router. The router is a network interface device that translates between the protocols and addresses associated with the LAN and the Internet. The router also performs network packet filtering, a form of network access control, and has recently been configured to disallow non-e-mail traffic (e.g., file transfer, remote log-in) between LAN and Internet computers. (A simplified sketch of such filtering follows the device list below.)

The LAN server also has connections to several other devices.

A modem pool is provided so that HGA's employees on travel can "dial up" via the public switched (telephone) network and read or send e-mail. To initiate a dial-up session, a user must successfully log in. During dial-up sessions, the LAN server provides access only to e-mail facilities; no other functions can be invoked.

A special console is provided for the server administrators who configure the server, establish and delete user accounts, and have other special privileges needed for administrative and maintenance functions. These functions can only be invoked from the administrator console; that is, they cannot be invoked from a PC on the network or from a dial-up session.

A connection to a government agency X.25-based wide-area network (WAN) is provided so that information can be transferred to or from other agency systems. One of the other hosts on the WAN is a large multiagency mainframe system. This mainframe is used to collect and process information from a large number of agencies while providing a range of access controls.
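As promised above, here is a minimal Python sketch of the kind of packet filtering rule HGA's router applies: only e-mail traffic (SMTP, conventionally carried on TCP port 25) is allowed between LAN and Internet addresses. The rule set and packet fields are hypothetical simplifications of what a real router would use.

    # Hypothetical simplified packets: (source_net, dest_net, tcp_port).
    SMTP_PORT = 25   # conventional port for Internet e-mail transfer

    def router_allows(source_net: str, dest_net: str, port: int) -> bool:
        """Disallow all non-e-mail traffic between the LAN and the Internet."""
        crosses_boundary = {source_net, dest_net} == {"LAN", "Internet"}
        if crosses_boundary:
            return port == SMTP_PORT
        return True   # traffic within one network is not the router's concern

    print(router_allows("Internet", "LAN", 25))   # True: e-mail may pass
    print(router_allows("Internet", "LAN", 21))   # False: file transfer blocked
    print(router_allows("Internet", "LAN", 23))   # False: remote log-in blocked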
20.2.2 System Operational Authority/Ownership

The system components contained within the large dashed rectangle shown in Figure 20.1 are managed and operated by an organization within HGA known as the Computer Operations Group (COG). This group includes the PCs, LAN, server, console, printers, modem pool, and router. The WAN is owned and operated by a large commercial telecommunications company that provides WAN services under a government contract. The mainframe is owned and operated by a federal agency that acts as a service provider for HGA and other agencies connected to the WAN.

20.2.3 System Applications

PCs on HGA's LAN are used for word processing, data manipulation, and other common applications, including spreadsheet and project management tools. Many of these tasks are concerned with data that are sensitive with respect to confidentiality or integrity. Some of these documents and data also need to be available in a timely manner.

The mainframe also provides storage and retrieval services for other databases belonging to individual agencies. For example, several agencies, including HGA, store their personnel databases on the mainframe; these databases contain dates of service, leave balances, salary and W-2 information, and so forth.

In addition to their time and attendance application, HGA's PCs and the LAN server are used to manipulate other kinds of information that may be sensitive with respect to confidentiality or integrity, including personnel-related correspondence and draft contracting documents.

20.3 Threats to HGA's Assets

Different assets of HGA are subject to different kinds of threats. Some threats are considered less likely than others, and the potential impact of different threats may vary greatly. The likelihood of threats is generally difficult to estimate accurately. Both HGA and the risk assessment's authors have attempted, to the extent possible, to base these estimates on historical data, but have also tried to anticipate new trends stimulated by emerging technologies (e.g., external networks).

20.3.1 Payroll Fraud

As with most large organizations that control financial assets, attempts at fraud and embezzlement are likely to occur. Historically, attempts at payroll fraud have almost always come from within HGA or the other agencies that operate systems on which HGA depends. Although HGA has thwarted many of these attempts, and some have involved relatively small sums of money, it considers preventing financial fraud to be a critical computer security priority, particularly in light of the potential financial losses and the risks of damage to its reputation with Congress, the public, and other federal agencies.

Attempts to defraud HGA have included the following:

Submitting fraudulent time sheets for hours or days not worked, or for pay periods following termination or transfer of employment. The former may take the form of overreporting compensatory or overtime hours worked, or underreporting vacation or sick leave taken.
Alternatively, attempts have been made to modify time sheet data after they have been entered and approved for submission to payroll.

Falsifying or modifying dates or data on which one's "years of service" computations are based, thereby becoming eligible for retirement earlier than allowed, or increasing one's pension amount.

Creating employee records and time sheets for fictitious personnel, and attempting to obtain their paychecks, particularly after arranging for direct deposit.

20.3.2 Payroll Errors

Of greater likelihood, but of perhaps lesser potential impact on HGA, are errors in the entry of time and attendance data; failure to enter information describing new employees, terminations, and transfers in a timely manner; accidental corruption or loss of time and attendance data; or errors in interagency coordination and processing of personnel transfers.

Errors of these kinds can cause financial difficulties for employees and accounting problems for HGA. If an employee's vacation or sick leave balance erroneously became negative during the last pay period of the year, the employee's last paycheck would be automatically reduced. An individual who transfers between HGA and another agency may risk receiving duplicate paychecks or no paychecks for the pay periods immediately following the transfer. Errors of this sort that occur near the end of the year can lead to errors in W-2 forms and subsequent difficulties with the tax collection agencies.

20.3.3 Interruption of Operations

HGA's building facilities and physical plant are several decades old and are frequently under repair or renovation. As a result, power, air conditioning, and LAN or WAN connectivity for the server are typically interrupted several times a year for periods of up to one work day. For example, on several occasions, construction workers have inadvertently severed power or network cables. Fires, floods, storms, and other natural disasters can also interrupt computer operations, as can equipment malfunctions.

Another threat of small likelihood, but significant potential impact, is that of a malicious or disgruntled employee or outsider seeking to disrupt time-critical processing (e.g., payroll) by deleting necessary inputs or system accounts, misconfiguring access controls, planting computer viruses, or stealing or sabotaging computers or related equipment. Such interruptions, depending upon when they occur, can prevent time and attendance data from getting processed and transferred to the mainframe before the payroll processing deadline.

20.3.4 Disclosure or Brokerage of Information

Other kinds of threats may be stimulated by the growing market for information about an organization's employees or internal activities. Individuals who have legitimate work-related reasons for access to the master employee database may attempt to disclose such information to other employees or contractors or to sell it to private investigators, employment recruiters, the press, or other organizations. HGA considers such threats to be moderately likely and of low to high potential impact, depending on the type of information involved.

20.3.5 Network-Related Threats

Most of the human threats of concern to HGA originate from insiders. Nevertheless, HGA also recognizes the need to protect its assets from outsiders.
Such attacks may serve many different purposes and pose a broad spectrum of risks, including unauthorized disclosure or modification of information, unauthorized use of services and assets, or unauthorized denial of services.

As shown in Figure 20.1, HGA's systems are connected to three external networks: (1) the Internet, (2) the Interagency WAN, and (3) the public-switched (telephone) network. Although these networks are a source of security risks, connectivity with them is essential to HGA's mission and to the productivity of its employees; connectivity cannot be terminated simply because of security risks.

In each of the past few years before establishing its current set of network safeguards, HGA had detected several attempts by outsiders to penetrate its systems. Most, but not all, of these have come from the Internet, and those that succeeded did so by learning or guessing user account passwords. In two cases, the attacker deleted or corrupted significant amounts of data, most of which were later restored from backup files. In most cases, HGA could detect no ill effects of the attack, but concluded that the attacker may have browsed through some files. HGA also conceded that its systems did not have audit logging capabilities sufficient to track an attacker's activities. Hence, for most of these attacks, HGA could not accurately gauge the extent of penetration.

In one case, an attacker made use of a bug in an e-mail utility and succeeded in acquiring System Administrator privileges on the server, a significant breach. HGA found no evidence that the attacker attempted to exploit these privileges before being discovered two days later. When the attack was detected, COG immediately contacted HGA's Incident Handling Team, and was told that a bug fix had been distributed by the server vendor several months earlier. To its embarrassment, COG discovered that it had already received the fix, which it then promptly installed. It now believes that no subsequent attacks of the same nature have succeeded.

Although HGA has no evidence that it has been significantly harmed to date by attacks via external networks, it believes that these attacks have great potential to inflict damage. HGA's management considers itself lucky that such attacks have not harmed HGA's reputation and the confidence of the citizens it serves. It also believes the likelihood of such attacks via external networks will increase in the future.

20.3.6 Other Threats

HGA's systems are also exposed to several other threats that, for reasons of space, cannot be fully enumerated here. Examples of threats and HGA's assessment of their probabilities and impacts include those listed in Table 20.1.

Table 20.1: Examples of Threats to HGA Systems

  Potential Threat                                               Probability   Impact
  Accidental loss/release of disclosure-sensitive information    Medium        Low/Medium
  Accidental destruction of information                          High          Medium
  Loss of information due to virus contamination                 Medium        Medium
  Misuse of system resources                                     Low           Low
  Theft                                                          High          Medium
  Unauthorized access to telecommunications resources*           Medium        Medium
  Natural disaster                                               Low           High

  * HGA operates a PBX system, which may be vulnerable to (1) hacker disruptions of PBX availability and, consequently, agency operations; (2) unauthorized access to outgoing phone lines for long-distance services; (3) unauthorized access to stored voice-mail messages; and (4) surreptitious access to otherwise private conversations/data transmissions.

20.4 Current Security Measures

HGA has numerous policies and procedures for protecting its assets against the above threats. These are articulated in HGA's Computer Security Manual, which implements and synthesizes the requirements of many federal directives, such as Appendix III to OMB Circular A-130, the Computer Security Act of 1987, and the Privacy Act.
The manual also includes policies for automated financial systems, such as those based on OMB Circulars A-123 and A-127, as well as the Federal Managers' Financial Integrity Act.

Several examples of those policies follow, as they apply generally to the use and administration of HGA's computer system and specifically to security issues related to time and attendance, payroll, and continuity of operations.

20.4.1 General Use and Administration of HGA's Computer System

HGA's Computer Operations Group (COG) is responsible for controlling, administering, and maintaining the computer resources owned and operated by HGA. These functions are depicted in Figure 20.1, enclosed in the large dashed rectangle. Only individuals holding the job title System Administrator are authorized to establish log-in IDs and passwords on multiuser HGA systems (e.g., the LAN server). Only HGA's employees and contract personnel may use the system, and only after receiving written authorization from the department supervisor (or, in the case of contractors, the contracting officer) to whom these individuals report.

COG issues copies of all relevant security policies and procedures to new users. Before activating a system account for new users, COG requires that they (1) attend a security awareness and training course or complete an interactive computer-aided-instruction training session and (2) sign an acknowledgment form indicating that they understand their security responsibilities.

Authorized users are assigned a secret log-in ID and password, which they must not share with anyone else. They are expected to comply with all of HGA's password selection and security procedures (e.g., periodically changing passwords). Users who fail to do so are subject to a range of penalties.

Users creating data that are sensitive with respect to disclosure or modification are expected to make effective use of the automated access control mechanisms available on HGA computers to reduce the risk of exposure to unauthorized individuals. (Appropriate training and education are in place to help users do this.) In general, access to disclosure-sensitive information is to be granted only to individuals whose jobs require it.

20.4.2 Protection Against Payroll Fraud and Errors: Time and Attendance Application

The time and attendance application plays a major role in protecting against payroll fraud and errors.
20.4.2 Protection Against Payroll Fraud and Errors: Time and Attendance Application

The time and attendance application plays a major role in protecting against payroll fraud and errors. Since the time and attendance application is a component of a larger automated payroll process, many of its functional and security requirements have been derived from both governmentwide and HGA-specific policies related to payroll and leave. For example, HGA must protect personal information in accordance with the Privacy Act. Depending on the specific type of information, it should normally be viewable only by the individual concerned, the individual's supervisors, and personnel and payroll department employees. Such information should also be timely and accurate.

Each week, employees must sign and submit a time sheet that identifies the number of hours they have worked and the amount of leave they have taken. The Time and Attendance Clerk enters the data for a given group of employees and runs an application on the LAN server to verify the data's validity and to ensure that only authorized users with access to the Time and Attendance Clerk's functions can enter time and attendance data. The application performs these security checks by using the LAN server's access control and identification and authentication (I&A) mechanisms. The application compares the data with a limited database of employee information to detect incorrect employee identifiers, implausible numbers of hours worked, and so forth. After correcting any detected errors, the clerk runs another application that formats the time and attendance data into a report, flagging exception/out-of-bound conditions (e.g., negative leave balances).

Department supervisors are responsible for reviewing the correctness of the time sheets of the employees under their supervision and indicating their approval by initialing the time sheets. If they detect significant irregularities or indications of fraud in such data, they must report their findings to the Payroll Office before submitting the time sheets for processing. In keeping with the principle of separation of duty, all data on time sheets, and any corrections to them that may affect the pay, leave, retirement, or other benefits of an individual, must be reviewed for validity by at least two authorized individuals (other than the affected individual).

Protection Against Unauthorized Execution

Only users with access to Time and Attendance Supervisor functions may approve and submit time and attendance data — or subsequent corrections thereof — to the mainframe. Supervisors may not approve their own time and attendance data.

Only the System Administrator has been granted access to assign a special access control privilege to server programs. As a result, the server's operating system is designed to prevent a bogus time and attendance application created by any other user from communicating with the WAN and, hence, with the mainframe.

The time and attendance application is supposed to be configured so that the clerk and supervisor functions can only be carried out from specific PCs attached to the LAN and only during normal working hours. Administrators are not authorized to exercise functions of the time and attendance application apart from those concerned with configuring the accounts, passwords, and access permissions for clerks and supervisors. Administrators are expressly prohibited by policy from entering, modifying, or submitting time and attendance data via the time and attendance application or other mechanisms.[141]

[141] Technically, Systems Administrators may still have the ability to do so. This highlights the importance of adequate managerial reviews, auditing, and personnel background checks.
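The division of function between clerks, supervisors, and administrators amounts to role-based restriction of application functions. A hedged sketch follows; the role and function names are invented for illustration and do not describe any actual product HGA would use.

    # Each role is limited to an explicit set of time and attendance functions.
    ALLOWED_FUNCTIONS = {
        "clerk":      {"enter_data", "correct_data"},
        "supervisor": {"review_data", "approve_data"},
        # Administrators manage accounts and permissions only; by policy they
        # may never enter, modify, or submit time and attendance data.
        "administrator": {"configure_accounts", "set_passwords", "set_permissions"},
    }

    def authorize(role: str, function: str) -> None:
        if function not in ALLOWED_FUNCTIONS.get(role, set()):
            raise PermissionError(f"role {role!r} may not invoke {function!r}")

    authorize("clerk", "enter_data")              # permitted
    try:
        authorize("administrator", "enter_data")  # barred, per the policy above
    except PermissionError as err:
        print(err)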
Protection against unauthorized execution of the time and attendance application depends on I&A and access controls. While the time and attendance application is accessible from any PC, unlike most programs run by PC users, it does not execute directly on the PC's processor. Instead, it executes on the server, while the PC behaves as a terminal, relaying the user's keystrokes to the server and displaying text and graphics sent from the server. The reason for this approach is that common PC systems do not provide I&A and access controls and, therefore, cannot protect against unauthorized time and attendance program execution. Any individual who has access to the PC could run any program stored there.

Another possible approach is for the time and attendance program to perform I&A and access control on its own by requesting and validating a password before beginning each time and attendance session. This approach, however, can be defeated easily by a moderately skilled programming attack, and was judged inadequate by HGA during the application's early design phase.

Recall that the server is a more powerful computer equipped with a multiuser operating system that includes password-based I&A and access controls. Designing the time and attendance application program so that it executes on the server under the control of the server's operating system provides a more effective safeguard against unauthorized execution than executing it on the user's PC.

Protection Against Payroll Errors

The frequency of data entry errors is reduced by having Time and Attendance Clerks enter each time sheet into the time and attendance application twice. If the two copies are identical, both are considered error free, and the record is accepted for subsequent review and approval by a supervisor. If the copies are not identical, the discrepancies are displayed, and for each discrepancy, the clerk determines which copy is correct. The clerk then incorporates the corrections into one of the copies, which is then accepted for further processing. If the clerk makes the same data-entry error twice, the two copies will match, and one will be accepted as correct, even though it is erroneous. To reduce this risk, the time and attendance application could be configured to require that the two copies be entered by different clerks.

In addition, each department has one or more Time and Attendance Supervisors who are authorized to review these reports for accuracy and to approve them by running another server program that is part of the time and attendance application. The data are then subjected to a collection of "sanity checks" to detect entries whose values are outside expected ranges. Potential anomalies are displayed to the supervisor before approval is allowed; if errors are identified, the data are returned to a clerk for additional examination and correction.

When a supervisor approves the time and attendance data, this application logs into the interagency mainframe via the WAN and transfers the data to a payroll database on the mainframe. The mainframe later prints paychecks or, using a pool of modems that can send data over phone lines, transfers the funds electronically into employee-designated bank accounts. Withheld taxes and contributions are also transferred electronically in this manner.
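The double-entry check described under "Protection Against Payroll Errors" can be pictured with the following minimal sketch. The field names are hypothetical, and a real implementation would run on the server under its access controls.

    def find_discrepancies(copy1: dict, copy2: dict) -> list:
        # Return the fields on which the two independently keyed copies disagree.
        # Note: the same error keyed twice still matches, as the text cautions.
        return [field for field in copy1 if copy1[field] != copy2.get(field)]

    first_entry  = {"employee_id": "1234", "hours_worked": 40, "leave_taken": 8}
    second_entry = {"employee_id": "1234", "hours_worked": 44, "leave_taken": 8}

    discrepancies = find_discrepancies(first_entry, second_entry)
    if not discrepancies:
        print("copies match; record accepted for supervisory review")
    else:
        for field in discrepancies:
            print(f"discrepancy in {field}: "
                  f"{first_entry[field]} vs {second_entry[field]}")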
The Director of Personnel is responsible for ensuring that forms describing significant payroll-related personnel actions are provided to the Payroll Office at least one week before the payroll processing date for the first affected pay period. These actions include hiring, terminations, transfers, leaves of absence and returns from such, and pay raises.

The Manager of the Payroll Office is responsible for establishing and maintaining controls adequate to ensure that the amounts of pay, leave, and other benefits reported on pay stubs, recorded in permanent records, and distributed electronically are accurate and consistent with time and attendance data and with other information provided by the Personnel Department. In particular, paychecks must never be provided to anyone who is not a bona fide, active-status employee of HGA. Moreover, the pay of any employee who terminates employment, who transfers, or who goes on leave without pay must be suspended as of the effective date of such action; that is, extra paychecks or excess pay must not be disbursed.

Protection Against Accidental Corruption or Loss of Payroll Data

The same mechanisms used to protect against fraudulent modification are used to protect against accidental corruption of time and attendance data — namely, the access-control features of the server and mainframe operating systems.

COG's nightly backups of the server's disks protect against loss of time and attendance data. To a limited extent, HGA also relies on mainframe administrative personnel to back up time and attendance data stored on the mainframe, even though HGA has no direct control over these individuals. As additional protection against loss of data at the mainframe, HGA retains copies of all time and attendance data on line on the server for at least one year, at which time the data are archived and kept for three years. The server's access controls for the on-line files are automatically set to read-only by the time and attendance application at the time of submission to the mainframe. The integrity of time and attendance data will be protected by digital signatures as they are implemented.

The WAN's communications protocols (e.g., error checking) also protect against loss of data during transmission from the server to the mainframe. In addition, the mainframe payroll application includes a program that is automatically run 24 hours before paychecks and pay stubs are printed. This program produces a report identifying agencies from whom time and attendance data for the current pay period were expected but not received. Payroll department staff are responsible for reviewing the reports and immediately notifying agencies that need to submit or resubmit time and attendance data. If time and attendance input or other related information is not available on a timely basis, pay, leave, and other benefits are temporarily calculated based on information estimated from prior pay periods.
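The pre-payroll report described above reduces to a set difference between the agencies expected to submit data and those that have. A sketch, with invented agency names:

    expected_agencies = {"HGA", "Agency-B", "Agency-C", "Agency-D"}
    received_from     = {"HGA", "Agency-C"}

    # Run 24 hours before paychecks are printed: flag agencies whose time and
    # attendance data for the current pay period have not arrived.
    for agency in sorted(expected_agencies - received_from):
        print(f"no time and attendance data from {agency}; notify to (re)submit")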
20.4.3 Protection Against Interruption of Operations

HGA's policies regarding continuity of operations are derived from requirements stated in OMB Circular A-130. HGA requires various organizations within it to develop contingency plans, test them annually, and establish appropriate administrative and operational procedures for supporting them. The plans must identify the facilities, equipment, supplies, procedures, and personnel needed to ensure reasonable continuity of operations under a broad range of adverse circumstances.

COG Contingency Planning

COG is responsible for developing and maintaining a contingency plan that sets forth the procedures and facilities to be used when physical plant failures, natural disasters, or major equipment malfunctions occur that are sufficient to disrupt the normal use of HGA's PCs, LAN, server, router, printers, and other associated equipment.

The plan prioritizes applications that rely on these resources, indicating those that should be suspended if available automated functions or capacities are temporarily degraded. COG personnel have identified system software and hardware components that are compatible with those used by two nearby agencies. HGA has signed an agreement with those agencies, whereby they have committed to reserving spare computational and storage capacity sufficient to support HGA's system-based operations for a few days during an emergency.

No communication devices or network interfaces may be connected to HGA's systems without the written approval of the COG Manager. The COG staff is responsible for installing all known security-related software patches in a timely manner and for maintaining spare or redundant PCs, servers, storage devices, and LAN interfaces to ensure that at least 100 people can simultaneously perform word processing tasks at all times.

To protect against accidental corruption or loss of data, COG personnel back up the LAN server's disks onto magnetic tape every night and transport the tapes weekly to a sister agency for storage. HGA's policies also stipulate that all PC users are responsible for backing up weekly any significant data stored on their PCs' local hard disks. For the past several years, COG has issued a yearly memorandum reminding PC users of this responsibility. COG also strongly encourages them to store significant data on the LAN server instead of on their PC's hard disk so that such data will be backed up automatically during COG's LAN server backups.

To prevent more limited computer equipment malfunctions from interrupting routine business operations, COG maintains an inventory of approximately ten fully equipped spare PCs, a spare LAN server, and several spare disk drives for the server. COG also keeps thousands of feet of LAN cable on hand. If a segment of the LAN cable that runs through the ceilings and walls of HGA's buildings fails or is accidentally severed, COG technicians will run temporary LAN cabling along the floors of hallways and offices, typically restoring service within a few hours, for as long as needed until the cable failure is located and repaired.

To protect against PC virus contamination, HGA authorizes only System Administrators approved by the COG Manager to install licensed, copyrighted PC software packages that appear on the COG-approved list; PC software applications are generally installed only on the server. (These stipulations are part of an HGA assurance strategy that relies on the quality of vendors' engineering practices to provide software that is adequately robust and trustworthy.) Only the COG Manager is authorized to add packages to the approved list. COG procedures also stipulate that every month System Administrators should run virus-detection and other security-configuration validation utilities on the server and, on a spot-check basis, on a number of PCs. If they find a virus, they must immediately notify the agency team that handles computer security incidents.
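The software-installation rule above is an allow-list check: an installation is acceptable only when the installer is a COG-approved System Administrator and the package appears on the approved list. A minimal sketch, with invented package names:

    APPROVED_PACKAGES = {"word-processor-3.1", "spreadsheet-2.0", "mail-utility-1.4"}

    def may_install(package: str, approved_admin: bool) -> bool:
        # Both conditions must hold; either failing blocks the installation.
        return approved_admin and package in APPROVED_PACKAGES

    print(may_install("spreadsheet-2.0", approved_admin=True))   # True
    print(may_install("shareware-game",  approved_admin=True))   # False: not listed
    print(may_install("spreadsheet-2.0", approved_admin=False))  # False: not authorized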
COG is also responsible for reviewing audit logs generated by the server, identifying audit records indicative of security violations, and reporting such indications to the Incident-Handling Team. The COG Manager assigns these duties to specific members of the staff and ensures that they are implemented as intended.

The COG Manager is responsible for assessing adverse circumstances and for providing recommendations to HGA's Director. Based on these and other sources of input, the Director will determine whether the circumstances are dire enough to merit activating the various sets of procedures called for in the contingency plan.

Division Contingency Planning

HGA's divisions also must develop and maintain their own contingency plans. The plans must identify critical business functions, the system resources and applications on which they depend, and the maximum acceptable periods of interruption that these functions can tolerate without significant reduction in HGA's ability to fulfill its mission. The head of each division is responsible for ensuring that the division's contingency plan and associated support activities are adequate.

For each major application used by multiple divisions, the chief of a single division must be designated as the application owner. The designated official (supported by his or her staff) is responsible for addressing that application in the contingency plan and for coordinating with the other divisions that use the application.

If a division relies exclusively on computer resources maintained by COG (e.g., the LAN), it need not duplicate COG's contingency plan, but is responsible for reviewing the adequacy of that plan. If COG's plan does not adequately address the division's needs, the division must communicate its concerns to the COG Director. In either situation, the division must make the criticality of its applications known to COG. If the division relies on computer resources or services that are not provided by COG, the division is responsible for (1) developing its own contingency plan or (2) ensuring that the contingency plans of other organizations (e.g., the WAN service provider) provide adequate protection against service disruptions.

20.4.4 Protection Against Disclosure or Brokerage of Information

HGA's protection against information disclosure is based on a need-to-know policy and on personnel hiring and screening practices. The need-to-know policy states that time and attendance information should be made accessible only to HGA employees and contractors whose assigned professional responsibilities require it. Such information must be protected against access by all other individuals, including other HGA employees. Appropriate hiring and screening practices can lessen the risk that an untrustworthy individual will be assigned such responsibilities.
The need-to-know policy is supported by a collection of physical, procedural, and automated safeguards, including the following:

- Time and attendance paper documents must be stored securely when not in use, particularly during evenings and on weekends. Approved storage containers include locked file cabinets and desk drawers to which only the owner has the keys. While storage in a container is preferable, it is also permissible to leave time and attendance documents on top of a desk or other exposed surface in a locked office (with the realization that the guard force has keys to the office). (This is a judgment left to local discretion.) Similar rules apply to disclosure-sensitive information stored on floppy disks and other removable magnetic media.

- Every HGA PC is equipped with a key lock that, when locked, disables the PC. When information is stored on a PC's local hard disk, the user to whom that PC was assigned is expected to (1) lock the PC at the conclusion of each work day and (2) lock the office in which the PC is located.

- The LAN server operating system's access controls provide extensive features for controlling access to files. These include group-oriented controls that allow teams of users to be assigned to named groups by the System Administrator. Group members are then allowed access to sensitive files not accessible to nonmembers. Each user can be assigned to several groups according to need to know. (The reliable functioning of these controls is assumed, perhaps incorrectly, by HGA.)

- All PC users undergo security awareness training when first provided accounts on the LAN server. Among other things, the training stresses the necessity of protecting passwords. It also instructs users to log off the server before going home at night or before leaving the PC unattended for periods exceeding an hour.

20.4.5 Protection Against Network-Related Threats

HGA's current set of external network safeguards has been in place for only a few months. The basic approach is to tightly restrict the kinds of external network interactions that can occur by funneling all traffic to and from external networks through two interfaces that filter out unauthorized kinds of interactions. As indicated in Figure 20.1, the two interfaces are the network router and the LAN server. The only kinds of interactions that these interfaces allow are (1) e-mail and (2) data transfers from the server to the mainframe controlled by a few special applications (e.g., the time and attendance application).

Figure 20.1 shows that the network router is the only direct interface between the LAN and the Internet. The router is a dedicated, special-purpose computer that translates between the protocols and addresses associated with the LAN and the Internet. Internet protocols, unlike those used on the WAN, specify that packets of information coming from or going to the Internet must carry an indicator of the kind of service that is being requested or used to process the information. This makes it possible for the router to distinguish e-mail packets from other kinds of packets—for example, those associated with a remote log-in request.[142] The router has been configured by COG to discard all packets coming from or going to the Internet, except those associated with e-mail. COG personnel believe that the router effectively eliminates Internet-based attacks on HGA user accounts because it disallows all remote log-in sessions, even those accompanied by a legitimate password.

[142] Although not discussed in this example, recognize that technical "spoofing" can occur.
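In modern terms the router is acting as a packet filter with a default-deny rule. The sketch below is a loose illustration of that logic, not of any real router's configuration language; the packet representation is invented.

    EMAIL_SERVICE = "smtp"   # the only service indicator the router forwards

    def route_packet(packet: dict) -> str:
        # Default deny: discard everything except e-mail traffic.
        return "forward" if packet.get("service") == EMAIL_SERVICE else "discard"

    print(route_packet({"service": "smtp",   "src": "internet"}))   # forward
    print(route_packet({"service": "telnet", "src": "internet"}))   # discard (remote log-in)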
The LAN server enforces a similar type of restriction for dial-in access via the public-switched network. The access controls provided by the server's operating system have been configured so that during dial-in sessions, only the e-mail utility can be executed. (HGA policy, enforced by periodic checks, prohibits the installation of modems on PCs, so that access must be through the LAN server.) In addition, the server's access controls have been configured so that its WAN interface device is accessible only to programs that possess a special access-control privilege. Only the System Administrator can assign this privilege to server programs, and only a handful of special-purpose applications, like the time and attendance application, have been assigned this privilege.

20.4.6 Protection Against Risks from Non-HGA Computer Systems

HGA relies on systems and components that it cannot control directly because they are owned by other organizations. HGA has developed a policy to avoid undue risk in such situations. The policy states that system components controlled and operated by organizations other than HGA may not be used to process, store, or transmit HGA information without obtaining explicit permission from the application owner and the COG Manager. Permission to use such system components may not be granted without a written commitment from the controlling organization that HGA's information will be safeguarded commensurate with its value, as designated by HGA. This policy is somewhat mitigated by the fact that HGA has developed an issue-specific policy on the use of the Internet, which allows for its use for e-mail with outside organizations and for access to other resources (but not for transmission of HGA's proprietary data).

20.5 Vulnerabilities Reported by the Risk Assessment Team

The risk assessment team found that many of the risks to which HGA is exposed stem from (1) the failure of individuals to comply with established policies and procedures or (2) the use of automated mechanisms whose assurance is questionable because of the ways they have been developed, tested, implemented, used, or maintained. The team also identified specific vulnerabilities in HGA's policies and procedures for protecting against payroll fraud and errors, interruption of operations, disclosure and brokering of confidential information, and unauthorized access to data by outsiders.

20.5.1 Vulnerabilities Related to Payroll Fraud

Falsified Time Sheets

The primary safeguards against falsified time sheets are review and approval by supervisory personnel, who are not permitted to approve their own time and attendance data. The risk assessment has concluded that, while imperfect, these safeguards are adequate. The related requirement that a clerk and a supervisor must cooperate closely in creating time and attendance data and submitting the data to the mainframe also safeguards against other kinds of illicit manipulation of time and attendance data by clerks or supervisors acting independently.
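The two constraints just noted (no self-approval, and clerk/supervisor cooperation before submission) are simple to state as checks. A hypothetical sketch, with invented names:

    from typing import Optional

    def may_approve(supervisor: str, time_sheet_owner: str) -> bool:
        # Supervisors may not approve their own time and attendance data.
        return supervisor != time_sheet_owner

    def may_submit(entered_by: str, approved_by: Optional[str]) -> bool:
        # Submission needs both a clerk entry and an independent approval.
        return approved_by is not None and approved_by != entered_by

    print(may_approve("smith", "smith"))      # False: self-approval barred
    print(may_submit("clerk1", "smith"))      # True
    print(may_submit("clerk1", None))         # False: no supervisory approval yet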
Unauthorized Access

When a PC user enters a password to the server during I&A, the password is sent to the server by broadcasting it over the LAN "in the clear." This allows the password to be intercepted easily by any other PC connected to the LAN; in fact, so-called "password sniffer" programs that capture passwords in this way are widely available. Similarly, a malicious program planted on a PC could intercept passwords before transmitting them to the server. An unauthorized individual who obtained the captured passwords could then run the time and attendance application in place of a clerk or supervisor. Users might also store passwords in a log-on script file.

Bogus Time and Attendance Applications

The server's access controls are probably adequate for protection against bogus time and attendance applications that run on the server. However, the server's operating system and access controls have been in widespread use for only a few years and contain a number of security-related bugs. Moreover, the server's access controls are ineffective if not properly configured, and the administration of the server's security features in the past has been notably lax.

Unauthorized Modification of Time and Attendance Data

Protection against unauthorized modification of time and attendance data requires a variety of safeguards because each system component on which the data are stored or transmitted is a potential source of vulnerabilities.

First, the time and attendance data are entered on the server by a clerk. On occasion, the clerk may begin data entry late in the afternoon and complete it the following morning, storing the data in a temporary file between the two sessions. One way to avoid unauthorized modification is to store the data on a diskette and lock it up overnight. After being entered, the data are stored in another temporary file until reviewed and approved by a supervisor. These files, now stored on the system, must be protected against tampering. As before, the server's access controls, if reliable and properly configured, can provide such protection (as can digital signatures, as discussed later) in conjunction with proper auditing.

Second, when the supervisor approves a batch of time and attendance data, the time and attendance application sends the data over the WAN to the mainframe. The WAN is a collection of communications equipment and special-purpose computers called "switches" that act as relays, routing information through the network from source to destination. Each switch is a potential site at which the time and attendance data may be fraudulently modified. For example, an HGA PC user might be able to intercept time and attendance data and modify the data en route to the payroll application on the mainframe. Opportunities include tampering with incomplete time and attendance input files while they are stored on the server, interception and tampering during WAN transit, and tampering on arrival at the mainframe prior to processing by the payroll application.

Third, on arrival at the mainframe, the time and attendance data are held in a temporary file on the mainframe until the payroll application is run. Consequently, the mainframe's I&A and access controls must provide a critical element of protection against unauthorized modification of the data.
According to the risk assessment, the server's access controls, with the prior caveats, probably provide acceptable protection against unauthorized modification of data stored on the server. The assessment concluded that a WAN-based attack involving collusion between an employee of HGA and an employee of the WAN service provider, although unlikely, should not be dismissed entirely, especially since HGA has only cursory information about the service provider's personnel security practices and no contractual authority over how it operates the WAN.

The greatest source of vulnerabilities, however, is the mainframe. Although its operating system's access controls are mature and powerful, it uses password-based I&A. This is of particular concern because it serves a large number of federal agencies via WAN connections. A number of these agencies are known to have poor security programs. As a result, one such agency's systems could be penetrated (e.g., from the Internet) and then used in attacks on the mainframe via the WAN. In fact, time and attendance data awaiting processing on the mainframe would probably not be as attractive a target to an attacker as other kinds of data or, indeed, as disabling the system, rendering it unavailable. For example, an attacker might be able to modify the employee database so that it disburses paychecks or pension checks to fictitious employees. Disclosure-sensitive law enforcement databases might also be attractive targets.

The access control on the mainframe is strong and provides good protection against intruders breaking into a second application after they have broken into a first. However, previous audits have shown that the difficulties of system administration may present some opportunities for intruders to defeat access controls.

20.5.2 Vulnerabilities Related to Payroll Errors

HGA's management has established procedures for ensuring the timely submission and interagency coordination of paperwork associated with personnel status changes. However, an unacceptably large number of troublesome payroll errors during the past several years has been traced to the late submission of personnel paperwork. The risk assessment documented the adequacy of HGA's safeguards, but criticized the managers for not providing sufficient incentives for compliance.

20.5.3 Vulnerabilities Related to Continuity of Operations

COG Contingency Planning

The risk assessment commended HGA for many aspects of COG's contingency plan, but pointed out that many COG personnel were completely unaware of the responsibilities the plan assigned to them. The assessment also noted that although HGA's policies require annual testing of contingency plans, the capability to resume HGA's computer-processing activities at another cooperating agency has never been verified and may turn out to be illusory.

Division Contingency Planning

The risk assessment reviewed a number of the application-oriented contingency plans developed by HGA's divisions (including plans related to time and attendance). Most of the plans were cursory and attempted to delegate nearly all contingency planning responsibility to COG.
The assessment criticized several of these plans for failing to address potential disruptions caused by lack of access to (1) computer resources not managed by COG and (2) nonsystem resources, such as buildings, phones, and other facilities. In particular, the contingency plan encompassing the time and attendance application was criticized for not addressing disruptions caused by WAN and mainframe outages.

Virus Prevention

The risk assessment found HGA's virus-prevention policy and procedures to be sound, but noted that there was little evidence that they were being followed. In particular, no COG personnel interviewed had ever run a virus scanner on a PC on a routine basis, though several had run them during publicized virus scares. The assessment cited this as a significant risk item.

Accidental Corruption and Loss of Data

The risk assessment concluded that HGA's safeguards against accidental corruption and loss of time and attendance data were adequate, but that safeguards for some other kinds of data were not. The assessment included an informal audit of a dozen randomly chosen PCs and PC users in the agency. It concluded that many PC users store significant data on their PCs' hard disks but do not back them up. Based on anecdotes, the assessment's authors stated that there appear to have been many past incidents of loss of information stored on PC hard disks and predicted that such losses would continue.

20.5.4 Vulnerabilities Related to Information Disclosure/Brokerage

HGA takes a conservative approach toward protecting information about its employees. Since information brokerage is more likely to be a threat to large collections of data, HGA's risk assessment focused primarily, but not exclusively, on protecting the mainframe.

The risk assessment concluded that significant, avoidable information brokering vulnerabilities were present—particularly due to HGA's lack of compliance with its own policies and procedures. Time and attendance documents were typically not stored securely after hours, and few PCs containing time and attendance information were routinely locked. Worse yet, few were routinely powered down, and many were left logged into the LAN server overnight. These practices make it easy for an HGA employee wandering the halls after hours to browse or copy time and attendance information on another employee's desk, PC hard disk, or LAN server directories.

The risk assessment pointed out that information sent to or retrieved from the server is subject to eavesdropping by other PCs on the LAN. The LAN hardware transmits information by broadcasting it to all connection points on the LAN cable. Moreover, information sent to or retrieved from the server is transmitted in the clear—that is, without encryption. Given the widespread availability of LAN "sniffer" programs, LAN eavesdropping is trivial for a prospective information broker and, hence, is likely to occur.

Last, the assessment noted that HGA's employee master database is stored on the mainframe, where it might be a target for information brokering by employees of the agency that owns the mainframe.
It might also be a target for information brokering, fraudulent modification, or other illicit acts by any outsider who penetrates the mainframe via another host on the WAN.

20.5.5 Network-Related Vulnerabilities

The risk assessment concurred with the general approach taken by HGA, but identified several vulnerabilities. It reiterated previous concerns about the lack of assurance associated with the server's access controls and pointed out that these play a critical role in HGA's approach. The assessment noted that the e-mail utility allows a user to include a copy of any otherwise accessible file in an outgoing mail message. If an attacker dialed in to the server and succeeded in logging in as an HGA employee, the attacker could use the mail utility to export copies of all the files accessible to that employee. In fact, copies could be mailed to any host on the Internet.

The assessment also noted that the WAN service provider may rely on microwave stations or satellites as relay points, thereby exposing HGA's information to eavesdropping. Similarly, any information, including passwords and mail messages, transmitted during a dial-in session is subject to eavesdropping.

20.6 Recommendations for Mitigating the Identified Vulnerabilities

The discussions in the following subsections were chosen to illustrate a broad sampling[143] of handbook topics. Risk management and security program management themes are integral throughout, with particular emphasis given to the selection of risk-driven safeguards.

[143] Some of the controls, such as auditing and access controls, play an important role in many areas. The limited nature of this example, however, prevents a broader discussion.

20.6.1 Mitigating Payroll Fraud Vulnerabilities

To remove the vulnerabilities related to payroll fraud, the risk assessment team recommended[144] the use of stronger authentication mechanisms based on smart tokens to generate one-time passwords that cannot be used by an interloper in subsequent sessions. Such mechanisms would make it very difficult for outsiders (e.g., from the Internet) who penetrate systems on the WAN to use them to attack the mainframe. The authors noted, however, that the mainframe serves many different agencies, and HGA has no authority over the way the mainframe is configured and operated. Thus, the costs and procedural difficulties of implementing such controls would be substantial. The assessment team also recommended improving the server's administrative procedures and the speed with which security-related bug fixes distributed by the vendor are installed on the server.

[144] Note that, for the sake of brevity, the process of evaluating the cost-effectiveness of various security controls is not specifically discussed.

After input from COG security specialists and application owners, HGA's managers accepted most of the risk assessment team's recommendations. They decided that since the residual risks from the falsification of time sheets were acceptably low, no changes in procedures were necessary. However, they judged the risks of payroll fraud due to the interceptability of LAN server passwords to be unacceptably high, and thus directed COG to investigate the costs and procedures associated with using one-time passwords for Time and Attendance Clerk and Supervisor sessions on the server. Other users performing less sensitive tasks on the LAN would continue to use password-based authentication.
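One-time password tokens of this kind typically derive each password from a shared secret and a counter, so that a captured password is useless in a later session. The sketch below uses the HOTP construction of RFC 4226 as a stand-in; that standard postdates this handbook and is shown only to make the idea concrete, not as HGA's actual mechanism.

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        # Counter-based one-time password: HMAC-SHA1 plus dynamic truncation.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    token_secret = b"secret-provisioned-into-the-smart-token"   # shared with server
    # Token and server advance the counter in step; each value is valid once.
    for counter in range(3):
        print(hotp(token_secret, counter))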
While the immaturity of the LAN server's access controls was judged a significant source of risk, COG was able to identify only one other PC LAN product that would be significantly better in this respect. Unfortunately, this product was considerably less friendly to users and application developers, and incompatible with other applications used by HGA. The negative impact of changing PC LAN products was judged too high for the potential incremental gain in security benefits. Consequently, HGA decided to accept the risks accompanying use of the current product, but directed COG to improve its monitoring of the server's access control configuration and its responsiveness to vendor security reports and bug fixes.

HGA concurred that the risks of fraud due to unauthorized modification of time and attendance data at, or in transit to, the mainframe should not be accepted unless no practical solutions could be identified. After discussions with the mainframe's owning agency, HGA concluded that the owning agency was unlikely to adopt the advanced authentication techniques advocated in the risk assessment. COG, however, proposed an alternative approach that did not require a major resource commitment on the part of the mainframe owner.

The alternative approach would employ digital signatures based on public key cryptographic techniques to detect unauthorized modification of time and attendance data. The data would be digitally signed by the supervisor using a private key prior to transmission to the mainframe. When the payroll application program was run on the mainframe, it would use the corresponding public key to validate the correspondence between the time and attendance data and the signature. Any modification of the data during transmission over the WAN or while in temporary storage at the mainframe would result in a mismatch between the signature and the data. If the payroll application detected a mismatch, it would reject the data; HGA personnel would then be notified and asked to review, sign, and send the data again. If the data and signature matched, the payroll application would process the time and attendance data normally.

HGA's decision to use advanced authentication for Time and Attendance Clerks and Supervisors can be combined with digital signatures by using smart tokens. Smart tokens are programmable devices, so they can be loaded with private keys and instructions for computing digital signatures without burdening the user. When a supervisor approves a batch of time and attendance data, the time and attendance application on the server would instruct the supervisor to insert the token in the token reader/writer device attached to the supervisor's PC. The application would then send a special "hash" (summary) of the time and attendance data to the token via the PC. The token would generate a digital signature using its embedded private key and then transfer the signature back to the server, again via the PC. The time and attendance application running on the server would append the signature to the data before sending the data to the mainframe and, ultimately, to the payroll application.
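A compressed sketch of this sign-then-verify flow is shown below, using RSA signatures from the third-party Python "cryptography" package as a modern stand-in (the handbook predates such libraries; key sizes and data formats are invented). The supervisor's private key signs the batch; the payroll application verifies with the public key and rejects any batch whose data and signature do not correspond.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # In practice the private key would live in the supervisor's smart token.
    supervisor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = supervisor_key.public_key()   # held by the payroll application

    batch = b"employee=1234;hours=40;leave=8"
    signature = supervisor_key.sign(batch, padding.PKCS1v15(), hashes.SHA256())

    tampered = b"employee=1234;hours=80;leave=8"   # modified in transit
    for data in (batch, tampered):
        try:
            public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
            print("signature matches: process the batch normally")
        except InvalidSignature:
            print("mismatch: reject; ask HGA to review, sign, and resend")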
Although this approach did not address the broader problems posed by the mainframe's I&A vulnerabilities, it does provide a reliable means of detecting tampering with time and attendance data. In addition, it protects against bogus time and attendance submissions from systems connected to the WAN, because individuals who lack a Time and Attendance Supervisor's smart token will be unable to generate valid signatures. (Note, however, that the use of digital signatures does require increased administration, particularly in the area of key management.) In summary, digital signatures mitigate risks from a number of different kinds of threats.

HGA's management concluded that digitally signing time and attendance data was a practical, cost-effective way of mitigating risks, and directed COG to pursue its implementation. (They also noted that it would be useful as the agency moved to the use of digital signatures in other applications.) This is an example of developing and providing a solution in an environment over which no single entity has overall authority.

20.6.2 Mitigating Payroll Error Vulnerabilities

After reviewing the risk assessment, HGA's management concluded that the agency's current safeguards against payroll errors and against accidental corruption and loss of time and attendance data were adequate. However, the managers also concurred with the risk assessment's conclusions about the necessity of establishing incentives for complying (and penalties for not complying) with these safeguards. They thus tasked the Director of Personnel to ensure greater compliance with paperwork-handling procedures and to provide quarterly compliance audit reports. They noted that the digital signature mechanism HGA plans to use for fraud protection can also provide protection against payroll errors due to accidental corruption.

20.6.3 Mitigating Vulnerabilities Related to the Continuity of Operations

The assessment recommended that COG institute a program of periodic internal training and awareness sessions for COG personnel having contingency plan responsibilities. The assessment urged that COG undertake a rehearsal during the next three months in which selected parts of the plan would be exercised. The rehearsal should include attempting to initiate some aspect of processing activities at one of the designated alternative sites. HGA's management agreed that additional contingency plan training was needed for COG personnel and committed itself to its first plan rehearsal within three months.

After a short investigation, HGA divisions owning applications that depend on the WAN concluded that WAN outages, although inconvenient, would not have a major impact on HGA. This is because the few time-sensitive applications that required WAN-based communication with the mainframe were originally designed to work with magnetic tape instead of the WAN, and could still operate in that mode; hence, courier-delivered magnetic tapes could be used as an alternative input medium in case of a WAN outage.
The divisions responsible for contingency planning for these applications agreed to incorporate into their contingency plans both descriptions of these procedures and other improvements.

With respect to mainframe outages, HGA determined that it could not easily make arrangements for a suitable alternative site. HGA also obtained and examined a copy of the mainframe facility's own contingency plan. After detailed study, including review by an outside consultant, HGA concluded that the plan had major deficiencies and posed significant risks because of HGA's reliance on it for payroll and other services. This was brought to the attention of the Director of HGA, who, in a formal memorandum to the head of the mainframe's owning agency, called for (1) a high-level interagency review of the plan by all agencies that rely on the mainframe, and (2) corrective action to remedy any deficiencies found.

HGA's management agreed to improve adherence to its virus-prevention procedures. It agreed (from the point of view of the entire agency) that information stored on PC hard disks is frequently lost. It estimated, however, that the labor hours lost as a result would amount to less than a person year—which HGA management does not consider to be unacceptable. After reviewing options for reducing this risk, HGA concluded that it would be cheaper to accept the associated loss than to commit significant resources in an attempt to avoid it. COG volunteered, however, to set up an automated program on the LAN server that e-mails backup reminders to all PC users once each quarter. In addition, COG agreed to provide regular backup services for about 5 percent of HGA's PCs; these will be chosen by HGA's management based on the information stored on their hard disks.

20.6.4 Mitigating Threats of Information Disclosure/Brokering

HGA concurred with the risk assessment's conclusions about its exposure to information-brokering risks, and adopted most of the associated recommendations.

The assessment recommended that HGA improve its security awareness training (e.g., via mandatory refresher courses) and that it institute some form of compliance audits. The training should be sure to stress the penalties for noncompliance. It also suggested installing "screen lock" software on PCs that automatically locks a PC after a specified period of idle time in which no keystrokes have been entered; unlocking the screen requires that the user enter a password or reboot the system.

The assessment recommended that HGA modify its information-handling policies so that employees would be required to store some kinds of disclosure-sensitive information only on PC local hard disks (or floppies), but not on the server. This would eliminate or reduce the risks of LAN eavesdropping and would avoid unnecessary reliance on the server's access-control features, which are of uncertain assurance. It was also recommended that an activity log be installed on the server (and regularly reviewed). The assessment noted, however, that this strategy conflicts with the desire to store most information on the server's disks so that it is backed up routinely by COG personnel. (This could be offset by assigning responsibility to someone other than the PC owner for making backup copies.) Since the security habits of HGA's PC users have generally been poor, the assessment also recommended the use of hard-disk encryption utilities to protect disclosure-sensitive information on unattended PCs from browsing by unauthorized individuals. Also, ways to encrypt information on the server's disks would be studied.
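As a rough illustration of what such an encryption utility does, the sketch below uses Fernet symmetric encryption from the third-party Python "cryptography" package as a modern stand-in for the PC utilities the assessment had in mind; in practice the key would be derived from a user passphrase rather than generated ad hoc.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # stand-in for a passphrase-derived key
    cipher = Fernet(key)

    sensitive = b"time and attendance record for J. Doe"
    on_disk = cipher.encrypt(sensitive)    # all a casual browser of the PC sees
    print(on_disk != sensitive)            # True: only ciphertext is stored

    recovered = cipher.decrypt(on_disk)    # requires the key
    print(recovered == sensitive)          # True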
The assessment recommended that HGA conduct a thorough review of the mainframe's safeguards in these respects, and that it regularly review the mainframe audit log, using a query package, with particular attention to records that describe user accesses to HGA's employee master database.

20.6.5 Mitigating Network-Related Threats

The assessment recommended that HGA:

- require stronger I&A for dial-in access or, alternatively, provide a restricted version of the mail utility for dial-in use, which would prevent a user from including files in outgoing mail messages;

- replace its current modem pool with encrypting modems, and provide each dial-in user with such a modem; and

- work with the mainframe agency to install a similar encryption capability for server-to-mainframe communications over the WAN.

As with previous risk assessment recommendations, HGA's management tasked COG to analyze the costs, benefits, and impacts of addressing the vulnerabilities identified in the risk assessment. HGA eventually adopted some of the risk assessment's recommendations, while declining others. In addition, HGA decided that its policy on handling time and attendance information needed to be clarified, strengthened, and elaborated, with the belief that implementing such a policy would help reduce the risks of Internet and dial-in eavesdropping. Thus, HGA developed and issued a revised policy, stating that users are individually responsible for ensuring that they do not transmit disclosure-sensitive information outside of HGA's facilities via e-mail or other means. It also prohibited them from examining or transmitting e-mail containing such information during dial-in sessions and developed and promulgated penalties for noncompliance.

20.7 Summary

This chapter has illustrated how many of the concepts described in previous chapters might be applied in a federal agency. An integrated example concerning a Hypothetical Government Agency (HGA) has been discussed and used as the basis for examining a number of these concepts. HGA's distributed system architecture and its uses were described. The time and attendance application was considered in some detail.

For context, some national and agency-level policies were referenced. Detailed operational policies and procedures for computer systems were discussed and related to these high-level policies. HGA's assets and threats were identified, and a detailed survey of selected safeguards, vulnerabilities, and risk mitigation actions was presented. The safeguards included a wide variety of procedural and automated techniques and were used to illustrate issues of assurance, compliance, security program oversight, and interagency coordination.

As illustrated, effective computer security requires clear direction from upper management. Upper management must assign security responsibilities to organizational elements and individuals and must formulate or elaborate the security policies that become the foundation for the organization's security program.
These policies must be based on an understanding of the organization's mission priorities and of the assets and business operations necessary to fulfill them. They must also be based on a pragmatic assessment of the threats against these assets and operations. A critical element is the assessment of threat likelihoods. These are most accurate when derived from historical data, but must also anticipate trends stimulated by emerging technologies.

A good security program relies on an integrated, cost-effective collection of physical, procedural, and automated controls. Cost-effectiveness requires targeting these controls at the threats that pose the highest risks while accepting other residual risks. The difficulty of applying controls properly and in a consistent manner over time has been the downfall of many security programs. This chapter has provided numerous examples in which major security vulnerabilities arose from a lack of assurance or compliance. Hence, periodic compliance audits, examinations of the effectiveness of controls, and reassessments of threats are essential to the success of any organization's security program.

Cross Reference and Index

Interdependencies Cross Reference

The following is a cross reference of the interdependencies sections. Note that the references only include specific controls. Some controls were referenced in groups, such as technical controls, and occasionally interdependencies were noted for all controls.

Control                              Chapters Where It Is Cited
Policy                               Program Management; Life Cycle; Personnel/User;
                                     Contingency; Awareness and Training; Logical Access;
                                     Audit
Program Management                   Policy; Awareness and Training
Risk Management                      Life Cycle; Contingency; Incident
Life Cycle                           Program Management; Assurance
Assurance                            Life Cycle; Support and Operations; Audit;
                                     Cryptography
Personnel                            Training and Awareness; Support and Operations;
                                     Access
Training and Awareness               Personnel/User; Incident; Support and Operations
Contingency                          Incident; Support and Operations; Physical and
                                     Environmental; Audit
Incident                             Contingency; Support and Operations; Audit
Physical and Environmental           Contingency; Support and Operations; Logical Access;
                                     Cryptography
Support and Operations               Contingency; Incident
Identification and Authentication    Personnel/User; Physical and Environmental; Logical
                                     Access; Audit; Cryptography
Access Controls                      Policy; Personnel/User; Physical and Environmental;
                                     Identification and Authentication; Audit; Cryptography
Audit                                Identification and Authentication; Logical Access;
                                     Cryptography
Cryptography                         Identification and Authentication

General Index

A
account management (user)  110-12
access control lists  182, 189, 199-201, 203
access modes  196-7, 200
acknowledgment statements  111, 112, 144
accountability  12, 36, 39, 143, 144, 159, 179, 195, 212
accreditation  6, 66-7, 75, 80, 81-2, 89, 90-2, 94-5
  reaccreditation  75, 83, 84, 85, 96, 100
advanced authentication  181, 204, 230
advanced development  93
asset valuation  61
attack signature  219, 220
audits/auditing  18, 51, 73, 75, 81, 82, 96-9, 110, 111, 112-3, 159, 195, 211
audit reduction  219
authentication, host-based  205
authentication, host-to-host  189
authentication servers  189
authorization (to process)  66, 81, 112
B
bastion host  204
biometrics  180, 186-7

C
certification  75, 81, 85, 91, 93, 95
  self-certification  94
challenge response  185, 186, 189
checksumming  99
cold site  125, 126
Computer Security Act  3, 4, 7, 52-3, 71-2, 73, 76, 143, 149
Computer Security Program Managers' Forum  50, 52, 151
conformance - see validation
consequence assessment  61
constrained user interface  201-2
cost-benefit  65-6, 78, 173-4
crackers - see hackers

D
data categorization  202
Data Encryption Standard (DES)  205, 224, 231
database views  202
diagnostic port - see maintenance accounts
dial-back modems  203
digital signature - see electronic signature
Digital Signature Standard  225, 231
disposition/disposal  75, 85, 86, 160, 197, 235
dual-homed gateway  204
dynamic password generator  185

E
ease of safe use  94
electromagnetic interception  172
  see also electronic monitoring
electronic monitoring  171, 182, 184, 185, 186
electronic/digital signature  95, 99, 218, 228-30, 233
encryption  140, 162, 182, 188, 199, 224-7, 233
end-to-end encryption  233
Escrowed Encryption Standard  224, 225-6, 231
espionage  22, 26-8
evaluations (product)  94
  see also validation
export (of cryptography)  233-4

F
Federal Information Resources Management Regulation (FIRMR)  7, 46, 48, 52
firewalls - see secure gateways
FIRST  52, 139
FISSEA  151

G
gateways - see secure gateways

H
hackers  25-6, 97, 116, 133, 135, 136, 156, 162, 182, 183, 186, 204
HALON  169, 170
hash, secure  228, 230
hot site  125, 126

I
individual accountability - see accountability
integrity statements  95
integrity verification  100, 159-60, 227-30
internal controls  98, 114
intrusion detection  100, 168, 213

J, K
keys, cryptographic for authentication  182
key escrow  225-6
  see also Escrowed Encryption Standard
key management (cryptography)  85, 114-5, 186, 199, 232
keystroke monitoring  214

L
labels  159, 202-3
least privilege  107-8, 109, 112, 114, 179
liabilities  95
likelihood analysis  62-3
link encryption  233

M
maintenance accounts  161-2
malicious code (virus, virus scanning, Trojan horse)  27-8, 79, 95, 99, 133-5, 157, 166, 204, 213, 215, 230
monitoring  36, 67, 75, 79, 82, 86, 96, 99-101, 171, 182, 184, 185, 186, 205, 213, 214, 215

N, O
operational assurance  82-3, 89, 96
OMB Circular A-130  7, 48, 52, 73, 76, 116, 149

P
password crackers  99-100, 182
passwords, one-time  185-6, 189, 230
password-based access control  182, 199
penetration testing  98-9
permission bits  200-1, 203
plan, computer security  53, 71-3, 98, 127, 161
policy (general)  12, 33-43, 49, 51, 78, 144, 161
policy, issue-specific  37-40, 78
policy, program  34-7, 51
policy, system-specific  40-3, 53, 78, 86, 198, 204, 205, 215
port protection devices  203-4
privacy  14, 28-9, 38, 78, 92, 196
privileged accounts  206
proxy host  204
public access  116-7
public key cryptography  223-30
public key infrastructure  232

Q, R
RSA  225
reciprocal agreements  125
redundant site  125
reliable (architectures, security)  93, 94
responsibility  12-3, 15-20
  see also accountability
roles, role-based access  107, 113-4, 195
routers  204

S
safeguard analysis  61
screening (personnel)  108-9, 113, 162
secret key cryptography  223-9
secure gateways (firewalls)  204-5
sensitive (systems, information)  4, 7, 53, 71, 76
sensitivity assessment  75, 76-7
sensitivity (position)  107-9, 205
separation of duties  107, 109, 114, 195
single log-in  188-9
standards, guidelines, procedures  35, 48, 51, 78, 93, 231
system integrity  6-7, 166

T
TEMPEST - see electromagnetic interception
theft  23-4, 26, 166, 172
threat identification  21-29, 61
tokens (authentication)  115, 162, 174, 180-90
Trojan horse - see malicious code
trusted development  93
trusted system  6, 93, 94

U, V
uncertainty analysis  64, 67-8
virus, virus scanning - see malicious code
validation testing  93, 234
variance detection  219
vulnerability analysis  61-2

W, X, Y, Z
warranties  95