{ "pages": [ { "page_number": 1, "text": "" }, { "page_number": 2, "text": "San Francisco • London\nCISSP®:\nCertified Information Systems \nSecurity Professional\nStudy Guide\n3rd Edition\nJames Michael Stewart\nEd Tittel\nMike Chapple\n" }, { "page_number": 3, "text": "" }, { "page_number": 4, "text": "Reinforce understanding of key topics\nwith flashcards for your PC, Pocket PC,\nor Palm handheld!\n\u0001\nContains over 300 flashcard questions.\n\u0001\nRuns on multiple platforms for usability\nand portability.\n\u0001\nQuiz yourself anytime, anywhere!\nAccess the entire book in PDF!\n\u0001\nFull search capabilities let you quickly\nfind the information you need.\n\u0001\nComplete with tables and illustrations.\n\u0001\nAdobe Acrobat Reader included.\nhe Best CISSP Study Combination Available!\nT\nPrepare yourself for the CISSP exam\nwith hundreds of challenging sample\ntest questions!\n\u0001\nChapter-by-chapter review questions\nfrom the book.\n\u0001\nFive bonus exams available only on \nthe CD.\n\u0001\nSupports question formats found on\nactual exam.\n" }, { "page_number": 5, "text": "My sole source of exam-related study was this book. I found that I knew much\nof the material already, but this book definitely filled in all of the gaps.\n—Amazon.com reader from Charlotte, NC\nAs I took the CISSP exam, I kept thinking, ‘the CISSP Study Guide authors\nreally knew what they were talking about.’ If you were to know this book\nbackwards and forwards, you would do well on the CISSP exam.\n—Amazon.com reader from Utah, USA\nThis book follows in the tradition of the Sybex MCSE Study Guides, provid-\ning a good balance between detailed explanation and comprehensive coverage\nof the exam topics.\n—J. 
O’Connor, Amazon.com reader from Dublin, Ireland\nIt is crisp, sets the right tone for the actual exam, and does not lie.\n—Amazon.com reader from New York City\nI recently took and passed the CISSP exam…My sole source of exam related\nstudy was this book.\n—Amazon.com reader\nPraise for CISSP: Certified Information Systems\nSecurity Professional Study Guide from Sybex\n" }, { "page_number": 6, "text": "CISSP:\nCertified Information Systems \nSecurity Professional\nStudy Guide\n3rd Edition\n" }, { "page_number": 7, "text": "" }, { "page_number": 8, "text": "San Francisco • London\nCISSP®:\nCertified Information Systems \nSecurity Professional\nStudy Guide\n3rd Edition\nJames Michael Stewart\nEd Tittel\nMike Chapple\n" }, { "page_number": 9, "text": "Publisher: Neil Edde\nAcquisitions and Developmental Editor: Heather O’Connor\nProduction Editor: Lori Newman\nTechnical Editor: Ed Tittel\nCopyeditor: Judy Flynn\nCompositor: Jeffrey Wilson, Happenstance Type-O-Rama\nCD Coordinators and Technicians: Dan Mummert, Keith McNeil, Kevin Ly\nProofreaders: Nancy Riddiough, Jim Brook, Candace English\nIndexer: Ted Laux\nBook Designer: Bill Gibson, Judy Fung\nCover Designer: Archer Design\nCover Illustrator/Photographer: Victor Arre and Photodisc\nCopyright © 2005 SYBEX Inc., 1151 Marina Village Parkway, Alameda, CA 94501. World rights reserved. No \npart of this publication may be stored in a retrieval system, transmitted, or reproduced in any way, including but \nnot limited to photocopy, photograph, magnetic, or other record, without the prior agreement and written per-\nmission of the publisher.\nFirst edition copyright © 2004 SYBEX Inc.\nLibrary of Congress Card Number: 2005929270\nISBN: 0-7821-4443-8\nSYBEX and the SYBEX logo are either registered trademarks or trademarks of SYBEX Inc. in the United States \nand/or other countries.\nScreen reproductions produced with FullShot 99. FullShot 99 © 1991-1999 Inbit Incorporated. 
All rights reserved.\nFullShot is a trademark of Inbit Incorporated.\nThe CD interface was created using Macromedia Director, COPYRIGHT 1994, 1997-1999 Macromedia Inc. For \nmore information on Macromedia and Macromedia Director, visit http://www.macromedia.com.\nThis study guide and/or material is not sponsored by, endorsed by, or affiliated with International Information \nSystems Security Certification Consortium, Inc. (ISC)2® and CISSP® are registered service and/or trademarks of \nthe International Information Systems Security Certification Consortium, Inc. All other trademarks are the prop-\nerty of their respective owners.\nTRADEMARKS: SYBEX has attempted throughout this book to distinguish proprietary trademarks from \ndescriptive terms by following the capitalization style used by the manufacturer.\nThe author and publisher have made their best efforts to prepare this book, and the content is based upon final \nrelease software whenever possible. Portions of the manuscript may be based upon pre-release versions supplied \nby software manufacturer(s). The author and the publisher make no representation or warranties of any kind \nwith regard to the completeness or accuracy of the contents herein and accept no liability of any kind including \nbut not limited to performance, merchantability, fitness for any particular purpose, or any losses or damages of \nany kind caused or alleged to be caused directly or indirectly from this book.\nManufactured in the United States of America\n10 9 8 7 6 5 4 3 2 1\n" }, { "page_number": 10, "text": "Wiley Publishing Inc End-User License Agreement\nREAD THIS. You should carefully read these terms and \nconditions before opening the software packet(s) included \nwith this book “Book”. This is a license agreement “Agree-\nment” between you and Wiley Publishing, Inc.”WPI”. By \nopening the accompanying software packet(s), you \nacknowledge that you have read and accept the following \nterms and conditions. 
If you do not agree and do not want \nto be bound by such terms and conditions, promptly return \nthe Book and the unopened software packet(s) to the place \nyou obtained them for a full refund.\n1. License Grant. WPI grants to you (either an individual \nor entity) a nonexclusive license to use one copy of the \nenclosed software program(s) (collectively, the “Software” \nsolely for your own personal or business purposes on a sin-\ngle computer (whether a standard computer or a worksta-\ntion component of a multi-user network). The Software is \nin use on a computer when it is loaded into temporary \nmemory (RAM) or installed into permanent memory (hard \ndisk, CD-ROM, or other storage device). WPI reserves all \nrights not expressly granted herein.\n2. Ownership. WPI is the owner of all right, title, and inter-\nest, including copyright, in and to the compilation of the \nSoftware recorded on the disk(s) or CD-ROM “Software \nMedia”. Copyright to the individual programs recorded \non the Software Media is owned by the author or other \nauthorized copyright owner of each program. Ownership \nof the Software and all proprietary rights relating thereto \nremain with WPI and its licensers.\n3. Restrictions On Use and Transfer. (a) You may only (i) \nmake one copy of the Software for backup or archival pur-\nposes, or (ii) transfer the Software to a single hard disk, \nprovided that you keep the original for backup or archival \npurposes. You may not (i) rent or lease the Software, (ii) \ncopy or reproduce the Software through a LAN or other \nnetwork system or through any computer subscriber sys-\ntem or bulletin- board system, or (iii) modify, adapt, or cre-\nate derivative works based on the Software. (b) You may \nnot reverse engineer, decompile, or disassemble the Soft-\nware. 
You may transfer the Software and user documenta-\ntion on a permanent basis, provided that the transferee \nagrees to accept the terms and conditions of this Agree-\nment and you retain no copies. If the Software is an update \nor has been updated, any transfer must include the most \nrecent update and all prior versions.\n4. Restrictions on Use of Individual Programs. You must \nfollow the individual requirements and restrictions \ndetailed for each individual program in the About the CD-\nROM appendix of this Book. These limitations are also \ncontained in the individual license agreements recorded on \nthe Software Media. These limitations may include a \nrequirement that after using the program for a specified \nperiod of time, the user must pay a registration fee or dis-\ncontinue use. By opening the Software packet(s), you will \nbe agreeing to abide by the licenses and restrictions for \nthese individual programs that are detailed in the About \nthe CD-ROM appendix and on the Software Media. None \nof the material on this Software Media or listed in this \nBook may ever be redistributed, in original or modified \nform, for commercial purposes.\n5. Limited Warranty. (a) WPI warrants that the Software \nand Software Media are free from defects in materials and \nworkmanship under normal use for a period of sixty (60) \ndays from the date of purchase of this Book. If WPI \nreceives notification within the warranty period of defects\nin materials or workmanship, WPI will replace the defec-\ntive Software Media. (b) WPI AND THE AUTHOR OF \nTHE BOOK DISCLAIM ALL OTHER WARRANTIES, \nEXPRESS OR IMPLIED, INCLUDING WITHOUT LIM-\nITATION IMPLIED WARRANTIES OF MERCHANT-\nABILITY AND FITNESS FOR A PARTICULAR \nPURPOSE, WITH RESPECT TO THE SOFTWARE, \nTHE PROGRAMS, THE SOURCE CODE CON-\nTAINED THEREIN, AND/OR THE TECHNIQUES \nDESCRIBED IN THIS BOOK. 
WPI DOES NOT WAR-\nRANT THAT THE FUNCTIONS CONTAINED IN \nTHE SOFTWARE WILL MEET YOUR REQUIRE-\nMENTS OR THAT THE OPERATION OF THE SOFT-\nWARE WILL BE ERROR FREE. (c) This limited warranty \ngives you specific legal rights, and you may have other \nrights that vary from jurisdiction to jurisdiction.\n6. Remedies. (a) WPI’s entire liability and your exclusive \nremedy for defects in materials and workmanship shall be \nlimited to replacement of the Software Media, which may \nbe returned to WPI with a copy of your receipt at the fol-\nlowing address:\nSoftware Media Fulfillment Department,\nAttn.: CISSP: Certified Information Systems Security \nProfessional Study Guide, 3rd Ed.,\nWiley Publishing, Inc., 10475\nCrosspoint Blvd., Indianapolis, IN 46256, \nor call 1-800-762-2974. \nPlease allow four to six weeks for delivery. This Limited \nWarranty is void if failure of the Software Media has \nresulted from accident, abuse, or misapplication. Any \nreplacement Software Media will be warranted for the \nremainder of the original warranty period or thirty (30) \ndays, whichever is longer. (b) In no event shall WPI or the \nauthor be liable for any damages whatsoever (including \nwithout limitation damages for loss of business profits, \nbusiness interruption, loss of business information, or any \nother pecuniary loss) arising from the use of or inability to \nuse the Book or the Software, even if WPI has been advised \nof the possibility of such damages. (c) Because some juris-\ndictions do not allow the exclusion or limitation of liability \nfor consequential or incidental damages, the above limita-\ntion or exclusion may not apply to you.\n7. U.S. Government Restricted Rights. Use, duplication, or \ndisclosure of the Software for or on behalf of the United \nStates of America, its agencies and/or instrumentalities \n“U.S. 
Government” is subject to restrictions as stated in \nparagraph (c)(1)(ii) of the Rights in Technical Data and \nComputer Software clause of DFARS 252.227-7013, or \nsubparagraphs (c) (1) and (2) of the Commercial Com-\nputer Software - Restricted Rights clause at FAR 52.227-\n19, and in similar clauses in the NASA FAR supplement, as \napplicable.\n8. General. This Agreement constitutes the entire under-\nstanding of the parties and revokes and supersedes all prior \nagreements, oral or written, between them and may not be \nmodified or amended except in a writing signed by both \nparties hereto that specifically refers to this Agreement. \nThis Agreement shall take precedence over any other doc-\numents that may be in conflict herewith. If any one or more \nprovisions contained in this Agreement are held by any \ncourt or tribunal to be invalid, illegal, or otherwise unen-\nforceable, each and every other provision shall remain in \nfull force and effect.\n" }, { "page_number": 11, "text": "To Cathy, whenever there is trouble, just remember “Some beach, somewhere...”\n" }, { "page_number": 12, "text": "Acknowledgments\nWow, I can’t believe it has already been a year since the last revision and lots of things have \nchanged in the world of CISSP. I hope our efforts to improve this study guide will lend themselves \nhandily to your understanding and comprehension of the wide breadth of CISSP concepts. I’d like \nto express my thanks to Sybex for continuing to support this project. Thanks to Ed Tittel, co-author \n(1st and 2nd editions) and technical editor (3rd edition), for a great job making sure as few \nerrors as possible made it into print. Also thanks to all my CISSP course students who have provided \ntheir insight and input to improve my training courseware and ultimately this tome.\nTo my fiancée, Cathy, I’m looking forward to a wonderful life shared with you. To my parents, \nDave and Sue, thanks for your love and consistent support. 
To my sister Sharon and \nnephew Wesley, it’s great having family like you to spend time with. To Mark, we’d all get along \nbetter if you and everyone else would just learn to worship me. To HERbert and Quin, brace \nyourself, the zoo is about to invade! And finally, as always, to Elvis—I just discovered you’ve \nbeen re-incarnated in the Cow Parade as Cowlvis!\n—James Michael Stewart\n" }, { "page_number": 13, "text": "Contents At A Glance\nIntroduction\nxxiii\nAssessment Test\nxxxi\nChapter\n1\nAccountability and Access Control \n1\nChapter\n2\nAttacks and Monitoring \n43\nChapter\n3\nISO Model, Network Security, and Protocols \n69\nChapter\n4\nCommunications Security and Countermeasures \n121\nChapter\n5\nSecurity Management Concepts and Principles \n153\nChapter\n6\nAsset Value, Policies, and Roles \n175\nChapter\n7\nData and Application Security Issues \n209\nChapter\n8\nMalicious Code and Application Attacks \n257\nChapter\n9\nCryptography and Private Key Algorithms \n293\nChapter\n10\nPKI and Cryptographic Applications \n335\nChapter\n11\nPrinciples of Computer Design \n369\nChapter\n12\nPrinciples of Security Models \n415\nChapter\n13\nAdministrative Management \n449\nChapter\n14\nAuditing and Monitoring \n477\nChapter\n15\nBusiness Continuity Planning \n509\nChapter\n16\nDisaster Recovery Planning \n535\nChapter\n17\nLaw and Investigations \n571\nChapter\n18\nIncidents and Ethics \n605\nChapter\n19\nPhysical Security Requirements \n627\nGlossary\n659\nIndex\n725\n" }, { "page_number": 14, "text": "Contents\nIntroduction\nxxiii\nAssessment Test\nxxxi\nChapter\n1\nAccountability and Access Control\n1\nAccess Control Overview \n2\nTypes of Access Control \n2\nAccess Control in a Layered Environment \n5\nThe Process of Accountability \n5\nIdentification and Authentication Techniques \n9\nPasswords \n10\nBiometrics \n13\nTokens \n18\nTickets \n20\nSingle Sign On \n20\nAccess Control Techniques \n23\nDiscretionary Access Controls (DAC) \n23\nNondiscretionary 
Access Controls \n24\nMandatory Access Controls \n24\nRole-Based Access Control (RBAC) \n25\nLattice-Based Access Controls \n26\nAccess Control Methodologies and Implementation \n27\nCentralized and Decentralized Access Control \n27\nRADIUS and TACACS \n27\nAccess Control Administration \n28\nAccount Administration \n29\nAccount, Log, and Journal Monitoring \n30\nAccess Rights and Permissions \n30\nSummary \n32\nExam Essentials \n34\nReview Questions \n36\nAnswers to Review Questions \n40\nChapter\n2\nAttacks and Monitoring\n43\nMonitoring \n44\nIntrusion Detection \n45\nHost-Based and Network-Based IDSs \n46\nKnowledge-Based and Behavior-Based Detection \n47\nIDS-Related Tools \n48\nPenetration Testing \n49\n" }, { "page_number": 15, "text": "x\nContents\nMethods of Attacks \n50\nBrute Force and Dictionary Attacks \n51\nDenial of Service \n52\nSpoofing Attacks \n55\nMan-in-the-Middle Attacks \n56\nSniffer Attacks \n57\nSpamming Attacks \n57\nCrackers \n58\nAccess Control Compensations \n58\nSummary \n59\nExam Essentials \n59\nReview Questions \n62\nAnswers to Review Questions \n66\nChapter\n3\nISO Model, Network Security, and Protocols\n69\nOSI Model \n70\nHistory of the OSI Model \n70\nOSI Functionality \n71\nEncapsulation/Deencapsulation \n72\nOSI Layers \n73\nTCP/IP Model \n78\nCommunications and Network Security \n79\nNetwork Cabling \n79\nLAN Technologies \n84\nNetwork Topologies \n87\nTCP/IP Overview \n89\nInternet/Intranet/Extranet Components \n96\nFirewalls \n97\nOther Network Devices \n100\nRemote Access Security Management \n102\nNetwork and Protocol Security Mechanisms \n103\nVPN Protocols \n103\nSecure Communications Protocols \n104\nE-Mail Security Solutions \n105\nDial-Up Protocols \n105\nAuthentication Protocols \n106\nCentralized Remote Authentication Services \n106\nNetwork and Protocol Services \n107\nFrame Relay \n107\nOther WAN Technologies \n108\nAvoiding Single Points of Failure \n108\nRedundant Servers \n109\nFailover Solutions \n109\nRAID 
\n110\n" }, { "page_number": 16, "text": "Contents\nxi\nSummary \n111\nExam Essentials \n112\nReview Questions \n114\nAnswers to Review Questions \n118\nChapter\n4\nCommunications Security and Countermeasures\n121\nVirtual Private Network (VPN) \n122\nTunneling \n123\nHow VPNs Work \n124\nImplementing VPNs \n124\nNetwork Address Translation \n125\nPrivate IP Addresses \n125\nStateful NAT \n126\nSwitching Technologies \n126\nCircuit Switching \n126\nPacket Switching \n127\nVirtual Circuits \n127\nWAN Technologies \n128\nWAN Connection Technologies \n129\nEncapsulation Protocols \n130\nMiscellaneous Security Control Characteristics \n131\nTransparency \n131\nVerifying Integrity \n131\nTransmission Mechanisms \n132\nManaging E-Mail Security \n132\nE-Mail Security Goals \n132\nUnderstanding E-Mail Security Issues \n133\nE-Mail Security Solutions \n134\nSecuring Voice Communications \n136\nSocial Engineering \n136\nFraud and Abuse \n137\nPhreaking \n138\nSecurity Boundaries \n139\nNetwork Attacks and Countermeasures \n139\nEavesdropping \n140\nSecond-Tier Attacks \n140\nAddress Resolution Protocol (ARP) \n141\nSummary \n142\nExam Essentials \n143\nReview Questions \n146\nAnswers to Review Questions \n150\nChapter\n5\nSecurity Management Concepts and Principles\n153\nSecurity Management Concepts and Principles \n154\nConfidentiality \n154\n" }, { "page_number": 17, "text": "xii\nContents\nIntegrity \n155\nAvailability \n156\nOther Security Concepts \n157\nProtection Mechanisms \n159\nLayering \n160\nAbstraction \n160\nData Hiding \n160\nEncryption \n161\nChange Control/Management \n161\nData Classification \n162\nSummary \n165\nExam Essentials \n166\nReview Questions \n168\nAnswers to Review Questions \n172\nChapter\n6\nAsset Value, Policies, and Roles\n175\nEmployment Policies and Practices \n176\nSecurity Management for Employees \n176\nSecurity Roles \n179\nSecurity Management Planning \n181\nPolicies, Standards, Baselines, Guidelines, and Procedures \n182\nSecurity 
Policies \n182\nSecurity Standards, Baselines, and Guidelines \n184\nSecurity Procedures \n184\nRisk Management \n185\nRisk Terminology \n186\nRisk Assessment Methodologies \n188\nQuantitative Risk Analysis \n190\nQualitative Risk Analysis \n193\nHandling Risk \n195\nSecurity Awareness Training \n196\nSummary \n197\nExam Essentials \n199\nReview Questions \n202\nAnswers to Review Questions \n206\nChapter\n7\nData and Application Security Issues\n209\nApplication Issues \n210\nLocal/Nondistributed Environment \n210\nDistributed Environment \n212\nDatabases and Data Warehousing \n216\nDatabase Management System (DBMS) Architecture \n216\nDatabase Transactions \n219\n" }, { "page_number": 18, "text": "Contents\nxiii\nSecurity for Multilevel Databases \n220\nODBC \n222\nAggregation \n223\nData Mining \n224\nData/Information Storage \n225\nTypes of Storage \n225\nStorage Threats \n226\nKnowledge-Based Systems \n226\nExpert Systems \n227\nNeural Networks \n228\nDecision Support Systems \n228\nSecurity Applications \n229\nSystems Development Controls \n229\nSoftware Development \n229\nSystems Development Life Cycle \n234\nLife Cycle Models \n237\nGantt Charts and PERT \n240\nChange Control and Configuration Management \n242\nSoftware Testing \n243\nSecurity Control Architecture \n244\nService Level Agreements \n247\nSummary \n247\nExam Essentials \n248\nWritten Lab \n249\nReview Questions \n250\nAnswers to Review Questions \n254\nAnswers to Written Lab \n256\nChapter\n8\nMalicious Code and Application Attacks\n257\nMalicious Code \n258\nSources \n258\nViruses \n259\nLogic Bombs \n264\nTrojan Horses \n264\nWorms \n265\nActive Content \n267\nCountermeasures \n267\nPassword Attacks \n268\nPassword Guessing \n269\nDictionary Attacks \n269\nSocial Engineering \n270\nCountermeasures \n270\nDenial of Service Attacks \n271\nSYN Flood \n271\n" }, { "page_number": 19, "text": "xiv\nContents\nDistributed DoS Toolkits \n272\nSmurf \n273\nTeardrop \n274\nLand \n276\nDNS Poisoning 
\n276\nPing of Death \n276\nApplication Attacks \n277\nBuffer Overflows \n277\nTime-of-Check-to-Time-of-Use \n278\nTrap Doors \n278\nRootkits \n278\nReconnaissance Attacks \n278\nIP Probes \n279\nPort Scans \n279\nVulnerability Scans \n279\nDumpster Diving \n280\nMasquerading Attacks \n280\nIP Spoofing \n280\nSession Hijacking \n281\nDecoy Techniques \n281\nHoney Pots \n281\nPseudo-Flaws \n281\nSummary \n282\nExam Essentials \n283\nWritten Lab \n284\nReview Questions \n285\nAnswers to Review Questions \n289\nAnswers to Written Lab \n291\nChapter\n9\nCryptography and Private Key Algorithms\n293\nHistory \n294\nCaesar Cipher \n294\nAmerican Civil War \n295\nUltra vs. Enigma \n295\nCryptographic Basics \n296\nGoals of Cryptography \n296\nCryptography Concepts \n297\nCryptographic Mathematics \n299\nCiphers \n305\nModern Cryptography \n310\nCryptographic Keys \n311\nSymmetric Key Algorithms \n312\nAsymmetric Key Algorithms \n313\nHashing Algorithms \n316\n" }, { "page_number": 20, "text": "xl\nAnswers to Assessment Test\n27. B. Layers 1 and 2 contain device drivers but are not normally implemented in practice. Layer \n0 always contains the security kernel. Layer 3 contains user applications. Layer 4 does not exist. \nFor more information, please see Chapter 7.\n28. C. Transposition ciphers use an encryption algorithm to rearrange the letters of the plaintext \nmessage to form a ciphertext message. For more information, please see Chapter 9.\n29. C. The annualized loss expectancy (ALE) is computed as the product of the asset value (AV) \ntimes the annualized rate of occurrence (ARO). The other formulas displayed here do not accu-\nrately reflect this calculation. For more information, please see Chapter 15.\n30. C. The principle of integrity states that objects retain their veracity and are only intentionally \nmodified by authorized subjects. For more information, please see Chapter 5.\n31. D. 
E-mail is the most common delivery mechanism for viruses, worms, Trojan horses, docu-\nments with destructive macros, and other malicious code. For more information, please see \nChapter 4.\n32. A. Technical security controls include access controls, intrusion detection, alarms, CCTV, \nmonitoring, HVAC, power supplies, and fire detection and suppression. For more information, \nplease see Chapter 19.\n33. A. Administrative determinations of federal agencies are published as the Code of Federal Reg-\nulations. For more information, please see Chapter 17.\n34. A. Identification of priorities is the first step of the Business Impact Assessment process. For \nmore information, please see Chapter 15.\n35. C. Any recipient can use Mike’s public key to verify the authenticity of the digital signature. For \nmore information, please see Chapter 10.\n36. C. A Type 3 authentication factor is something you are, such as fingerprints, voice print, retina \npattern, iris pattern, face shape, palm topology, hand geometry, and so on. For more informa-\ntion, please see Chapter 1.\n37. C. The primary goal of risk management is to reduce risk to an acceptable level. 
For more infor-\nmation, please see Chapter 6.\n" }, { "page_number": 21, "text": "Contents\nxv\nSymmetric Cryptography \n316\nData Encryption Standard (DES) \n316\nTriple DES (3DES) \n318\nInternational Data Encryption Algorithm (IDEA) \n319\nBlowfish \n319\nSkipjack \n320\nAdvanced Encryption Standard (AES) \n320\nKey Distribution \n322\nKey Escrow \n324\nSummary \n324\nExam Essentials \n325\nWritten Lab \n327\nReview Questions \n328\nAnswers to Review Questions \n332\nAnswers to Written Lab \n334\nChapter\n10\nPKI and Cryptographic Applications\n335\nAsymmetric Cryptography \n336\nPublic and Private Keys \n337\nRSA \n337\nEl Gamal \n338\nElliptic Curve \n339\nHash Functions \n340\nSHA \n341\nMD2 \n342\nMD4 \n342\nMD5 \n343\nDigital Signatures \n344\nHMAC \n345\nDigital Signature Standard \n345\nPublic Key Infrastructure \n346\nCertificates \n346\nCertificate Authorities \n347\nCertificate Generation and Destruction \n348\nKey Management \n350\nApplied Cryptography \n350\nElectronic Mail \n351\nWeb \n353\nE-Commerce \n354\nNetworking \n355\nCryptographic Attacks \n359\nSummary \n360\nExam Essentials \n361\nReview Questions \n363\nAnswers to Review Questions \n367\n" }, { "page_number": 22, "text": "xvi\nContents\nChapter\n11\nPrinciples of Computer Design\n369\nComputer Architecture \n371\nHardware \n371\nInput/Output Structures \n389\nFirmware \n391\nSecurity Protection Mechanisms \n391\nTechnical Mechanisms \n391\nSecurity Policy and Computer Architecture \n393\nPolicy Mechanisms \n394\nDistributed Architecture \n395\nSecurity Models \n397\nState Machine Model \n397\nInformation Flow Model \n398\nNoninterference Model \n398\nTake-Grant Model \n398\nAccess Control Matrix \n399\nBell-LaPadula Model \n400\nBiba \n402\nClark-Wilson \n403\nBrewer and Nash Model (a.k.a. 
Chinese Wall) \n403\nClassifying and Comparing Models \n404\nSummary \n405\nExam Essentials \n406\nReview Questions \n408\nAnswers to Review Questions \n412\nChapter\n12\nPrinciples of Security Models\n415\nCommon Security Models, Architectures, and Evaluation Criteria \n416\nTrusted Computing Base (TCB) \n417\nSecurity Models \n418\nObjects and Subjects \n420\nClosed and Open Systems \n421\nTechniques for Ensuring Confidentiality, Integrity, \nand Availability \n422\nControls \n423\nTrust and Assurance \n423\nUnderstanding System Security Evaluation \n424\nRainbow Series \n424\nITSEC Classes and Required Assurance and Functionality \n428\nCommon Criteria \n429\nCertification and Accreditation \n432\nCommon Flaws and Security Issues \n435\nCovert Channels \n435\n" }, { "page_number": 23, "text": "Contents\nxvii\nAttacks Based on Design or Coding Flaws and Security Issues \n435\nProgramming \n439\nTiming, State Changes, and Communication Disconnects \n439\nElectromagnetic Radiation \n439\nSummary \n440\nExam Essentials \n441\nReview Questions \n443\nAnswers to Review Questions \n447\nChapter\n13\nAdministrative Management\n449\nOperations Security Concepts \n450\nAntivirus Management \n451\nOperational Assurance and Life Cycle Assurance \n452\nBackup Maintenance \n452\nChanges in Workstation/Location \n453\nNeed-to-Know and the Principle of Least Privilege \n453\nPrivileged Operations Functions \n454\nTrusted Recovery \n455\nConfiguration and Change Management Control \n455\nStandards of Due Care and Due Diligence \n456\nPrivacy and Protection \n457\nLegal Requirements \n457\nIllegal Activities \n457\nRecord Retention \n458\nSensitive Information and Media \n458\nSecurity Control Types \n461\nOperations Controls \n462\nPersonnel Controls \n464\nSummary \n466\nExam Essentials \n467\nReview Questions \n470\nAnswers to Review Questions \n474\nChapter\n14\nAuditing and Monitoring\n477\nAuditing \n478\nAuditing Basics \n478\nAudit Trails \n480\nReporting Concepts 
481
Sampling 482
Record Retention 483
External Auditors 484
Monitoring 484
Monitoring Tools and Techniques 485
Penetration Testing Techniques 486
Planning Penetration Testing 487
Penetration Testing Teams 488
Ethical Hacking 488
War Dialing 488
Sniffing and Eavesdropping 489
Radiation Monitoring 490
Dumpster Diving 490
Social Engineering 491
Problem Management 491
Inappropriate Activities 491
Indistinct Threats and Countermeasures 492
Errors and Omissions 492
Fraud and Theft 493
Collusion 493
Sabotage 493
Loss of Physical and Infrastructure Support 493
Malicious Hackers or Crackers 495
Espionage 495
Malicious Code 495
Traffic and Trend Analysis 495
Initial Program Load Vulnerabilities 496
Summary 497
Exam Essentials 498
Review Questions 502
Answers to Review Questions 506

Chapter 15  Business Continuity Planning 509
Business Continuity Planning 510
Project Scope and Planning 511
Business Organization Analysis 511
BCP Team Selection 512
Resource Requirements 513
Legal and Regulatory Requirements 514
Business Impact Assessment 515
Identify Priorities 516
Risk Identification 516
Likelihood Assessment 517
Impact Assessment 518
Resource Prioritization 519
Continuity Strategy 519
Strategy Development 519
Provisions and Processes 520
Plan Approval 522
Plan Implementation 522
Training and Education 522
BCP Documentation 523
Continuity Planning Goals 523
Statement of Importance 523
Statement of Priorities 524
Statement of Organizational Responsibility 524
Statement of Urgency and Timing 524
Risk Assessment 524
Risk Acceptance/Mitigation 525
Vital Records Program 525
Emergency Response Guidelines 525
Maintenance 525
Testing 526
Summary 526
Exam Essentials 526
Review Questions 528
Answers to Review Questions 532

Chapter 16  Disaster Recovery Planning 535
Disaster Recovery Planning 536
Natural Disasters 537
Man-Made Disasters 541
Recovery Strategy 545
Business Unit Priorities 545
Crisis Management 546
Emergency Communications 546
Work Group Recovery 546
Alternate Processing Sites 547
Mutual Assistance Agreements 550
Database Recovery 551
Recovery Plan Development 552
Emergency Response 553
Personnel Notification 553
Backups and Offsite Storage 554
Software Escrow Arrangements 557
External Communications 558
Utilities 558
Logistics and Supplies 558
Recovery vs. Restoration 558
Training and Documentation 559
Testing and Maintenance 560
Checklist Test 560
Structured Walk-Through 560
Simulation Test 561
Parallel Test 561
Full-Interruption Test 561
Maintenance 561
Summary 561
Exam Essentials 562
Written Lab 563
Review Questions 564
Answers to Review Questions 568
Answers to Written Lab 570

Chapter 17  Law and Investigations 571
Categories of Laws 572
Criminal Law 572
Civil Law 573
Administrative Law 574
Laws 574
Computer Crime 575
Intellectual Property 578
Licensing 584
Import/Export 584
Privacy 585
Investigations 590
Evidence 591
Investigation Process 593
Summary 595
Exam Essentials 595
Written Lab 597
Review Questions 598
Answers to Review Questions 602
Answers to Written Lab 604

Chapter 18  Incidents and Ethics 605
Major Categories of Computer Crime 606
Military and Intelligence Attacks 607
Business Attacks 607
Financial Attacks 608
Terrorist Attacks 608
Grudge Attacks 609
“Fun” Attacks 609
Evidence 610
Incident Handling 610
Common Types of Incidents 611
Response Teams 612
Abnormal and Suspicious Activity 614
Confiscating Equipment, Software, and Data 614
Incident Data Integrity and Retention 615
Reporting Incidents 615
Ethics 616
(ISC)2 Code of Ethics 616
Ethics and the Internet 617
Summary 618
Exam Essentials 619
Review Questions 621
Answers to Review Questions 625

Chapter 19  Physical Security Requirements 627
Facility Requirements 628
Secure Facility Plan 629
Physical Security Controls 629
Site Selection 629
Visibility 630
Accessibility 630
Natural Disasters 630
Facility Design 630
Work Areas 630
Server Rooms 631
Visitors 631
Forms of Physical Access Controls 631
Fences, Gates, Turnstiles, and Mantraps 632
Lighting 633
Security Guards and Dogs 634
Keys and Combination Locks 634
Badges 635
Motion Detectors 635
Intrusion Alarms 635
Secondary Verification Mechanisms 636
Technical Controls 636
Smart Cards 637
Proximity Readers 637
Access Abuses 638
Intrusion Detection Systems 638
Emanation Security 639
Environment and Life Safety 640
Personnel Safety 640
Power and Electricity 640
Noise 642
Temperature, Humidity, and Static 642
Water 643
Fire Detection and Suppression 643
Equipment Failure 647
Summary 648
Exam Essentials 649
Review Questions 652
Answers to Review Questions 656

Glossary 659
Index 725

Introduction

The CISSP: Certified Information Systems Security Professional Study Guide, 3rd Edition offers you a solid foundation for the Certified Information Systems Security Professional (CISSP) exam. By purchasing this book, you’ve shown a willingness to learn and a desire to develop the skills you need to achieve this certification. This introduction provides you with a basic overview of this book and the CISSP exam.

This book is designed for readers and students who want to study for the CISSP certification exam.
If your goal is to become a certified security professional, then the CISSP certification and this study guide are for you. The purpose of this book is to adequately prepare you to take the CISSP exam.

Before you dive into this book, you need to have accomplished a few tasks on your own. You need to have a general understanding of IT and of security. You should have the necessary 4 years of experience (or 3 years if you have a college degree) in one of the 10 domains covered by the CISSP exam. If you are qualified to take the CISSP exam according to (ISC)2, then you are sufficiently prepared to use this book to study for it. For more information on (ISC)2, see the next section.

(ISC)2

The CISSP exam is governed by the International Information Systems Security Certification Consortium, Inc., known as (ISC)2. (ISC)2 is a global not-for-profit organization. It has four primary mission goals:

• Maintain the Common Body of Knowledge for the field of information systems security
• Provide certification for information systems security professionals and practitioners
• Conduct certification training and administer the certification exams
• Oversee the ongoing accreditation of qualified certification candidates through continued education

(ISC)2 is operated by a board of directors elected from the ranks of its certified practitioners. More information about (ISC)2 can be obtained from its website at www.isc2.org.

CISSP and SSCP

(ISC)2 supports and provides two primary certifications: CISSP and SSCP. These certifications are designed to emphasize the knowledge and skills of an IT security professional across all industries. CISSP is a certification for security professionals who have the task of designing a security infrastructure for an organization. Systems Security Certified Practitioner (SSCP) is a certification for security professionals who have the responsibility of implementing a security infrastructure in an organization. The CISSP certification covers material from the 10 CBK domains:

1. Access Control Systems and Methodology
2. Telecommunications and Network Security
3. Security Management Practices
4. Applications and Systems Development Security
5. Cryptography
6. Security Architecture and Models
7. Operations Security
8. Business Continuity Planning and Disaster Recovery Planning
9. Law, Investigations, and Ethics
10. Physical Security

The SSCP certification covers material from 7 CBK domains:

• Access Controls
• Administration
• Audit and Monitoring
• Cryptography
• Data Communications
• Malicious Code/Malware
• Risk, Response, and Recovery

The content of the CISSP and SSCP domains overlaps significantly, but the focus is different for each set of domains. CISSP focuses on theory and design, whereas SSCP focuses more on implementation. This book covers only the domains for the CISSP exam.

Prequalifications

(ISC)2 has defined several qualification requirements you must meet to become a CISSP. First, you must be a practicing security professional with at least 4 years’ experience, or with 3 years’ experience and a recent IT or IS degree. Professional experience is defined as security work performed for salary or commission within one or more of the 10 CBK domains.

Second, you must agree to adhere to the code of ethics. The CISSP Code of Ethics is a set of guidelines the (ISC)2 wants all CISSP candidates to follow in order to maintain professionalism in the field of information systems security. You can find it in the Information section on the (ISC)2 website at www.isc2.org.

(ISC)2 has created a new program known as an Associate of (ISC)2. This program allows someone without enough experience to take the CISSP exam and then obtain the required experience afterward. Associates are given 5 years to obtain 4 years of security experience. Only after providing proof of experience, usually by means of endorsement and a resume, does (ISC)2 award the individual the CISSP certification.

To sign up for the exam, visit the (ISC)2 website and follow the instructions listed there on registering to take the CISSP exam. You’ll provide your contact information, payment details, and security-related professional experience. You’ll also select one of the available times and locations for the exam. Once (ISC)2 approves your application to take the exam, you’ll receive a confirmation e-mail with all the details you’ll need to find the testing center and take the exam.

Overview of the CISSP Exam

The CISSP exam consists of 250 questions, and you are given 6 hours to complete it. The exam is still administered in a booklet and answer sheet format. This means you’ll be using a pencil to fill in answer bubbles.

The CISSP exam focuses on security from a 30,000-foot view; it deals more with theory and concept than implementation and procedure. It is very broad but not very deep. To successfully complete the exam, you’ll need to be familiar with every domain but not necessarily be a master of each domain.

You’ll need to register for the exam through the (ISC)2 website at www.isc2.org.

(ISC)2 administers the exam itself. In most cases, the exams are held in large conference rooms at hotels. Existing CISSP holders are recruited to serve as proctors or administrators over the exams.
Be sure to arrive at the testing center around 8:00 a.m., and keep in mind that absolutely no one will be admitted into the exam after 8:30 a.m.

CISSP Exam Question Types

Every question on the CISSP exam is a four-option multiple-choice question with a single correct answer. Some are straightforward, such as asking you to select a definition. Some are a bit more involved, such as asking you to select the appropriate concept or best practice. And some questions present you with a scenario or situation and ask you to select the best response. Here’s an example:

1. What is the most important goal and top priority of a security solution?
A. Prevention of disclosure
B. Maintaining integrity
C. Human safety
D. Sustaining availability

You must select the one correct or best answer and mark it on your answer sheet. In some cases, the correct answer will be very obvious to you. In other cases, several answers will seem correct. In these instances, you must choose the best answer for the question asked. Watch for general, specific, universal, superset, and subset answer selections. In still other cases, none of the answers will seem correct. In these instances, you’ll need to select the least incorrect answer.

By the way, the correct answer for this question is C. Protecting human safety is always your first priority.

Advice on Taking the Exam

There are two key elements to the CISSP exam. First, you need to know the material from the 10 CBK domains. Second, you must have good test-taking skills. With 6 hours to complete a 250-question exam, you have just under 90 seconds for each question. Thus, it is important to work quickly, without rushing but without wasting time.

A key factor to keep in mind is that guessing is better than not answering a question. If you skip a question, you will not get credit. But if you guess, you have at least a 25-percent chance of improving your score. Wrong answers are not counted against you. So, near the end of the sixth hour, be sure an answer is selected for every line on the answer sheet.

You can write on the test booklet, but nothing written on it will count for or against your score. Use the booklet to make notes and keep track of your progress. We recommend circling each answer you select before you mark it on your answer sheet.

To maximize your test-taking activities, here are some general guidelines:

1. Answer easy questions first.
2. Skip harder questions and return to them later. Consider creating a column on the front cover of your testing booklet to keep track of skipped questions.
3. Eliminate wrong answers before selecting the correct one.
4. Watch for double negatives.
5. Be sure you understand what the question is asking.

Manage your time. You should try to keep up with about 50 questions per hour. This will leave you with about an hour to focus on skipped questions and double-check your work.

Be very careful to mark your answers on the correct question number on the answer sheet. The most common cause of failure is making a transference mistake from the test booklet to the answer sheet.

Study and Exam Preparation Tips

We recommend planning a month or so of nightly intensive study for the CISSP exam. Here are some suggestions to maximize your learning time; you can modify them as necessary based on your own learning habits:

• Take one or two evenings to read each chapter in this book and work through its review material.
• Take all the practice exams provided in the book and on the CD.
• Review the (ISC)2’s study guide from www.isc2.org.
• Use the flashcards found on the CD to reinforce your understanding of concepts.

I recommend spending about half of your study time reading and reviewing concepts and the other half taking practice exams.
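The pacing and guessing advice above is simple arithmetic; as a quick back-of-envelope check (the figures come straight from the text, this is not an official pacing tool):

```python
# Back-of-envelope pacing math for a 250-question, 6-hour exam.
QUESTIONS = 250
SECONDS_AVAILABLE = 6 * 60 * 60  # 6 hours

per_question = SECONDS_AVAILABLE / QUESTIONS
print(per_question)  # 86.4 seconds -> "just under 90 seconds" per question

# At the suggested pace of 50 questions per hour, time left over for review:
hours_at_pace = QUESTIONS / 50
print(6 - hours_at_pace)  # 1.0 hour for skipped questions and double-checking

# Guessing on a four-option question: a 1-in-4 expected chance of credit,
# and wrong answers carry no penalty, so a guess never lowers your score.
print(1 / 4)  # 0.25
```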
My students have found that the more time they spend taking practice exams, the better they retain the topics.

You might also consider visiting resources such as www.cccure.org, www.cissp.com, and other CISSP-focused websites.

Completing the Certification Process

Once you have been informed that you successfully passed the CISSP exam, there is one final step before you are actually awarded the CISSP certification. That final step is known as endorsement. Basically, this involves getting someone familiar with your work history to sign and submit an endorsement form on your behalf. The endorsement form is sent to you as an attachment on the e-mail notifying you of your achievement in passing the exam. Simply send the form to a manager, supervisor, or even another CISSP along with your resume. The endorser must review your resume, ensure that you have sufficient experience in the 10 CISSP domains, and then submit the signed form to (ISC)2 via fax or snail mail. You must have completed endorsement files with (ISC)2 within 90 days after receiving the confirmation-of-passing e-mail. Once (ISC)2 receives your endorsement form, the certification process will be completed and you will be sent a welcome packet via snail mail.

Post-CISSP Concentrations

(ISC)2 has added three concentrations to its certification lineup. These concentrations are offered only to CISSP certificate holders. The (ISC)2 has taken the concepts introduced on the CISSP exam and focused them on specific areas, namely architecture, management, and engineering.
The three concentrations are as follows:

• ISSAP (Information Systems Security Architecture Professional)
• ISSMP (Information Systems Security Management Professional)
• ISSEP (Information Systems Security Engineering Professional)

For more details about these concentration exams and certifications, please see the (ISC)2 website at www.isc2.org.

Notes on This Book’s Organization

This book is designed to cover each of the 10 CISSP Common Body of Knowledge (CBK) domains in sufficient depth to provide you with a clear understanding of the material. The main body of this book comprises 19 chapters. The first 9 domains are each covered by 2 chapters, and the final domain (Physical Security) is covered in Chapter 19. The domain/chapter breakdown is as follows:

Chapters 1 and 2: Access Control Systems and Methodology
Chapters 3 and 4: Telecommunications and Network Security
Chapters 5 and 6: Security Management Practices
Chapters 7 and 8: Applications and Systems Development Security
Chapters 9 and 10: Cryptography
Chapters 11 and 12: Security Architecture and Models
Chapters 13 and 14: Operations Security
Chapters 15 and 16: Business Continuity Planning (BCP) and Disaster Recovery Planning (DRP)
Chapters 17 and 18: Law, Investigation, and Ethics
Chapter 19: Physical Security

Each chapter includes elements to help you focus your studies and test your knowledge. These include Exam Essentials, key terms, and review questions. The Exam Essentials point out key topics to know for the exam. Unique terminology is presented in the chapter, and each key term is also defined in the glossary at the end of the book for your convenience. Review questions test your knowledge retention for the material covered in the chapter.

There is a CD included that offers many other study tools, including lengthy practice exams (all of the questions from each chapter plus over 300 additional unique questions) and a complete set of study flashcards.

The Elements of This Study Guide

You’ll see many recurring elements as you read through the study guide. Here’s a description of some of those elements.

Key Terms and Glossary: In every chapter, we’ve identified key terms, which are important for you to know. You’ll also find these key terms and their definitions in the glossary.

Summaries: The summary is a brief review of the chapter to sum up what was covered.

Exam Essentials: The Exam Essentials highlight topics that could appear on the exam in some form. While we obviously do not know exactly what will be included in a particular exam, this section reinforces significant concepts that are key to understanding the body of knowledge area and the test specs for the CISSP exam.

Chapter Review Questions: Each chapter includes 20 practice questions designed to measure your knowledge of the key ideas discussed in the chapter. After you finish each chapter, answer the questions; if some of your answers are incorrect, it’s an indication that you need to spend more time studying that topic. The answers to the practice questions can be found at the end of the chapter.

What’s on the CD?

We worked really hard to provide some essential tools to help you with your certification process. All of the following gear should be loaded on your workstation when studying for the test.

The Sybex Test Preparation Software

The test preparation software, made by experts at Sybex, prepares you for the CISSP exam. In this test engine, you will find all the review and assessment questions from the book, plus five additional bonus exams that appear exclusively on the CD. You can take the assessment test, test yourself by chapter, take the practice exams, or take a randomly generated exam comprising all the questions.

Electronic Flashcards for PCs and Palm Devices

Sybex’s electronic flashcards include hundreds of questions designed to challenge you further for the CISSP exam. Between the review questions, practice exams, and flashcards, you’ll have more than enough practice for the exam!

CISSP Study Guide in PDF

Sybex offers the CISSP Study Guide in PDF format on the CD so you can read the book on your PC or laptop. So if you travel and don’t want to carry a book, or if you just like to read from the computer screen, Acrobat Reader 5 is also included on the CD.

How to Use This Book and CD

This book has a number of features designed to guide your study efforts for the CISSP certification exam. It assists you by listing the CISSP body of knowledge at the beginning of each chapter and by ensuring that each topic is fully discussed within the chapter. The practice questions at the end of each chapter and the practice exams on the CD are designed to test your retention of the material you’ve read and to make you aware of areas in which you should spend additional study time. Here are some suggestions for using this book and CD:

1. Take the assessment test before you start reading the material.
This will give you an idea \nof the areas in which you need to spend additional study time, as well as those areas in \nwhich you may just need a brief refresher.\n2.\nAnswer the review questions after you’ve read each chapter; if you answer any incorrectly, \ngo back to the chapter and review the topic, or utilize one of the additional resources if you \nneed more information.\n3.\nDownload the flashcards to your hand-held device and review them when you have a few \nminutes during the day.\n4.\nTake every opportunity to test yourself. In addition to the assessment test and review ques-\ntions, there are five bonus exams on the CD. Take these exams without referring to the \nchapters and see how well you’ve done—go back and review any topics you’ve missed until \nyou fully understand and can apply the concepts.\nFinally, find a study partner if possible. Studying for, and taking, the exam with someone else \nwill make the process more enjoyable, and you’ll have someone to help you understand topics \nthat are difficult for you. You’ll also be able to reinforce your own knowledge by helping your \nstudy partner in areas where they are weak.\n" }, { "page_number": 36, "text": "xxx\nIntroduction\nAbout the Authors\nJames Michael Stewart, CISSP, has been writing and training for over 11 years, with a current \nfocus on security. He has taught dozens of CISSP training courses, not to mention numerous ses-\nsions on Windows security and the Certified Ethical Hacker certification. He is the author of \nseveral books and courseware sets on security certification, Microsoft topics, and network \nadministration. He is also a regular speaker at Interop and COMDEX. More information about \nMichael can be found at his website: www.impactonline.com.\nEd Tittel is a full-time freelance writer, trainer, and consultant specializing in matters related \nto information security, markup languages, and networking technologies. 
He’s a regular con-\ntributor to numerous TechTarget websites, is technology editor for Certification Magazine, and \nwrites an e-mail newsletter for CramSession called “Must Know News.”\nMike Chapple, CISSP, is an IT security professional with the University of Notre Dame. In \nthe past, he was chief information officer of Brand Institute and an information security \nresearcher with the National Security Agency and the U.S. Air Force. His primary areas of \nexpertise include network intrusion detection and access controls. Mike is a frequent contrib-\nutor to TechTarget’s SearchSecurity site, a technical editor for Information Security Magazine, \nand the author of several information security titles including Wiley’s GSEC Prep Guide and \nInformation Security Illuminated from Jones and Bartlett Publishers.\n" }, { "page_number": 37, "text": "Assessment Test\nxxxi\nAssessment Test\n1.\nIn what phase of the Capability Maturity Model for Software (SW-CMM) are quantitative mea-\nsures utilized to gain a detailed understanding of the software development process?\nA. Repeatable\nB. Defined\nC. Managed\nD. Optimizing\n2.\nYou are the security administrator of a large law firm. You have been asked to select a security \nmodel that supports your organization’s desire to ensure data confidentiality and integrity. You \nmust select one or more models that will protect data from internal and external attacks. What \nsecurity model(s) will you choose? (Choose all that apply.)\nA. Bell-LaPadula\nB. Take Grant Model\nC. Clark-Wilson\nD. TCSEC\n3.\nWhy are military and intelligence attacks among the most serious computer crimes?\nA. The use of information obtained can have far-reaching detrimental strategic effect on \nnational interests in an enemy’s hands.\nB. Military information is stored on secure machines, so a successful attack can be embarrassing.\nC. The long-term political use of classified information can impact a country’s leadership.\nD. 
The military and intelligence agencies have ensured that the laws protecting their informa-\ntion are the most severe.\n4.\nWhat is the length of a message digest produced by the MD5 algorithm?\nA. 64 bits\nB. 128 bits\nC. 256 bits\nD. 384 bits\n5.\nWhich of the following is most likely to detect DoS attacks?\nA. Host-based IDS\nB. Network-based IDS\nC. Vulnerability scanner\nD. Penetration testing\n" }, { "page_number": 38, "text": "xxxii\nAssessment Test\n6.\nHow is annualized loss expectancy (ALE) calculated?\nA. SLE*AS (single loss expectancy * asset value)\nB. AS*EF (asset value * exposure factor)\nC. ARO*V (annualized rate of occurrence * vulnerability)\nD. SLE*ARO (single loss expectancy * annualized rate of occurrence\n7.\nAt what height and form will a fence deter determined intruders?\nA. 3- to 4-feet high chain link\nB. 6- to 7-feet high wood\nC. 8-feet high with 3 strands of barbed wire\nD. 4- to 5-feet high concrete\n8.\nA VPN can be established over which of the following?\nA. Wireless LAN connection\nB. Remote access dial-up connection\nC. WAN link\nD. All of the above\n9.\nWhat is the Biba access control model primarily based upon?\nA. Identity\nB. Analog\nC. Military\nD. Lattice\n10. Which one of the following database backup techniques requires the greatest expenditure of funds?\nA. Transaction logging\nB. Remote journaling\nC. Electronic vaulting\nD. Remote mirroring\n11. What is the value of the logical operation shown here?\nX: 0 1 1 0 1 0\nY: 0 0 1 1 0 1\n___________________________\nX ∨ Y: ?\nA. 0 1 1 1 1 1\nB. 0 1 1 0 1 0\nC. 0 0 1 0 0 0\nD. 0 0 1 1 0 1\n" }, { "page_number": 39, "text": "Assessment Test\nxxxiii\n12. Which one of the following security modes does not require that a user have a valid security \nclearance for all information processed by the system?\nA. Dedicated mode\nB. System high mode\nC. Compartmented mode\nD. Multilevel mode\n13. You are the security administrator for an international shipping company. 
You have been asked \nto evaluate the security of a new shipment tracking system for your London office. It is impor-\ntant to evaluate the security features and assurance of the system separately to compare it to \nother systems that management is considering. What evaluation criteria should you use (assume \nthe year is 1998)?\nA. TCSEC\nB. ITSEC\nC. The Blue Book\nD. IPSec\n14. What is the last phase of the TCP/IP three-way handshake sequence?\nA. SYN packet\nB. ACK packet\nC. NAK packet\nD. SYN/ACK packet\n15. Which of the following is a requirement of change management?\nA. Changes must comply with Internet standards.\nB. All changes must be capable of being rolled back.\nC. Upgrade strategies must be revealed over the Internet.\nD. The audit reports of change management should be accessible to all users.\n16. Which of the following is a procedure designed to test and perhaps bypass a system’s security \ncontrols?\nA. Logging usage data\nB. War dialing\nC. Penetration testing\nD. Deploying secured desktop workstations\n17.\nAt which layer of the OSI model does a router operate?\nA. Network layer\nB. Layer 1\nC. Transport layer\nD. Layer 5\n" }, { "page_number": 40, "text": "xxxiv\nAssessment Test\n18. Which of the following is considered a denial of service attack?\nA. Pretending to be a technical manager over the phone and asking a receptionist to change \ntheir password\nB. While surfing the Web, sending to a web server a malformed URL that causes the system to \nuse 100 percent of the CPU to process an endless loop\nC. Intercepting network traffic by copying the packets as they pass through a specific subnet\nD. Sending message packets to a recipient who did not request them simply to be annoying\n19. Audit trails, logs, CCTV, intrusion detection systems, antivirus software, penetration testing, \npassword crackers, performance monitoring, and cyclic redundancy checks (CRCs) are exam-\nples of what?\nA. Directive controls\nB. Preventive controls\nC. 
Detective controls\nD. Corrective controls\n20. Which one of the following vulnerabilities would best be countered by adequate parameter checking?\nA. Time-of-check-to-time-of-use\nB. Buffer overflow\nC. SYN flood\nD. Distributed denial of service\n21. What technology allows a computer to harness the power of more than one CPU?\nA. Multitasking\nB. Multiprocessing\nC. Multiprogramming\nD. Multithreading\n22. What type of backup stores all files modified since the time of the most recent full or incremental \nbackup?\nA. Full backup\nB. Incremental backup\nC. Partial backup\nD. Differential backup\n23. What law allows ISPs to voluntarily provide government investigators with a large range of user \ninformation without a warrant?\nA. Electronic Communications Privacy Act\nB. Gramm-Leach-Bliley Act\nC. USA Patriot Act\nD. Privacy Act of 1974\n" }, { "page_number": 41, "text": "Assessment Test\nxxxv\n24. What type of detected incident allows the most time for an investigation?\nA. Compromise\nB. Denial of service\nC. Malicious code\nD. Scanning\n25. Auditing is a required factor to sustain and enforce what?\nA. Accountability\nB. Confidentiality\nC. Accessibility\nD. Redundancy\n26. Which type of firewall automatically adjusts its filtering rules based on the content of the traffic \nof existing sessions?\nA. Static packet-filtering\nB. Application-level gateway\nC. Stateful inspection\nD. Dynamic packet-filtering\n27. Which one of the following is a layer of the ring protection scheme that is not normally imple-\nmented in practice?\nA. Layer 0\nB. Layer 1\nC. Layer 3\nD. Layer 4\n28. In what type of cipher are the letters of the plaintext message rearranged to form the ciphertext?\nA. Substitution cipher\nB. Block cipher\nC. Transposition cipher\nD. One-time pad\n29. What is the formula used to compute the ALE?\nA. ALE = AV*EF\nB. ALE = ARO*EF\nC. ALE = AV*ARO\nD. ALE = EF*ARO\n" }, { "page_number": 42, "text": "xxxvi\nAssessment Test\n30. 
Which of the following is the principle that objects retain their veracity and are only intention-\nally modified by authorized subjects?\nA. Privacy\nB. Authentication\nC. Integrity\nD. Data hiding\n31. E-mail is the most common delivery vehicle for which of the following?\nA. Viruses\nB. Worms\nC. Malicious code\nD. All of the above\n32. What type of physical security controls are access controls, intrusion detection, alarms, CCTV, \nmonitoring, HVAC, power supplies, and fire detection and suppression?\nA. Technical\nB. Administrative\nC. Physical\nD. Preventative\n33. In the United States, how are the administrative determinations of federal agencies promulgated?\nA. Code of Federal Regulations\nB. United States Code\nC. Supreme Court decisions\nD. Administrative declarations\n34. What is the first step of the Business Impact Assessment process?\nA. Identification of priorities\nB. Likelihood assessment\nC. Risk identification\nD. Resource prioritization\n35. If Renee receives a digitally signed message from Mike, what key does she use to verify that the \nmessage truly came from Mike?\nA. Renee’s public key\nB. Renee’s private key\nC. Mike’s public key\nD. Mike’s private key\n" }, { "page_number": 43, "text": "Assessment Test\nxxxvii\n36. The “something you are” authentication factor is also known as what?\nA. Type 1\nB. Type 2\nC. Type 3\nD. Type 4\n37. What is the primary goal of risk management?\nA. To produce a 100-percent risk-free environment\nB. To guide budgetary decisions\nC. To reduce risk to an acceptable level\nD. To provide an asset valuation for insurance\n" }, { "page_number": 44, "text": "xxxviii\nAnswers to Assessment Test\nAnswers to Assessment Test\n1.\nC. The Managed phase of the SW-CMM involves the use of quantitative development metrics. \nThe Software Engineering Institute (SEI) defines the key process areas for this level as Quanti-\ntative Process Management and Software Quality Management. 
For more information, please \nsee Chapter 7.\n2.\nA, C. Because your organization needs to ensure confidentiality, you should choose the Bell-\nLaPadula model. To ensure the integrity of your data, you should also use the Clark-Wilson \nmodel, which addresses separation of duties. This feature offers better protection from internal \nand external attacks. For more information, please see Chapter 12.\n3.\nA. The purpose of a military and intelligence attack is to acquire classified information. The det-\nrimental effect of using such information could be nearly unlimited in the hands of an enemy. \nAttacks of this type are launched by very sophisticated attackers. It is often very difficult to ascer-\ntain what documents were successfully obtained. So when a breach of this type occurs, you some-\ntimes cannot know the full extent of the damage. For more information, please see Chapter 18.\n4.\nB. The MD5 algorithm produces a 128-bit message digest for any input. For more information, \nplease see Chapter 10.\n5.\nB. Network-based IDSs are usually able to detect the initiation of an attack or the ongoing \nattempts to perpetrate an attack (including DoS). They are, however, unable to provide infor-\nmation about whether an attack was successful or which specific systems, user accounts, files, \nor applications were affected. Host-based IDSs have some difficulty with detecting and tracking \ndown DoS attacks. Vulnerability scanners don’t detect DoS attacks; they test for possible vul-\nnerabilities. Penetration testing may cause a DoS or test for DoS vulnerabilities, but it is not a \ndetection tool. For more information, please see Chapter 2.\n6.\nD. Annualized loss expectancy (ALE) is the possible yearly cost of all instances of a specific \nrealized threat against a specific asset. The ALE is calculated using the formula SLE*ARO. For \nmore information, please see Chapter 6.\n7.\nC. 
A fence that is 8 feet high with 3 strands of barbed wire deters determined intruders. For \nmore information, please see Chapter 19.\n8.\nD. A VPN link can be established over any other network communication connection. This \ncould be a typical LAN cable connection, a wireless LAN connection, a remote access dial-up \nconnection, a WAN link, or even an Internet connection used by a client for access to the office \nLAN. For more information, please see Chapter 4.\n9.\nD. Biba is also a state machine model based on a classification lattice with mandatory access \ncontrols. For more information, please see Chapter 1.\n10. D. Remote mirroring maintains a live database server at the remote site and comes at the high-\nest cost. For more information, please see Chapter 16.\n11. A. The ∨ symbol represents the OR function, which is true when one or both of the input bits \nare true. For more information, please see Chapter 9.\n" }, { "page_number": 45, "text": "Answers to Assessment Test\nxxxix\n12. D. In multilevel security mode, some users do not have a valid security clearance for all infor-\nmation processed by the system. For more information, please see Chapter 11.\n13. B. ITSEC was developed in Europe for evaluating systems. Although TCSEC (also called the \nOrange Book) would satisfy the evaluation criteria, only ITSEC evaluates functionality and \nassurance separately. For more information, please see Chapter 12.\n14. B. The SYN packet is first sent from the initiating host to the destination host. The destination \nhost then responds with a SYN/ACK packet. The initiating host sends an ACK packet and the \nconnection is then established. For more information, please see Chapter 8.\n15. B. One of the requirements of change management is that all changes must be capable of being \nrolled back. For more information, please see Chapter 5.\n16. C. Penetration testing is the attempt to bypass security controls to test overall system security. 
\nFor more information, please see Chapter 14.\n17.\nA. Network hardware devices, including routers, function at layer 3, the Network layer. For \nmore information, please see Chapter 3.\n18. B. Not all instances of DoS are the result of a malicious attack. Errors in coding OSs, services, \nand applications have resulted in DoS conditions. Some examples of this include a process failing \nto release control of the CPU or a service consuming system resources out of proportion to the \nservice requests it is handling. Social engineering and sniffing are typically not considered DoS \nattacks. For more information, please see Chapter 2.\n19. C. Examples of detective controls are audit trails, logs, CCTV, intrusion detection systems, \nantivirus software, penetration testing, password crackers, performance monitoring, and CRCs. \nFor more information, please see Chapter 13.\n20. B. Parameter checking is used to prevent the possibility of buffer overflow attacks. For more \ninformation, please see Chapter 8.\n21. B. Multiprocessing computers use more than one processor, in either a symmetric multipro-\ncessing (SMP) or massively parallel processing (MPP) scheme. For more information, please see \nChapter 11.\n22. D. Differential backups store all files that have been modified since the time of the most recent \nfull or incremental backup. For more information, please see Chapter 16.\n23. C. The USA Patriot Act granted broad new powers to law enforcement, including the solicita-\ntion of voluntary ISP cooperation. For more information, please see Chapter 17.\n24. D. Scanning incidents are generally reconnaissance attacks. The real damage to a system comes \nin the subsequent attacks, so you may have some time to react if you detect the scanning attack \nearly. For more information, please see Chapter 18.\n25. A. Auditing is a required factor to sustain and enforce accountability. For more information, \nplease see Chapter 14.\n26. D. 
Dynamic packet-filtering firewalls enable real-time modification of the filtering rules based \non traffic content. For more information, please see Chapter 3.\n" }, { "page_number": 46, "text": "Chapter\n1\nAccountability and \nAccess Control\nTHE CISSP EXAM TOPICS COVERED IN THIS \nCHAPTER INCLUDE:\n\u0001 Accountability\n\u0001 Access Control Techniques\n\u0001 Access Control Administration\n\u0001 Identification and Authentication Techniques\n\u0001 Access Control Methodologies and Implementation\n" }, { "page_number": 47, "text": "The Access Control Systems and Methodology domain of the \nCommon Body of Knowledge (CBK) for the CISSP certification \nexam deals with topics and issues related to the monitoring, iden-\ntification, and authorization of granting or restricting user access to resources. Generally, an \naccess control is any hardware, software, or organizational administrative policy or procedure \nthat grants or restricts access, monitors and records attempts to access, identifies users attempt-\ning to access, and determines whether access is authorized.\nIn this chapter and in Chapter 2, “Attacks and Monitoring,” the Access Control Systems \nand Methodology domain is discussed. Be sure to read and study the materials from both \nchapters to ensure complete coverage of the essential material for this domain of the CISSP \ncertification exam.\nAccess Control Overview\nControlling access to resources is one of the central themes of security. Access control addresses \nmore than just controlling which users can access which files or services. Access control is about \nthe relationships between subjects and objects. The transfer of information from an object to a \nsubject is called access. However, access is not just a logical or technical concept; don’t forget \nabout the physical realm where access can be disclosure, use, or proximity. 
A foundational prin-\nciple of access control is to deny access by default if access is not granted specifically to a subject.\nSubjects are active entities that, through the exercise of access, seek information about or \ndata from passive entities, or objects. A subject can be a user, program, process, file, computer, \ndatabase, and so on. An object can be a file, database, computer, program, process, file, printer, \nstorage media, and so on. The subject is always the entity that receives information about or \ndata from the object. The subject is also the entity that alters information about or data stored \nwithin the object. The object is always the entity that provides or hosts the information or data. \nThe roles of subject and object can switch as two entities, such as a program and a database or \na process and a file, communicate to accomplish a task.\nTypes of Access Control\nAccess controls are necessary to protect the confidentiality, integrity, and availability of objects \n(and by extension, their information and data). The term access control is used to describe a \nbroad range of controls, from forcing a user to provide a valid username and password to log \non to preventing users from gaining access to a resource outside of their sphere of access.\n" }, { "page_number": 48, "text": "Access Control Overview\n3\nAccess controls can be divided into the following seven categories of function or purpose. \nYou should notice that some security mechanisms can be labeled with multiple function or pur-\npose categories.\nPreventative access control\nA preventative access control (or preventive access control) is \ndeployed to stop unwanted or unauthorized activity from occurring. 
Examples of preventative \naccess controls include fences, locks, biometrics, mantraps, lighting, alarm systems, separation \nof duties, job rotation, data classification, penetration testing, access control methods, encryp-\ntion, auditing, presence of security cameras or closed circuit television (CCTV), smart cards, \ncallback, security policies, security awareness training, and antivirus software.\nDeterrent access control\nA deterrent access control is deployed to discourage the violation of \nsecurity policies. A deterrent control picks up where prevention leaves off. The deterrent doesn’t \nstop with trying to prevent an action; instead, it goes further to exact consequences in the event \nof an attempted or successful violation. Examples of deterrent access controls include locks, \nfences, security badges, security guards, mantraps, security cameras, trespass or intrusion \nalarms, separation of duties, work task procedures, awareness training, encryption, auditing, \nand firewalls.\nDetective access control\nA detective access control is deployed to discover unwanted or unau-\nthorized activity. Often detective controls are after-the-fact controls rather than real-time con-\ntrols. Examples of detective access controls include security guards, guard dogs, motion detectors, \nrecording and reviewing of events seen by security cameras or CCTV, job rotation, mandatory \nvacations, audit trails, intrusion detection systems, violation reports, honey pots, supervision and \nreviews of users, incident investigations, and intrusion detection systems.\nCIA Triad\nThe essential security principles of confidentiality, integrity, and availability are often \nreferred to as the CIA Triad. All security controls must address these principles. These \nthree security principles serve as common threads throughout the CISSP CBK. 
Each domain \naddresses these principles in unique ways, so it is important to understand them both in gen-\neral terms and within each specific domain:\n\u0002\nConfidentiality is the principle that objects are not disclosed to unauthorized subjects.\n\u0002\nIntegrity is the principle that objects retain their veracity and are intentionally modified by \nauthorized subjects only.\n\u0002\nAvailability is the principle that authorized subjects are granted timely access to objects \nwith sufficient bandwidth to perform the desired interaction.\nDifferent security mechanisms address these three principles in different ways and offer vary-\ning degrees of support or application of these principles. Objects must be properly classified \nand prioritized so proper security access controls can be deployed. These and many other \nissues related to the CIA Triad are discussed throughout this book.\n" }, { "page_number": 49, "text": "4\nChapter 1\n\u0002 Accountability and Access Control\nCorrective access control\nA corrective access control is deployed to restore systems to normal \nafter an unwanted or unauthorized activity has occurred. Usually corrective controls are simple \nin nature, such as terminating access or rebooting a system. Corrective controls have only a min-\nimal capability to respond to access violations. Examples of corrective access controls include \nintrusion detection systems, antivirus solutions, alarms, mantraps, business continuity plan-\nning, and security policies.\nRecovery access control\nA recovery access control is deployed to repair or restore resources, \nfunctions, and capabilities after a violation of security policies. Recovery controls have a more \nadvanced or complex capability to respond to access violations than a corrective access control. \nFor example, a recovery access control can repair damage as well as stop further damage. 
Exam-\nples of recovery access controls include backups and restores, fault tolerant drive systems, server \nclustering, antivirus software, and database shadowing.\nCompensation access control\nA compensation access control is deployed to provide various \noptions to other existing controls to aid in the enforcement and support of a security policy. \nExamples of compensation access controls include security policy, personnel supervision, mon-\nitoring, and work task procedures.\nCompensation controls can also be considered to be controls used in place of or instead of more \ndesirable or damaging controls. For example, if a guard dog cannot be used because of the prox-\nimity of a residential area, a motion detector with a spotlight and a barking sound playback \ndevice can be used.\nDirective access control\nA directive access control is deployed to direct, confine, or control the \nactions of subjects to force or encourage compliance with security policies. Examples of direc-\ntive access controls include security guards, guard dogs, security policy, posted notifications, \nescape route exit signs, monitoring, supervising, work task procedures, and awareness training.\nAccess controls can be further categorized by how they are implemented. In this case, the cat-\negories are administrative, logical/technical, or physical:\nAdministrative access controls\nAdministrative access controls are the policies and procedures \ndefined by an organization’s security policy to implement and enforce overall access control. \nAdministrative access controls focus on two areas: personnel and business practices (e.g., people \nand policies). 
Examples of administrative access controls include policies, procedures, hiring \npractices, background checks, data classification, security training, vacation history, reviews, \nwork supervision, personnel controls, and testing.\nLogical/technical access controls\nLogical access controls and technical access controls are the \nhardware or software mechanisms used to manage access to resources and systems and provide \nprotection for those resources and systems. Examples of logical or technical access controls \ninclude encryption, smart cards, passwords, biometrics, constrained interfaces, access control \nlists (ACLs), protocols, firewalls, routers, intrusion detection systems, and clipping levels.\nThe words logical and technical may be used interchangeably within this concept.\n" }, { "page_number": 50, "text": "Access Control Overview\n5\nPhysical access controls\nPhysical access controls are the physical barriers deployed to prevent \ndirect contact with systems or portions of a facility. Examples of physical access controls \ninclude guards, fences, motion detectors, locked doors, sealed windows, lights, cable protec-\ntion, laptop locks, swipe cards, guard dogs, video cameras, mantraps, and alarms.\nAccess Control in a Layered Environment\nNo single access control mechanism is ever deployed on its own. In fact, combining various types \nof access controls is the only means by which a reasonably secure environment can be developed. \nOften multiple layers or levels of access controls are deployed to provide layered security or \ndefense in depth. This idea is described by the notion of concentric circles of protection, which \nputs forth the concept of surrounding your assets and resources with logical circles of security pro-\ntection. Thus, intruders or attackers would need to overcome multiple layers of defenses to reach \nthe protected assets. 
Layered security or defense in depth is considered a more logical approach to security than a traditional fortress mentality. In a fortress mentality security approach, a single giant master wall is built around the assets, like the massive rock walls of a castle fortress. The major flaw in such an approach is that massive structures often have minor weaknesses and flaws; are difficult if not impossible to reconfigure, adjust, or move; and are easily seen and avoided by would-be attackers (i.e., they find easier ways into the protected area).\nIn a layered security or concentric circles of protection deployment, your assets are surrounded by a layer of protection provided by administrative access controls, which in turn is surrounded by a layer of protection consisting of logical or technical access controls, which is finally surrounded by a layer of protection that includes physical access controls. This concept of defense in depth highlights two important points. First, the security policy of an organization ultimately provides the first or innermost layer of defense for your assets. Without a security policy, there is no real security that can be trusted. Security policies are one element of administrative access controls. Second, people are your last line of defense. People or personnel are the other focus of administrative access control. Only with proper training and education will your personnel be able to implement, comply with, and support the security elements defined in your security policy.\nThe Process of Accountability\nOne important purpose of security is to be able to hold people accountable for the activities that their online personas (i.e., their user accounts) perform within the digital world of the computer network. The first step in this process is identifying the subject.
In fact, there are several steps \nleading up to being able to hold a person accountable for online actions: identification, authen-\ntication, authorization, auditing, and accountability.\nIdentification\nIdentification is the process by which a subject professes an identity and accountability is initi-\nated. A user providing a username, a logon ID, a personal identification number (PIN), or a \nsmart card represents the identification process. Providing a process ID number also represents \n" }, { "page_number": 51, "text": "6\nChapter 1\n\u0002 Accountability and Access Control\nthe identification process. Once a subject has identified itself, the identity is accountable for any \nfurther actions by that subject. Information technology (IT) systems track activity by identities, \nnot by the subjects themselves. A computer doesn’t know one human from another, but it does \nknow that your user account is different from all other user accounts.\nAuthentication\nAuthentication is the process of verifying or testing that the claimed identity is valid. Authen-\ntication requires that the subject provide additional information that must exactly correspond \nto the identity indicated. The most common form of authentication is a password, which falls \nunder the first of three types of information that can be used for authentication:\nType 1\nA Type 1 authentication factor is something you know. It is any string of characters \nthat you have memorized and can reproduce on a keyboard when prompted. Examples of this \nfactor include a password, personal identification number (PIN), lock combination, pass phrase, \nmother’s maiden name, favorite color, and so on.\nType 2\nA Type 2 authentication factor is something you have. It is a physical device that you \nare in possession of and must have on your person at the time of authentication. Examples of \nthis factor include a smart card, token device, memory card, USB drive, and so on. 
This can also include your physical location, referred to as the “somewhere you are” factor.\nThe main difference between a memory card and a smart card is that a memory card is only used to store information, while a smart card has the ability to process data. We’ll discuss these security methods in more detail in Chapter 19, “Physical Security Requirements.”\nType 3\nA Type 3 authentication factor is something you are. It is a body part or a physical characteristic of your person. Examples of this factor include fingerprints, voice print, retina pattern, iris pattern, face shape, palm topology, hand geometry, and so on. (We’ll discuss these in more detail in just a moment.)\nEach type of authentication factor is alike in one respect: a single successful attack is enough to defeat it. The difficulty of mounting that attack, however, increases from one type to the next. A Type 3 factor is the most difficult of the three to breach; defeating it requires fabricating a convincing duplicate (such as a gummy-mold fingerprint). A Type 2 factor, the next most difficult to breach, can be overcome by physical theft of the device, and a Type 1 factor can be overcome by a password cracker. Thus, a Type 3 factor is somewhat more secure than a Type 2 factor, which is in turn more secure than a Type 1 factor.\n“Something” and “Somewhere”\nIn addition to these three commonly recognized factors, there are at least two others. One is called “something you do,” such as writing a signature, typing out a pass phrase (keyboard dynamics), or saying a phrase.
Something you do is often included in the “something you are,” or Type 3, category.\n" }, { "page_number": 52, "text": "Access Control Overview\n7\nAnother factor, mentioned earlier, is called “somewhere you are,” such as the computer terminal from which you logged in or the phone number (identified by caller ID) or country (identified by your IP address) from which you dialed up. Controlling access by physical location forces the subject to be present rather than connecting remotely. “Somewhere you are” is often included in the “something you have,” or Type 2, category.\nLogical Location\nLogical location can combine the ideas of “somewhere you are,” “something you have,” and “something you know.” A logical location access control restricts access based upon some form of logical identification, such as IP address, MAC address, client type, or protocol used. However, it should be noted that logical location control should not be the only factor used because any type of address information can be spoofed with hacking tools.\nAccess can further be restricted by date and time of day or by transaction type. The former prevents access except within defined time periods. The latter is a type of content- or context-dependent control where access is dynamic based on the transactions being attempted by the subject.\nMultiple-Factor Authentication\nTwo-factor authentication occurs when two different factors are required to provide authentication. For example, when cashing a check at the grocery store, you often have to provide your driver’s license (something you have) and your phone number (something you know). Strong authentication is simply any authentication that requires two or more factors but not necessarily different factors.
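The difference between merely using two factors and using two factors of different types can be sketched in code. This is an illustrative example only; the factor names and the helper function are hypothetical and not part of the CISSP CBK:

```python
# Sketch: count how many *distinct* factor types an authentication
# attempt uses. The factor names below are hypothetical examples.
FACTOR_TYPES = {
    "password": 1,     # Type 1: something you know
    "pin": 1,
    "token": 2,        # Type 2: something you have
    "smart_card": 2,
    "fingerprint": 3,  # Type 3: something you are
}

def distinct_factor_types(factors):
    """Return the number of distinct factor types among the offered factors."""
    return len({FACTOR_TYPES[f] for f in factors})

# Two "something you know" items together still count as one factor type,
# so a single password-cracking attack could defeat both.
assert distinct_factor_types(["password", "pin"]) == 1

# A password plus a token is true two-factor authentication; defeating it
# requires both a password crack and a physical theft.
assert distinct_factor_types(["password", "token"]) == 2
```

The set comprehension is the point of the sketch: security grows with the number of distinct attack methods required, not with the raw count of credentials presented.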
However, as a general rule, when different factors are employed, the result-\nant authentication is more secure.\nThe concept behind two-factor authentication is that when two of the same factors are used \ntogether, the strength of the system is no greater than just one of the factors used alone. More \nspecifically, the same attack that could steal or obtain one instance of the factor could obtain \nall instances of the factor. For example, using two passwords together is no more secure than \nusing a single password because a password cracking attack could discover both with a single \nsuccessful attack. However, when two or more different factors are employed, two or more dif-\nferent types or methods of attack must be successful to collect all relevant authentication ele-\nments. For example, if a password, a token, and a biometric factor are all used for a single \nauthentication, then a password crack, a physical theft, and a biometric duplication attack must \nall be successful simultaneously to gain entry to the system.\nOnce the logon credentials of the offered identity and the authentication factor(s) are pro-\nvided to the system, they are checked against the database of identities on the system. If the iden-\ntity is located and the correct authentication factor(s) have been provided, then the subject has \nbeen authenticated.\nAuthorization\nOnce a subject is authenticated, its access must be authorized. The process of authorization \nensures that the requested activity or object access is possible given the rights and privileges \nassigned to the authenticated identity (which we will refer to as the subject from this point for-\nward). Authorization indicates who is trusted to perform specific operations. 
In most cases, the \n" }, { "page_number": 53, "text": "8\nChapter 1\n\u0002 Accountability and Access Control\nsystem evaluates an access control matrix that compares the subject, the object, and the \nintended activity (we discuss the access control matrix in greater detail in Chapter 11 “Princi-\nples of Computer Design”). If the specific action is allowed, the subject is authorized. If the spe-\ncific action is not allowed, the subject is not authorized.\nKeep in mind that just because a subject has been identified and authenticated, it does not \nautomatically mean they have been authorized. It is possible for a subject to be logged onto a \nnetwork (i.e., identified and authenticated) but blocked from accessing a file or printing to a \nprinter (i.e., by not being authorized to perform that activity). Most network users are autho-\nrized to perform only a limited number of activities on a specific collection of resources. Iden-\ntification and authentication are “all or nothing” aspects of access control. Authorization has \na wide range of variations between all and nothing for each individual subject or object within \nthe environment. A user may be able to read a file but not delete it. A user may be able to print \na document but not alter the print queue. A user may be able to log onto a system but not access \nany resources.\nIt is important to understand the differences between identification, authentication, and \nauthorization. Although they are similar and are essential to all security mechanisms, they are \ndistinct and must not be confused.\nAuditing and Accountability\nAuditing is the process by which the online activities of user accounts and processes are tracked \nand recorded. Auditing produces audit trails. Audit trails can be used to reconstruct events and to \nverify whether or not security policy or authorization was violated. 
By comparing the contents of \naudit trails with authorization against authenticated user accounts, the people associated with \nuser accounts can be held accountable for the significant online actions of those user accounts.\nAccording to the National Institute of Standards and Technology Minimum Security \nRequirements (MSR) for Multi-User Operating Systems (NISTIR 5153) document, audit data \nrecording must comply with the following requirements:\n\u0002\nThe system shall provide a mechanism for generating a security audit trail that contains \ninformation to support after-the-fact investigation of loss or impropriety and appropriate \nmanagement response.\n\u0002\nThe system shall provide end-to-end user accountability for all security relevant events.\n\u0002\nThe system shall protect the security audit trail from unauthorized access.\n\u0002\nThe system shall provide a mechanism to dynamically control, during normal system oper-\nation, the types of events recorded.\n\u0002\nThe system shall protect the audit control mechanisms from unauthorized access.\n\u0002\nThe system shall, by default, cause a record to be written to the security audit trail for \n[numerous specific security-related] events.\n\u0002\nThe system shall provide a privileged mechanism to enable or disable the recording of other \nevents into the security audit trail.\n\u0002\nFor each recorded event, the audit record shall identify [several specific datapoints] at a \nminimum.\n" }, { "page_number": 54, "text": "Identification and Authentication Techniques\n9\n\u0002\nThe character strings input as a response to a password challenge shall not be recorded in \nthe security audit trail.\n\u0002\nThe audit control mechanism shall provide an option to enable or disable the recording of \ninvalid user IDs during failed user authentication attempts.\n\u0002\nAudit control data (e.g., audit event masks) shall survive system restarts.\n\u0002\nThe system shall provide a mechanism for automatic 
copying of security audit trail files to \nan alternative storage area after a customer-specifiable period of time.\n\u0002\nThe system shall provide a mechanism for automatic deletion of security audit trail files \nafter a customer-specifiable period of time.\n\u0002\nThe system shall allow site control of the procedure to be invoked when audit records are \nunable to be recorded.\n\u0002\nThe system shall provide tools to monitor the activities (i.e., capture the keystrokes) of spe-\ncific terminals or network connections in real time.\nThis list was taken directly from the NISTIR 5153 document. It has been \nedited for length. We have provided only a small excerpt of the entire mate-\nrial. To view all of the details of this MSR, see the NISTIR 5153 document at \nhttp://csrc.nist.gov.\nAn organization’s security policy can be properly enforced only if accountability is main-\ntained. In other words, security can be maintained only if subjects are held accountable for their \nactions. Effective accountability relies upon the capability to prove a subject’s identity and track \ntheir activities. Thus, accountability builds on the concepts of identification, authentication, \nauthorization, access control, and auditing.\nIdentification and Authentication \nTechniques\nIdentification is a fairly straightforward concept. A subject must provide an identity to a system \nto start the authentication, authorization, and accountability processes. Providing an identity \ncan be typing in a username, swiping a smart card, waving a token device, speaking a phrase, \nor positioning your face, hand, or finger for a camera or scanning device. Without an identity, \na system has no way to correlate an authentication factor with the subject. A subject’s identity \nis typically considered to be public information.\nAuthentication verifies the identity of the subject by comparing one or more factors against \nthe database of valid identities (i.e., user accounts). 
The authentication factor used to verify \nidentity is typically considered to be private information. The ability of the subject and system \nto maintain the secrecy of the authentication factors for identities directly reflects the level of \nsecurity of that system.\n" }, { "page_number": 55, "text": "10\nChapter 1\n\u0002 Accountability and Access Control\nIdentification and authentication are always together as a single two-step process. Providing \nan identity is step one and providing the authentication factor(s) is step two. Without both, a \nsubject cannot gain access to a system—neither element alone is useful.\nThere are several types of authentication information a subject can provide (e.g., something \nyou know, something you have, etc.). Each authentication technique or factor has its unique \nbenefits and drawbacks. Thus it is important to evaluate each mechanism in light of the envi-\nronment in which it will be deployed to determine viability.\nPasswords\nThe most common authentication technique is the use of passwords, but they are also consid-\nered to be the weakest form of protection. Passwords are poor security mechanisms for several \nreasons, including the following:\n\u0002\nUsers typically choose passwords that are easy to remember and therefore easy to guess \nor crack.\n\u0002\nRandomly generated passwords are hard to remember, thus many users write them down.\n\u0002\nPasswords are easily shared, written down, and forgotten.\n\u0002\nPasswords can be stolen through many means, including observation, recording and play-\nback, and security database theft.\n\u0002\nPasswords are often transmitted in cleartext or with easily broken encryption protocols.\n\u0002\nPassword databases are often stored in publicly accessible online locations.\n\u0002\nShort passwords can be discovered quickly in brute force attacks.\nPassword Selection\nPasswords can be effective if selected intelligently and managed properly. 
There are two types of passwords: static and dynamic. Static passwords always remain the same. Dynamic passwords change after a specified interval of time or use. One-time passwords or single-use passwords are a variant of dynamic passwords that are changed every time they are used. One-time passwords are considered the strongest type of password, at least in concept. However, humans cannot memorize an endless series of lengthy random character strings, each of which can be used for only a single attempt before it expires. Thus, one-time passwords are often implemented as Type 2 factors using a processing device known as a token (see later in this chapter for more details).\nAs the importance of maintaining security increases, so does the need to change passwords more frequently. The longer a password remains static and the more often the same password is used, the more likely it will be compromised or discovered.\nIn some environments, the initial passwords for user accounts are automatically generated. Often the generated password is a form of composition password. A composition password is a password constructed from two or more unrelated words joined together with a number or symbol in between. Composition passwords are easy for computers to generate, but they should not be used for extended periods of time because they are vulnerable to password guessing\n" }, { "page_number": 56, "text": "Identification and Authentication Techniques\n11\nattacks. If the algorithm for computer-generated passwords is discovered, all passwords created by the system are in jeopardy of being compromised.\nA password mechanism that is slightly more effective than a basic password is a pass phrase. A pass phrase is a string of characters usually much longer than a password. Once the pass phrase is entered, the system converts it into a virtual password for use by the authentication process.
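One plausible way a system can convert a pass phrase into a virtual password is with a one-way hash. The following is a minimal sketch that assumes a SHA-256-based derivation; real systems vary, and many use salted, iterated schemes such as PBKDF2 instead:

```python
import hashlib

def virtual_password(pass_phrase: str) -> str:
    """Derive a fixed-length 'virtual password' from a pass phrase.

    An unsalted SHA-256 hash is used here purely for illustration; the
    actual derivation mechanism differs from system to system.
    """
    return hashlib.sha256(pass_phrase.encode("utf-8")).hexdigest()

# However long the pass phrase is, the derived value has a fixed length,
# and that fixed-length value is what the authentication process compares.
vp = virtual_password("She $ell$ C shells ByE the c-shor.")
assert len(vp) == 64  # a SHA-256 hex digest is always 64 characters
```

The benefit of the conversion is that the user remembers a long, natural-feeling phrase while the system works with a uniform credential of fixed size.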
Pass phrases are often modified natural language sentences, which simplifies memorization. Here's an example: "She $ell$ C shells ByE the c-shor." Using a pass phrase has several benefits: it is difficult to crack with a brute force tool, and it encourages a password with numerous characters that is still easy to remember.

Another interesting password mechanism is the cognitive password. A cognitive password is usually a series of questions about facts or predefined responses that only the subject should know. For example, three to five questions might be asked of the subject, such as the following:

- What is your birth date?
- What is your mother's maiden name?
- What is the name of your division manager?
- What was your score on your last evaluation exam?
- Who was your favorite baseball player in the 1984 World Series?

If all the questions are answered correctly, the subject is authenticated. The most effective cognitive password systems ask a different set of questions each time. The primary limitation of cognitive password systems is that each question must be answered at the time of user enrollment (i.e., user account creation) and answered again during the logon process, which increases the time required to log on. Cognitive passwords are often employed for phone-based authentication by financial organizations, such as your bank. However, this type of password is considered inappropriate and insecure for protecting IT systems.

Many systems include password policies that restrict or dictate the characteristics of passwords. Common restrictions are minimum length, minimum age, maximum age, requiring three or four character types (i.e., uppercase, lowercase, numbers, symbols), and preventing password reuse.
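Restrictions like these are typically enforced in software. The checker below is a minimal sketch; the `meets_policy` helper and its thresholds (8 characters, 3 character classes) are illustrative, not requirements from this text:

```python
import string

def meets_policy(password: str, history: set, min_length: int = 8,
                 required_classes: int = 3) -> bool:
    """Check a password against illustrative policy rules: minimum length,
    at least `required_classes` character types, and no reuse."""
    if len(password) < min_length or password in history:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= required_classes

print(meets_policy("Tr0ub4dor&", history=set()))  # True: long, 4 classes
print(meets_policy("password", history=set()))    # False: one class only
```

A real policy engine would also enforce minimum/maximum age, which requires tracking when each password was set.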
As the need for security increases, these restrictions should be tightened. However, even with strong software-enforced password restrictions, easily guessed or cracked passwords can still be created. An organization's security policy must clearly define both the need for strong passwords and what a strong password is. Users need to be trained about security so they will respect the organization's security policy and adhere to its requirements. If passwords are created by end users, offer suggestions such as the following for creating strong passwords:

- Don't reuse part of your name, logon name, e-mail address, employee number, Social Security number, phone number, extension, or other identifying name or code.
- Don't use dictionary words, slang, or industry acronyms.
- Do use nonstandard capitalization and spelling.
- Do switch letters and replace letters with numbers.

Password Security

When a malicious user or attacker seeks to obtain passwords, there are several methods they can employ, including network traffic analysis, password file access, brute force attacks, dictionary attacks, and social engineering. Network traffic analysis (also known as sniffing) is the process of capturing network traffic while a user is entering a password for authentication. Once the password is discovered, the attacker attempts to replay the packet containing the password against the network to gain access. If an attacker can gain access to the password database file, it can be copied and a password cracking tool can be used against it to extract usernames and passwords. Brute force and dictionary attacks are types of password attacks that can be waged against a stolen password database file or a system's logon prompt.
In a dictionary attack, the attacker uses a script of common passwords and dictionary words to attempt to discover an account's password. In a brute force attack, a systematic trial of all possible character combinations is used to discover an account's password. Finally, a hybrid attack performs a dictionary attack and then follows up with a limited form of brute force attack. The follow-up brute force attack adds prefix or suffix characters to the dictionary passwords to discover one-upped constructed passwords, two-upped constructed passwords, and so on. A one-upped constructed password differs by a single character from its form in the dictionary. For example, "password1" is one-upped from "password," and so are "Password," "1password," and "passXword."

No matter what type of password attack is used, only read access to the password database is required; write access is not. Therefore, a wide range of user accounts can be used to launch password cracking attacks. From an intruder's perspective, this makes finding a weak user account more attractive than having to attack the administrator or root account directly to gain initial system access.

A social engineering attack is an attempt by an attacker to obtain logon capabilities by deceiving a user, usually over the telephone, into performing specific actions on the system, such as changing the password of an executive who's on the road or creating a user account for a new fictitious employee.

There are several ways to improve the security of passwords. Account lockout is a mechanism used to disable a user account after a specified number of failed logons occurs. Account lockouts stop brute force and dictionary attacks against a system's logon prompt.
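The one-upped constructions described in the hybrid attack above can be enumerated mechanically. This sketch (the `one_upped` helper is hypothetical) generates every password one inserted or substituted character away from a dictionary word:

```python
import string

def one_upped(word: str, alphabet: str = string.ascii_letters + string.digits):
    """Generate 'one-upped' variants of a dictionary word: every password
    that differs from it by a single inserted or substituted character."""
    variants = set()
    for ch in alphabet:
        for i in range(len(word) + 1):              # insertion: "1password",
            variants.add(word[:i] + ch + word[i:])  # "passXword", "password1"
        for i in range(len(word)):                  # substitution: "Password"
            variants.add(word[:i] + ch + word[i + 1:])
    variants.discard(word)                          # keep only true variants
    return variants

v = one_upped("password")
print("password1" in v, "1password" in v,
      "Password" in v, "passXword" in v)  # True True True True
```

Even this small alphabet multiplies a dictionary's size by hundreds of variants per word, which is why hybrid attacks remain cheap for attackers.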
Once the logon attempt limit is reached, a message displaying the time, date, and location (i.e., computer name or IP address) of the last successful or failed logon attempt is displayed. Users who suspect that their account is under attack or has been compromised can report this to the system administrator. Auditing can be configured to track logon successes and failures, and an intrusion detection system can easily identify logon prompt attacks and notify administrators.

There are other options to improve the security offered by password authentication:

- Use the strongest form of one-way encryption available for password storage.
- Never allow passwords to be transmitted over the network in cleartext or with weak encryption.
- Use password verification tools and password cracking tools against your own password database file. Require that weak or discovered passwords be changed.
- Disable user accounts after short periods of inactivity, such as a week or a month. Delete user accounts that are no longer used.
- Properly train users about the necessity of maintaining security and the use of strong passwords. Warn about writing down or sharing passwords. Offer tips to prevent shoulder surfing or keyboard logging from capturing passwords, and offer recommendations on how to create strong passwords.
- Require that users change passwords regularly. The more secure or sensitive the environment, the more frequently passwords should be changed.
- Never display passwords in clear form on any screen or within any form. Instead, mask the display of the password at all times. This is a commonly recognized software feature, such as the display of asterisks instead of letters when you type your password into a logon dialog box.
- Longer passwords, such as those with 16 characters or more, are harder for a brute force password cracking tool to discover. However, longer passwords are also harder for people to remember, which often leads to users writing them down. Your organization should have a standard security awareness rule that passwords are never written down. The only possible exception is that very long, very complex passwords for the most sensitive accounts, such as administrator or root, can be written down and stored in a vault or safety deposit box.
- Create lists of passwords users should avoid. Easy-to-memorize passwords are often easily discovered by password cracking tools.
- If the root or administrator password is ever compromised, every password on every account should be changed. (In a high-security environment, a compromised system can never be fully trusted again; it may require formatting the drives and rebuilding the entire system from scratch.)
- Passwords should be handed out in person after the user has proved their identity. Never transmit passwords via e-mail.

Biometrics

Another common authentication and identification technique is the use of biometric factors. Biometric factors fall into the Type 3 "something you are" authentication category. A biometric factor is a behavioral or physiological characteristic that is unique to a subject.
There are many types of biometric factors, including fingerprints, face scans, iris scans, retina scans, palm scans (also known as palm topography or palm geography), hand geometry, heart/pulse patterns, voice patterns, signature dynamics, and keystroke patterns (keystroke dynamics).

Let's discuss these biometric factors in more detail, taking into account the part of the human body each utilizes and the information each quantifies in order to make the most accurate identification possible.

Fingerprints

The macroscopic (i.e., visible to the naked eye) patterns on the last digit of fingers and thumbs are what make fingerprinting so effective for security. A type of fingerprinting known as minutia matching examines the microscopic view of the fingertips. Unfortunately, minutia matching is affected by small changes to the finger, including temperature, pressure, and minor surface damage (such as sliding your fingers across a rough surface).

Face scans

Face scans utilize the geometric patterns of faces for detection and recognition. They employ the recognition technology known as eigenfeatures (facial metrics) or eigenfaces. (The German word eigen means "own" or "inherent"; it refers to the mathematics used to analyze intrinsic or unique numerical characteristics.)

Iris scans

Iris scans focus on the colored area around the pupil. They are the second most accurate form of biometric authentication. However, iris scans cannot differentiate between identical twins. Iris scans are often recognized as having a longer useful authentication life span than any other biometric factor because the iris remains relatively unchanged throughout a person's life (barring eye damage or illness); every other type of biometric factor is more vulnerable and more likely to change over time.
Iris scans are considered acceptable by general users because they don't involve direct contact with the reader and don't reveal personal medical information.

Retina scans

Retina scans focus on the pattern of blood vessels at the back of the eye. They are the most accurate form of biometric authentication (they are able to differentiate between identical twins) but also the least acceptable, because retina scans can reveal medical conditions, such as high blood pressure and pregnancy. In addition, these scans often require the subject to place their eye against a cup reader that blows air into the eye.

Palm scans (also known as palm topography or palm geography)

Palm scans utilize the whole area of the hand, including the palm and fingers. Palm scans function as a hand-sized fingerprint by analyzing the grooves, ridges, and creases as well as the fingerprints themselves.

Hand geometry

Hand geometry recognizes the physical dimensions of the hand, including the width and length of the palm and fingers. This can be a mechanical or an image-edge (i.e., visual silhouette) graphical solution.

Skin scans are not used as a form of biometric authentication because they cannot differentiate between all individuals.

Heart/pulse patterns

This involves measuring the pulse or heartbeat of the user to ensure that a real, live person is providing the biometric factor. It is often employed as a secondary biometric to support one of the other types.

Voice pattern recognition

This type of biometric authentication relies on the sound of a subject's speaking voice. It is different from speech recognition, which extracts communications from sound (i.e., automatic dictation software).
In other words, voice pattern recognition differentiates between one person's voice and another's, while speech recognition differentiates between words within any person's voice.

Voice pattern recognition is often thought to have numerous benefits, such as its reliability and its function as a "natural" biometric factor. However, speech recognition is commonly confused with voice pattern recognition. Remember, voice pattern recognition differentiates between one person's voice and another's, while speech recognition differentiates between words within any person's voice. The benefits of speech recognition include flexibility, hands-free and eyes-free operation, reduction of data entry time, elimination of spelling errors, and improved data accuracy.

Signature dynamics

Signature dynamics recognizes how a subject writes a string of characters, examining both how the subject performs the act of writing and the features of the resultant written sample. The success of signature dynamics relies upon pen pressure, stroke pattern, stroke length, and the points in time when the pen is lifted from the paper. However, the speed at which the written sample is created is usually not an important factor.

Keystroke patterns (keystroke dynamics)

Keystroke patterns measure how a subject uses a keyboard by analyzing flight time and dwell time. Flight time is how long it takes between key presses; dwell time is how long a key is pressed. Using keystroke patterns is inexpensive, nonintrusive, and often transparent to the user (for both use and enrollment). Unfortunately, the use of keystroke patterns for security is subject to wild variances.
Simple changes in user behavior greatly affect this form of biometric authentication, such as using only one hand, being cold, standing rather than sitting, changing keyboards, or having an injured hand or finger.

Biometric factors can be used as an identification or an authentication technique. Using a biometric factor instead of a username or account ID as an identification factor requires a one-to-many search of the offered biometric pattern against the stored database of enrolled and authorized patterns. As an identification technique, biometric factors are used in physical access controls. Using a biometric factor as an authentication technique requires a one-to-one match of the offered biometric pattern against the stored pattern for the offered subject identity. As an authentication technique, biometric factors are used in logical access controls.

The use of biometrics promises universally unique identification for every person on the planet. Unfortunately, biometric technology has yet to live up to this promise. For biometric factors to be useful, they must be extremely sensitive. The most important aspect of a biometric device is its accuracy. To use biometrics as an identifying mechanism, a biometric device must be able to read information that is very minute, such as the variations in the blood vessels in a person's retina or the tones and timbres in their voice. Because most people are basically similar, the level of detail required to authenticate a subject often results in false negative and false positive authentications.

Biometric Factor Ratings

Biometric devices are rated for their performance against false negative and false positive authentication conditions. Most biometric devices have a sensitivity adjustment so they can be tuned to be more or less sensitive. When a biometric device is too sensitive, a Type 1 error occurs.
A Type 1 error occurs when a valid subject is not authenticated. The ratio of Type 1 errors to valid authentications is known as the False Rejection Rate (FRR). When a biometric device is not sensitive enough, a Type 2 error occurs. A Type 2 error occurs when an invalid subject is authenticated. The ratio of Type 2 errors to valid authentications is known as the False Acceptance Rate (FAR). The FRR and FAR are usually plotted on a graph that shows the level of sensitivity adjustment against the percentage of FRR and FAR errors (see Figure 1.1). The point at which the FRR and FAR are equal is known as the Crossover Error Rate (CER). The CER is used as a standard assessment point from which to measure the performance of a biometric device, and it serves a single purpose: to compare the accuracy of similar biometric devices (i.e., those focusing on the same biometric factor) from different vendors or different models from the same vendor. On the CER graph, the device with the lowest CER is overall the most accurate. In some situations, operating a device at a setting more sensitive than its CER point is preferred, such as with a metal detector at an airport.

Figure 1.1: Graph of FRR and FAR errors indicating the CER point

Biometric Registration

In addition to the sensitivity issues of biometric devices, several other factors may make them less than effective, namely enrollment time, throughput rate, and acceptance. For a biometric device to function as an identification or authentication mechanism, the subject must be enrolled, or registered: the subject's biometric factor is sampled and stored in the device's database. The stored sample of a biometric factor is called a reference profile or a reference template. The time required to scan and store a biometric factor varies greatly depending on which physical or performance characteristic is used.
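As an aside on the CER discussed above, locating the crossover numerically is straightforward. The curves below are invented for illustration; the function simply finds the sensitivity setting where the FRR and FAR curves come closest:

```python
def crossover_error_rate(sensitivities, frr, far):
    """Find the sensitivity setting where the FRR and FAR curves cross.
    Returns (sensitivity, error rate) at the point where the gap between
    the two curves is smallest."""
    best = min(range(len(sensitivities)), key=lambda i: abs(frr[i] - far[i]))
    return sensitivities[best], (frr[best] + far[best]) / 2

# Made-up sample curves: FRR rises with sensitivity, FAR falls.
sens = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
frr  = [0, 1, 2, 4, 7, 11, 16, 22, 29, 37]   # % of valid subjects rejected
far  = [40, 31, 23, 16, 11, 7, 4, 2, 1, 0]   # % of invalid subjects accepted
print(crossover_error_rate(sens, frr, far))  # prints (5, 9.0)
```

Comparing this single number across devices that measure the same biometric factor is exactly how the CER is used in practice: the lower, the more accurate.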
The longer it takes to enroll with a biometric mechanism, the less willing a user community is to accept the inconvenience. In general, enrollment times longer than two minutes are unacceptable. If you use a biometric characteristic that changes with time, such as a person's voice tones, facial hair, or signature pattern, enrollment must be repeated at regular intervals.

Once subjects are enrolled, the amount of time the system requires to scan and process them is the throughput rate. The more complex or detailed the biometric characteristic, the longer the processing will take. Subjects typically accept a throughput rate of about six seconds or faster.

A subject's acceptance of a security mechanism depends upon many subjective perceptions, including privacy, invasiveness, and psychological and physical discomfort. Subjects may be concerned about transfer of body fluids or revelations of health issues via the biometric scanning devices.

Appropriate Biometric Usage

When selecting a biometric solution for a specific environment, numerous aspects must be considered, including which type of biometric factor is most suitable for your environment as well as the effectiveness and acceptability of the biometric factor. When comparing different types of biometric factors, a Zephyr chart is often used. A Zephyr chart rates various aspects, functions, or features of different biometrics together on a single easy-to-read diagram (see Figure 1.2).

Figure 1.2: An example Zephyr chart (©Copyright, International Biometric Group. This image is used with permission of the International Biometric Group. www.biometricgroup.com)

The effectiveness of biometrics depends on how accurate one type of biometric factor is in comparison to others. Here is a commonly accepted order of effectiveness, from most to least:

- Palm scan
- Hand geometry
- Iris scan
- Retina pattern
- Fingerprint
- Voice verification
- Facial recognition
- Signature dynamics
- Keystroke dynamics

The acceptance of biometrics is a rating of how well people accept the use of specific biometric factors in their environment. The rating of acceptance incorporates a person's view of how invasive and easy to use a specific type of biometric factor is and the level of health risk it presents. Here is a commonly accepted order of acceptance, from most to least:

- Iris scan
- Keystroke dynamics
- Signature dynamics
- Voice verification
- Facial recognition
- Fingerprint
- Palm scan
- Hand geometry
- Retina pattern

Tokens

Tokens (or smart tokens) are password-generating devices that subjects must carry with them. A token device is an example of a Type 2 factor, or "something you have." A token can be a static password device, such as an ATM card or other memory card. To use an ATM card, you must supply the token (the ATM card itself) and your PIN. Tokens can also be one-time or dynamic password devices that look a bit like small calculators, or they can even be smart cards (to read more about smart cards, see Chapter 19).
The device displays a string of characters (a password) for you to enter into the system. There are four types of token devices:

- Static tokens
- Synchronous dynamic password tokens
- Asynchronous dynamic password tokens
- Challenge-response tokens

A static token can be a swipe card, a smart card, a floppy disk, a USB RAM dongle, or even something as simple as a key that operates a physical lock. Static tokens offer a physical means to provide identity; they still require an additional factor, such as a password or biometric factor, to provide authentication. Most static token devices host a cryptographic key, such as a private key, digital signature, or encrypted logon credentials. The cryptographic key can be used as an identifier or as an authentication mechanism, and it is much stronger than a password because it is pre-encrypted using a strong encryption protocol, it is significantly longer, and it resides only in the token. Static tokens are most often used as identification devices rather than as authentication factors.

A synchronous dynamic password token generates passwords at fixed time intervals. Time-interval tokens require that the clock on the authentication server and the clock on the token device be synchronized. The generated password is entered into the system by the subject along with a PIN, pass phrase, or password. The generated password provides the identification, and the PIN/password provides the authentication.

An asynchronous dynamic password token generates passwords based on the occurrence of an event. An event token requires that the subject press a key on the token and on the authentication server. This action advances to the next password value.
The generated password and the subject's PIN, pass phrase, or password are entered into the system for authentication.

Challenge-response tokens generate passwords or responses based on instructions from the authentication system. The authentication system displays a challenge, usually in the form of a code or pass phrase. This challenge is entered into the token device, the token generates a response based on the challenge, and the response is then entered into the system for authentication.

One-Time Password Generators

As we discussed earlier, one-time passwords are dynamic passwords that change every time they are used. They can be very effective for security purposes, except that humans rarely have the capacity to remember passwords that change so frequently. One-time password generators create the passwords for your users and make one-time passwords reasonable to deploy. Users only need to possess the token device (i.e., the password generator), know the logon procedure, and possibly memorize a short PIN, depending on which generator you use. With device-based authentication systems, an environment can benefit from the strength of one-time passwords without placing a huge burden of memorization on the users.

The five widely recognized one-time password generator systems are synchronous, PIN synchronous, asynchronous, PIN asynchronous, and transaction synchronous. The systems with a PIN in their name simply require an additional memorized key sequence to be entered to complete the authentication process.

Using token authentication systems is a much stronger security measure than using password authentication alone. Token systems use two or more factors to establish identity and provide authentication.
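The synchronous dynamic tokens described above can be sketched in code. This generator is loosely modeled on time-based OTP schemes but is deliberately simplified and not standards-conformant; the secret value and 30-second interval are assumptions:

```python
import hashlib
import hmac
import struct

def synchronous_password(secret: bytes, timestamp: int, interval: int = 30) -> str:
    """Simplified time-based token: the token and the server each derive
    the same 6-digit password from a shared secret and the current time
    window. (Loosely modeled on TOTP; not a standards-conformant
    implementation.)"""
    window = timestamp // interval                 # changes every `interval` s
    msg = struct.pack(">Q", window)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# Token and server agree only because their clocks and secret are shared.
print(synchronous_password(b"shared-secret", 1_000_000_000))
```

This is why clock synchronization matters for time-interval tokens: if the two clocks drift into different windows, the derived passwords no longer match.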
In addition to knowing the username, password, PIN, code, and so on, the subject must be in physical possession of the token device.

However, token systems do have failings. If the battery dies or the device is broken, the subject is unable to gain access. Token devices can be lost or stolen. Tokens should be stored and managed intelligently, because once a token system is compromised, it can be difficult and expensive to replace. Furthermore, human factors can render tokens less secure than they are designed to be. First and foremost, if a user writes their access code or PIN on the token device, the security of the token system is compromised. Users should understand that loaning out a token and PIN, even to a coworker, is a violation of security.

Tickets

Ticket authentication is a mechanism that employs a third-party entity to prove identification and provide authentication. The most common and well-known ticket system is Kerberos, which was developed under Project Athena at MIT. Its name is borrowed from Greek mythology: a three-headed dog named Kerberos guards the gates to the underworld, although in the myth the three-headed dog faced inward, preventing escape rather than entrance. Kerberos and its tickets are discussed later in this chapter.

Single Sign On

Single Sign On (SSO) is a mechanism that allows a subject to be authenticated only once on a system and then access resource after resource unhindered by repeated authentication prompts. With SSO, once a subject is authenticated, they can roam the network freely and access resources and services without being rechallenged for authentication. This is considered the primary disadvantage of SSO: once an account is compromised, the malicious subject has unrestricted access. In other words, the maximum level of unauthorized access is gained simply through password disclosure.
SSO typically allows for stronger passwords because the subject must memorize only a single password. Furthermore, SSO offers easier administration by reducing the number of locations at which an account must be defined for the subject. SSO can be enabled through authentication systems or through scripts that provide logon credentials automatically when prompted.

Kerberos, SESAME, KryptoKnight, NetSP, thin clients, directory services, and scripted access are examples of SSO mechanisms. Two or more SSO mechanisms can be combined into a single security solution. It is most common for Kerberos to be combined with another SSO mechanism. For example, under Windows 2003 (as well as Windows 2000), it is possible to employ the native directory service (Active Directory), which is integrated with Kerberos, along with other SSO options, including thin clients (i.e., Terminal Services) and scripted access (i.e., logon scripts).

Kerberos

Kerberos is a trusted third-party authentication protocol that can be used to provide a single sign-on solution and to protect logon credentials. Kerberos relies upon symmetric key cryptography (a.k.a. private key cryptography), specifically the Data Encryption Standard (DES), and provides end-to-end security for authentication traffic between the client and the Key Distribution Center (KDC). Kerberos provides the security services of confidentiality and integrity protection for authentication traffic.

The Kerberos authentication mechanism centers on a trusted server (or servers) that hosts the functions of the KDC, the Ticket Granting Service (TGS), and the Authentication Service (AS). Generally, the Kerberos central server that hosts all of these services is simply referred to as the KDC. Kerberos uses symmetric key cryptography to authenticate clients to servers.
All clients and servers are registered with the KDC, so it maintains the secret keys of all network members.

A complicated exchange of tickets (i.e., cryptographic messages) between clients, network servers, and the KDC is used to prove identity and provide authentication. This allows the client to request resources from the server with full assurance that both the client and the server are who they claim to be. The exchange of encrypted tickets also ensures that no logon credentials, session keys, or authentication messages are ever transmitted in cleartext.

Kerberos tickets have specific lifetimes and use parameters. Once a ticket expires, the client must request a renewal or a new ticket to continue communications with a server.

The Kerberos logon process is as follows:

1. The user types a username and password into the client.
2. The client encrypts the credentials with DES for transmission to the KDC.
3. The KDC verifies the user's credentials.
4. The KDC generates a TGT by hashing the user's password.
5. The TGT is encrypted with DES for transmission to the client.
6. The client installs the TGT for use until it expires.

The Kerberos server or service access process is as follows:

1. The client sends its TGT back to the KDC with a request for access to a server or service.
2. The KDC verifies the ongoing validity of the TGT and checks its access control matrix to verify that the user has sufficient privilege to access the requested resource.
3. A Service Ticket (ST) is generated and sent to the client.
4. The client sends the ST to the server or service host.
5. The server or service host verifies the validity of the ST with the KDC.
6. Once identity and authorization are verified, Kerberos activity is complete.
The server or service host then opens a session with the client and begins communications or data transmission.

Limitations of Kerberos

Kerberos is a versatile authentication mechanism that can be used for local logons, over LANs, for remote access, and for client-server resource requests. However, Kerberos has a single point of failure: the KDC. If the KDC is ever compromised, the secret key of every system on the network is also compromised. And if the KDC goes offline, no subject authentication is possible.

There are other limitations or problems with Kerberos:

- Dictionary and brute force attacks on the initial KDC response to a client may reveal the subject's password. In fact, direct password guessing attacks can be waged against the KDC unimpeded. A countermeasure to such attacks is to deploy a preauthentication service to check logon credentials and watch for access attacks before granting a subject access to the KDC.
- Issued tickets are stored in memory on the client and server.
- Malicious subjects can replay captured tickets if they are reused within their lifetime window.
- Issued tickets, specifically the Ticket Granting Ticket (TGT), are based on a hash of the user's password with an added time stamp for expiration.
- Kerberos encrypts only authentication traffic (i.e., mechanisms for proving identity); it does not provide any security for subsequent communication sessions or data transmissions.

Other Examples of Single Sign On

While Kerberos seems to be the most widely recognized (and deployed) form of single sign on, it is not the only example. Here is a quick review of other SSO mechanisms you may encounter.

Secure European System for Applications in a Multivendor Environment (SESAME) was a system developed to address the weaknesses in Kerberos.
However, it was incomplete in its attempt to compensate for all of the problems with Kerberos. Eventually, later versions of Kerberos and various vendor implementation techniques resolved the initial problems. In the professional security world, SESAME is no longer considered a viable product.

KryptoKnight is a peer-to-peer-based authentication solution developed by IBM. It was incorporated into the NetSP product. Like SESAME, KryptoKnight and NetSP never gained a foothold and are no longer widely used products.

Thin clients are low-end client systems that connect over a network to a server system. Thin clients originated in the mainframe world, where host-terminal connections allowed dumb terminals to interact with and control centralized mainframes. The terminals had no processing or storage capabilities. The idea of thin clients has been replicated in modern client-server environments using interface software applications that act as clients to server-hosted environments. All processing and storage take place on the server, while the client provides an interface for the subject through the local keyboard, mouse, and monitor. Thin clients are sometimes called remote control tools.

A directory service is a centralized database of resources available to the network. It can be thought of as a telephone directory for network services and assets. Users, clients, and processes consult the directory service to learn where a desired system or resource resides. Once this address or location is known, access can be directed toward it. A subject must authenticate to a directory service before queries and lookup activities can be performed. Even after authentication, the directory service will reveal information to a subject only in accordance with that subject's assigned privileges. Directory services are often based upon the Lightweight Directory Access Protocol (LDAP).
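In spirit, the authenticate-first, privilege-filtered lookup behavior just described can be sketched as follows. This is a minimal stand-in, not LDAP itself; every name, address, and method in it is invented for illustration:

```python
class DirectoryService:
    """Toy directory: subjects must authenticate before lookups, and lookups
    are filtered by each subject's assigned privileges."""

    def __init__(self):
        self.entries = {"fileserver": "10.0.0.5", "payroll-db": "10.0.0.9"}
        self.passwords = {"alice": "s3cret"}
        self.privileges = {"alice": {"fileserver"}}  # what alice may see
        self.sessions = set()

    def authenticate(self, user, password):
        if self.passwords.get(user) == password:
            self.sessions.add(user)
            return True
        return False

    def lookup(self, user, resource):
        if user not in self.sessions:                # must authenticate first
            return None
        if resource not in self.privileges.get(user, set()):
            return None                              # privilege filter
        return self.entries.get(resource)

ds = DirectoryService()
ds.authenticate("alice", "s3cret")
addr = ds.lookup("alice", "fileserver")    # address revealed
hidden = ds.lookup("alice", "payroll-db")  # withheld: no privilege
```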
Some well-known commercial directory services include Microsoft's Active Directory and Novell's NetWare Directory Services (NDS), recently renamed eDirectory.

Scripted access or logon scripts establish communication links by providing an automated process by which logon credentials are transmitted to resource hosts at the start of a logon session. Scripted access can often simulate SSO even though the environment still requires a unique authentication process to connect to each server or resource. Scripts can be used to implement SSO in environments where true SSO technologies are not available. However, scripts and batch files should be stored in a protected area because they usually contain access credentials.

Access Control Techniques
Once a subject has been identified and authenticated and accountability has been established, the subject must be authorized to access resources or perform actions. Authorization can occur only after the subject's identity has been verified through authentication. Systems provide authorization through the use of access controls. Access controls manage the type and extent of access subjects have to objects. There are two primary categories of access control techniques: discretionary and nondiscretionary. Nondiscretionary controls can be further subdivided into specific techniques, such as mandatory, role-based, and task-based access controls.

Discretionary Access Controls (DAC)
A system that employs discretionary access controls (DAC) allows the owner or creator of an object to control and define subject access to that object. In other words, access control is based on the discretion (i.e., a decision) of the owner. Access is granted or denied in a discretionary environment based on the identity of the subject (which is typically the user account name).
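The owner-controlled model just described can be sketched in a few lines. This is a minimal illustration; the class and method names are invented, not taken from any real system:

```python
class DacObject:
    """Toy DAC object: the creator becomes the owner, and only the owner
    may edit the object's ACL."""

    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.acl = {owner: {"read", "write"}}  # per-subject permission sets

    def set_permission(self, requester, subject, perms):
        if requester != self.owner:            # only the owner edits the ACL
            raise PermissionError("only the owner may change the ACL")
        self.acl[subject] = set(perms)

    def check(self, subject, perm):
        return perm in self.acl.get(subject, set())

doc = DacObject("budget.xls", owner="alice")  # alice created the file
doc.set_permission("alice", "bob", {"read"})  # alice grants bob read access
```

Because each owner edits the ACLs on their own objects, there is no single central point of management, which is exactly why DAC access is more dynamic than MAC.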
For example, if a user creates a new spreadsheet file, they are the owner of that file. As the owner of the file, they can modify the permissions on that file to grant or deny access to other subjects. DACs are often implemented using access control lists (ACLs) on objects. Each ACL defines the types of access granted or restricted to individual or grouped subjects. Discretionary access control does not offer a centrally controlled management system because owners can alter the ACLs on their objects. Thus, access is more dynamic than it is with mandatory access controls.

DAC environments can be extended beyond just controlling the type of access between subjects and objects via ACLs by including or applying time controls, transaction controls, and other forms of ID-focused controls (i.e., device, host, protocol, address, and so on). Within a DAC environment, a user's privileges can be suspended while they are on vacation, resumed when they return, or terminated when they have left the organization.

The United States government labels access controls that do not rely upon policy to define access as discretionary; however, corporate environments and nongovernment organizations often label such environments as need-to-know.

Nondiscretionary Access Controls
Nondiscretionary access controls are used in a rule-based system in which a set of rules, restrictions, or filters determines what can and cannot occur on the system, such as granting subject access, performing an action on an object, or accessing a resource. Access is not based on administrator or owner discretion and is not focused on user identity. (Thus, nondiscretionary access control is the opposite of discretionary in much the same way that Non-A is the opposite of A.)
Rather, access is managed by a static set of rules that governs the whole environment (i.e., a centrally controlled management system). In general, rule-based access control systems are more appropriate for environments that experience frequent changes to data permissions (i.e., changes to the security domain or label of objects). This is because rule-based systems can implement sweeping changes just by changing the central rules, without having to manipulate or "touch" every subject and/or object in the environment. However, in most cases, once the rules are established, they remain fairly static and unchanged throughout the life of the environment.

In rule-based access control systems, control is based on a specific profile created for each user. A common example of such a system is a firewall. A firewall is governed by a set of rules or filters defined by the administrator. Users are able to communicate across the firewall because they have initiated transactions that are allowed by the defined rules. Users are able to accomplish this because they have client environments configured to do so; these are the specific profiles. The formalized definition of rule-based access control (or, specifically, a rule-based security policy) is found in RFC 2828, entitled "Internet Security Glossary." This document includes the following definition for the term rule-based security policy: "A security policy based on global rules imposed for all users. These rules usually rely on comparison of the sensitivity of the resource being accessed and the possession of corresponding attributes of users, a group of users, or entities acting on behalf of users."

Mandatory Access Controls
Mandatory access controls rely upon the use of classification labels. Each classification label represents a security domain or a realm of security.
A security domain is a realm of common trust that is governed by a specific security policy for that domain. Subjects are labeled by their level of clearance (which is a form of privilege). Objects are labeled by their level of classification or sensitivity. For example, the military uses the labels top secret, secret, confidential, sensitive but unclassified (SBU), and unclassified (see Chapter 5, "Security Management Concepts and Principles"). In a mandatory access control system, subjects are able to access objects that have the same or a lower level of classification. An expansion of this access control method is known as need-to-know. Subjects with higher clearance levels are granted access to highly sensitive resources only if their work tasks require such access. If they don't have a need to know, even if they have sufficient clearance, they are denied access. Mandatory access control (MAC) is prohibitive rather than permissive: if an access is not specifically granted, it is forbidden. MAC is generally recognized as being more secure than DAC but not as flexible or scalable. This relative scale of security is evident in the TCSEC evaluation criteria, which list mandatory protection as a higher level of security than discretionary protection (for more information about TCSEC, see Chapter 12, "Principles of Security Models").

The use of security labels in mandatory access controls presents some interesting problems. First, for a mandatory access control system to function, every subject and object must have a security label. Depending on the environment, security labels can refer to sensitivity, value to the organization, need for confidentiality, classification, department, project, and so on.
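The core MAC rule just described (access is allowed at the same or a lower classification, further restricted by need-to-know) can be sketched as a simple dominance check. The classification levels come from the military hierarchy in the text; the compartment names are invented for illustration:

```python
# Classification hierarchy from the text, lowest to highest sensitivity.
LEVELS = ["unclassified", "SBU", "confidential", "secret", "top secret"]
RANK = {label: i for i, label in enumerate(LEVELS)}

def mac_allows(clearance, classification, subject_compartments=frozenset(),
               object_compartments=frozenset()):
    """Prohibitive MAC check: the clearance must dominate the object's
    classification, and the subject must hold need-to-know for every
    compartment attached to the object."""
    if RANK[clearance] < RANK[classification]:
        return False
    return object_compartments <= set(subject_compartments)

# A secret-cleared analyst with need-to-know for the "crypto" compartment:
ok = mac_allows("secret", "confidential", {"crypto"}, {"crypto"})
too_high = mac_allows("secret", "top secret")                  # denied
no_ntk = mac_allows("top secret", "secret", {"crypto"}, {"subs"})  # denied
```

The last call shows the need-to-know expansion: sufficient clearance alone does not grant access when the compartment is missing.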
The military security labels mentioned earlier range from highest sensitivity to lowest: top secret, secret, confidential, sensitive but unclassified (SBU), and unclassified. Common corporate or commercial security labels are confidential, proprietary, private, sensitive, and public. Security classifications indicate a hierarchy of sensitivity, but each level is distinct.

Classifications within a mandatory access control environment are of three types: hierarchical, compartmentalized, or hybrid. Let's discuss these in more detail.

Hierarchical environments
Hierarchical environments relate the various classification labels in an ordered structure from low security to medium security to high security. Each level or classification label in the structure is related. Clearance at a level grants the subject access to objects at that level as well as to all objects at all lower levels but prohibits access to all objects at higher levels.

Compartmentalized environments
In compartmentalized environments, there is no relationship between one security domain and another. To gain access to an object, the subject must have the exact clearance for that object's specific security domain.

Hybrid environments
A hybrid environment combines the hierarchical and compartmentalized concepts so that each hierarchical level may contain numerous subcompartments that are isolated from the rest of the security domain. A subject must have not only the correct clearance but also the need-to-know for the specific compartment in order to access a compartmentalized object. Having the need to know for one compartment within a security domain does not grant the subject access to any other compartment; each compartment has its own unique and specific need-to-know. If you have the need to know (which is based on your assigned work tasks), then you are granted access. If you don't have the need to know, then your access is blocked.
A hybrid MAC environment provides more granular control over access but becomes increasingly difficult to manage as the size of the environment (i.e., the number of classifications, objects, and subjects) increases.

Role-Based Access Control (RBAC)
Systems that employ role-based or task-based access controls define the ability of a subject to access an object through the use of subject roles (i.e., job descriptions) or tasks (i.e., work functions). If a subject is in a management position, they will have greater access to resources than someone in a temporary position. Role-based access controls are useful in environments with frequent personnel changes because access is based on a job description (i.e., a role or task) rather than on a subject's identity.

Role-based access control (RBAC) and groups within a DAC environment may serve a similar purpose, but they differ in their deployment and use. They are similar in that both serve as containers that collect users into manageable units. However, a user can be a member of more than one group. In addition to collecting the rights and permissions from each group, an individual user account may also have rights and permissions assigned directly to it. In a DAC system, even with groups, access is still based on the discretion of an owner, and control still focuses on the identity of the user. When an RBAC system is employed, a user may have only a single role, although new trends are emerging in which a user is assigned multiple roles. Users have only the rights and permissions assigned to their roles; there are no additional individually assigned rights or permissions. Furthermore, access is not determined by owner discretion; it is determined by the inherent responsibilities of the assigned role (i.e., job description).
Also, access focuses on the assigned role, not on the identity of the user. Two different users with the same assigned role will have exactly the same access and privileges. Role-based access control is becoming increasingly attractive to corporate entities that have a high rate of employee turnover. RBAC also allows company-specific security policies to be mapped and enforced in a way that corresponds directly to the organization's hierarchy and management structure. This implies that the roles or job descriptions within an RBAC system are often hierarchical, meaning that the roles are related in a low-to-high fashion so that higher roles are created by adding access and privileges to lower roles. Often, MAC and DAC environments can be replaced by RBAC solutions.

A method related to RBAC is task-based access control (TBAC). TBAC follows the same basic idea as RBAC, but instead of being assigned a single role, each user is assigned dozens of tasks. The assigned tasks all relate to the work assigned to the person associated with the user account. Under TBAC, access is still based on rules (i.e., the work tasks) and still focuses on controlling access based on assigned tasks rather than on user identity.

Lattice-Based Access Controls
Some, if not most, nondiscretionary access controls can be labeled as lattice-based access controls. Lattice-based access controls define upper and lower bounds of access for every relationship between a subject and an object. These boundaries can be arbitrary, but they usually follow the military or corporate security label levels. A subject with the lattice permissions shown in Figure 1.3 has access to resources up to private and down to sensitive but does not have access to confidential, proprietary, or public resources.
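The bounded access window from Figure 1.3 can be sketched as a range check over ordered labels. This is a minimal illustration using the corporate labels from the text:

```python
# Corporate label ordering from the text, lowest to highest sensitivity.
LEVELS = ["public", "sensitive", "private", "confidential/proprietary"]
RANK = {label: i for i, label in enumerate(LEVELS)}

def lattice_allows(lower_bound, upper_bound, object_label):
    """A subject may access objects whose label falls within the subject's
    assigned window: greatest lower bound <= label <= least upper bound."""
    return RANK[lower_bound] <= RANK[object_label] <= RANK[upper_bound]

# The Figure 1.3 subject: access up to private and down to sensitive.
can_read_private = lattice_allows("sensitive", "private", "private")
can_read_public = lattice_allows("sensitive", "private", "public")
can_read_conf = lattice_allows("sensitive", "private",
                               "confidential/proprietary")
```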
Subjects under lattice-based access controls are said to have the least upper bound and the greatest lower bound of access to labeled objects based on their assigned lattice position. Lattice-based access controls were originally developed to address information flow, which is primarily concerned with confidentiality. One common example of a lattice-based access control is mandatory access control.

Figure 1.3: A representation of the boundaries provided by lattice-based access controls (the lattice of permissions for a subject, with labels ordered from confidential/proprietary down through private, sensitive, and public)

Access Control Methodologies and Implementation
There are two primary access control methodologies: centralized and decentralized (or distributed). Centralized access control implies that all authorization verification is performed by a single entity within a system. Decentralized access control, or distributed access control, implies that authorization verification is performed by various entities located throughout a system.

Centralized and Decentralized Access Control
Centralized and decentralized access control methodologies offer the benefits and drawbacks of any centralized or decentralized system. Centralized access control can be managed by a small team or an individual. Administrative overhead is lower because all changes are made in a single location, and a single change affects the entire system. However, centralized access control also has a single point of failure. If system elements are unable to access the centralized access control system, then subjects and objects cannot interact. Two examples of centralized access control are Remote Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access Control System (TACACS).

Decentralized access control often requires several teams or multiple individuals.
Administrative overhead is higher because changes must be implemented in numerous locations. Maintaining homogeneity across the system becomes more difficult as the number of access control points increases. Changes made to an individual access control point affect only the aspects of the system that rely upon that specific access control point. Decentralized access control does not have a single point of failure. If an access control point fails, other access control points may be able to balance the load until the failed point is repaired, and objects and subjects that don't rely upon the failed access control point can continue to interact normally. Domains and trusts are commonly used in decentralized access control systems.

A domain is a realm of trust, or a collection of subjects and objects that share a common security policy. Each domain's access control is maintained independently of that of other domains, which results in decentralized access control when multiple domains are involved. To share resources from one domain with another, a trust is established. A trust is simply a security bridge established between two domains that allows users from one domain to access resources in another. Trusts can be one-way or two-way.

RADIUS and TACACS
Remote Authentication Dial-In User Service (RADIUS) is used to centralize the authentication of remote dial-up connections. A network that employs a RADIUS server is configured so that the remote access server passes dial-up user logon credentials to the RADIUS server for authentication. This process is similar to the process used by domain clients sending logon credentials to a domain controller for authentication.
Use of an authentication server, such as RADIUS or TACACS, that is separate from the primary remote access server provides the benefit of keeping auditing and access settings on a system other than the remote access server, thus providing greater security. RADIUS and other remote authentication protocols and services are designed to transport authentication, authorization, and session configuration information between a remote access server (a.k.a. a network access server) and a centralized authentication server (often known as a domain controller).

RADIUS is defined in RFC 2138. It is primarily used to provide an additional layer of protection against intrusions over dial-up connections. RADIUS supports dynamic passwords and callback security. It acts as a proxy for the remote client because it acts on behalf of the client to obtain authentication on the network. RADIUS acts as a client for the network by requesting authentication in much the same manner as a typical client would. Likewise, within the RADIUS architecture, the remote access server is configured as a client of RADIUS.

Due to the success of RADIUS, an enhanced version named DIAMETER was developed; it is designed for use with all forms of remote connectivity, not just dial-up. However, RADIUS and DIAMETER are not interoperable. Eventually, the features of DIAMETER were added back into RADIUS, and now a version of RADIUS that supports all types of remote access connectivity is available.

Terminal Access Controller Access Control System (TACACS) is an alternative to RADIUS. TACACS is available in three versions: original TACACS, XTACACS (Extended TACACS), and TACACS+. TACACS integrates the authentication and authorization processes. XTACACS keeps the authentication, authorization, and accounting processes separate.
TACACS+ improves on XTACACS by adding two-factor authentication. TACACS and RADIUS operate similarly, and TACACS provides the same functionality as RADIUS. However, RADIUS is based on an Internet standard, whereas TACACS is more of a proprietary (although widely used) solution. TACACS is defined in RFC 1492.

These forms of centralized access control, specific to remote access, provide an additional layer of security for your private network. They prevent LAN authentication systems and domain controllers from being attacked directly by remote attackers. By deploying a separate system for remote access users, even if that system is compromised, only the remote access users are affected; the rest of the LAN still functions unhindered.

Access Control Administration
Access control administration is the collection of tasks and duties assigned to an administrator to manage user accounts, access, and accountability. A system's security is based on effective administration of access controls. Remember that access controls rely upon four principles: identification, authentication, authorization, and accountability. In relation to access control administration, these principles translate into three main responsibilities:
- User account management
- Activity tracking
- Access rights and permissions management

Account Administration
User account management involves the creation, maintenance, and closing of user accounts. Although these activities may seem mundane, they are essential to the system's access control capabilities.
Without properly defined and maintained user accounts, a system is unable to establish identity, perform authentication, prove authorization, or track accountability.

Creating New Accounts
The creation of a new user account is systematically a simple process, but it must be protected or secured through organizational security policy procedures. User accounts should not be created at the whim of an administrator or at the request of just anyone. Rather, a stringent procedure should be followed that flows from the HR department's hiring or promotion procedures.

The HR department should make a formal request for a user account for a new employee. That request should include the classification or security level to be assigned to the new employee's user account. The new employee's department manager and the organization's security administrator should verify the security assignment. Only once the request has been verified should a new user account be created. Creating user accounts outside of established security policies and procedures simply creates holes and oversights that can be exploited by malicious subjects. A similar process should be followed for increasing or decreasing an existing user account's security level.

As part of the hiring process, new employees should be trained on the security policies and procedures of the organization. Before hiring is complete, employees must sign an agreement committing to uphold the security standards of the organization. Many organizations have opted to craft a document stating that violating the security policy is grounds for dismissal as well as grounds for prosecution under federal, state, and local laws. When passing the user account ID and temporary password to a new employee, a review of the password policy and acceptable use restrictions should be performed.

The initial creation of a new user account is often called enrollment.
The enrollment process creates the new identity and establishes the factors the system needs to perform authentication. It is critical that the enrollment process be completed fully and accurately. It is also critical that the identity of the individual being enrolled be proved through whatever means your organization deems necessary and sufficient. A photo ID, birth certificate, background check, credit check, security clearance verification, FBI database search, and even calling references are all valid means of verifying a person's identity before enrolling them into your secured system.

Account Maintenance
Throughout the life of a user account, ongoing maintenance is required. Organizations with fairly static organizational hierarchies and low employee turnover or promotion will have significantly less account administration to perform than organizations with flexible or dynamic hierarchies and high employee turnover and promotion. Most account maintenance deals with altering rights and privileges. Procedures similar to those used when new accounts are created should be established to govern how access is changed throughout the life of a user account. Unauthorized increases or decreases in an account's access capabilities can result in serious security repercussions.

When an employee is no longer present at an organization, their user account should be disabled, deleted, or revoked. Whenever possible, this task should be automated and tied into the HR department. In most cases, when someone's paychecks are stopped, that person should no longer have logon capabilities. Temporary or short-term employees should have a specific expiration date programmed into their user accounts.
This maintains a degree of control, established at the time of account creation, without requiring ongoing administrative oversight.

Account, Log, and Journal Monitoring
Activity auditing, account tracking, and system monitoring are also important aspects of access control management. Without these capabilities, it would not be possible to hold subjects accountable. Through the establishment of identity, authentication, and authorization, tracking the activities of subjects (including how many times they access objects) offers direct and specific accountability. Auditing and monitoring as an aspect of operations security and as an essential element of a secure environment are discussed in Chapter 14, "Auditing and Monitoring."

Access Rights and Permissions
Assigning access to objects is an important part of implementing an organizational security policy. Not all subjects should be granted access to all objects, and not all subjects should have the same functional capabilities on objects. A few specific subjects should access only some objects; likewise, certain functions should be accessible only to a few specific subjects.

The Principle of Least Privilege
The principle of least privilege arises out of the complex structure that results when subjects are granted access to objects. This principle states that subjects should be granted only the amount of access to objects that is required to accomplish their assigned work tasks. This principle has a converse that should be followed as well: subjects should be blocked from accessing objects that are not required by their work tasks. The principle of least privilege is most often linked with DAC, but the concept applies to all types of access control environments, including non-DAC, MAC, RBAC, and TBAC.

Keep in mind that the idea of privilege usually means the ability to write, create, alter, or delete data.
Thus, limiting and controlling privilege based on this concept serves as a protection mechanism for data integrity. If users can change only those data files that their work tasks require them to change, then the integrity of all other files in the environment is protected.

This principle relies upon the fact that all users have a distinctly defined job description. Without a specific job description, it is not possible to know what privileges a user does or does not need.

Need-to-Know Access
A related principle in the realm of mandatory access control environments is known as need-to-know. Within a specific classification level or security domain, some assets or resources may be sectioned off or compartmentalized. Such resources are restricted from general access, even for subjects with otherwise sufficient clearance. These compartmentalized resources require an additional level of formalized access approval before they can be used by subjects. Subjects are granted access when they can justify their work-task-related reason for access, or their need to know. Often, the need to know is determined by a domain supervisor and is granted only for a limited period of time.

Determining which subjects have access to which objects is a function of the organizational security policy, the organizational hierarchy of personnel, and the implementation of an access control model. Thus, the criteria for establishing or defining access can be based on identity, roles, rules, classifications, location, time, interfaces, need-to-know, and so on. Access control models are formal descriptions of a security policy. A security policy is a document that encapsulates the security requirements of an organization and prescribes the steps necessary to achieve the desired security.
Access control models (or security models) are used in security evaluations and assess-\nments as well as in tools used to prove the existence of security.\nUsers, Owners, and Custodians\nWhen discussing access to objects, three subject labels are used: user, owner, and custodian. A \nuser is any subject who accesses objects on a system to perform some action or accomplish a \nwork task. An owner, or information owner, is the person who has final corporate responsibil-\nity for classifying and labeling objects and protecting and storing data. The owner may be liable \nfor negligence if they fail to perform due diligence in establishing and enforcing security policies \nto protect and sustain sensitive data. A custodian is a subject who has been assigned or delegated \nthe day-to-day responsibility of proper storage and protection of objects.\nA user is any end user on the system. The owner is typically the CEO, president, or depart-\nment head. The custodian is typically the IT staff or the system security administrator.\nSeparation of Duties and Responsibilities\nSeparation of duties and responsibilities is a common practice that prevents any single subject \nfrom being able to circumvent or disable security mechanisms. When core administration or \nExcessive Privilege and Creeping Privileges\nIt’s important to guard against two problems related to access control: excessive privilege and \ncreeping privileges. Excessive privilege is when a user has more access, privilege, or permis-\nsion than their assigned work tasks dictate. If a user account is discovered to have excessive \nprivilege, the additional and unnecessary privileges should be immediately revoked. Creeping \nprivileges involve a user account accumulating privileges over time as their job roles and \nassigned tasks change. 
This can occur because new tasks are added to a user’s job and the \nrelated or necessary privileges are added as well but no privileges or access is ever removed, \neven if the related work task is no longer associated with or assigned to the user. Creeping \nprivileges result in excessive privilege. Both of these issues can be prevented with the proper \napplication of the principle of least privilege.\n" }, { "page_number": 77, "text": "32\nChapter 1\n\u0002 Accountability and Access Control\nhigh-authority responsibilities are divided among several subjects, no one subject has sufficient \naccess to perform significant malicious activities or bypass imposed security controls. Separa-\ntion of duties creates a checks-and-balances system in which multiple subjects verify the actions \nof each other and must work in concert to accomplish necessary work tasks. Separation of \nduties makes the accomplishment of malicious, fraudulent, or otherwise unauthorized activities \nmuch more difficult and broadens the scope of detection and reporting. It is easy for an indi-\nvidual to perform an unauthorized act if they think they can get away with it. Once two or more \npeople are involved, committing an unauthorized activity requires that each person agree \nto keep a secret. This typically serves as a significant deterrent rather than as a means to corrupt \na group en masse. Separation of duties can be static or dynamic. Static separation of duties is \naccomplished by assigning privileges based on written policies that don’t change often. Dynamic \nseparation of duties is used when security requirements cannot be determined until the system \nis active and functioning.\nAn example of a properly enforced separation of duties is to prevent the security administrator \nfrom being able to access system administration utilities or to perform changes to system config-\nuration not related to security. 
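Static separation of duties can be approximated in code by a table of duty pairs that no single subject may hold together, checked whenever duties are assigned. The duty names and conflict pairs in this sketch are hypothetical illustrations, not an authoritative list.

```python
# Sketch of static separation of duties enforced via a table of
# incompatible duty pairs. The duty names and pairs are hypothetical
# illustrations, not taken from any published control matrix.

CONFLICTS = {
    frozenset({"security_admin", "system_admin"}),
    frozenset({"application_programmer", "computer_operator"}),
    frozenset({"data_entry", "quality_assurance"}),
}

def conflicting_pairs(assigned_duties):
    """Return every incompatible pair found within one subject's duties."""
    duties = list(assigned_duties)
    return [
        (a, b)
        for i, a in enumerate(duties)
        for b in duties[i + 1:]
        if frozenset({a, b}) in CONFLICTS
    ]

# Combining security and system administration violates the policy.
print(conflicting_pairs(["security_admin", "system_admin", "help_desk"]))
# [('security_admin', 'system_admin')]
```

A real deployment would run such a check both when duties are first assigned and periodically afterward, since creeping privileges can introduce conflicts over time.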
For example, a security administrator needs no more than read \naccess to system logs. In this manner, separation of duties helps to prevent conflicts of interest in \nthe types of privileges assigned to administrators as well as users in general. Figure 1.4 illustrates \ncommon privileges that should not be combined with others in order to properly enforce separa-\ntion of duties.\nThe Segregation of Duties Control Matrix is not an industry standard, but a guideline indicat-\ning which positions should be separated and which require compensating controls when com-\nbined. The matrix is illustrative of potential segregation of duties and should not be viewed or \nused as an absolute, but rather it should be used to help identify potential conflicts so proper ques-\ntions may be asked to identify compensating controls.\nSummary\nThe first domain of the CISSP CBK is Access Control Systems and Methodology. Access con-\ntrols are central to the establishment of a secure system. They rely upon identification, authen-\ntication, authorization, and accountability. Access control is the management, administration, \nand implementation of granting or restricting subject access to objects.\nThe first step in access control is verifying the identities of subjects on the system, commonly \nknown as authentication. There are a number of methods available to authenticate subjects, \nincluding passwords and phrases, biometric scans, tokens, and tickets.\nOnce a subject is authenticated, their access must be managed (authorization) and their activ-\nities logged, so ultimately the person can be held accountable for the user account’s online \nactions.\nThere are various models for access control or authorization. These include discretionary \nand nondiscretionary access controls. 
There are at least three important subdivisions of non-\ndiscretionary access control: mandatory, role-based, and task-based access control.\n" }, { "page_number": 78, "text": "Access can be managed for an entire network at once. Such systems are known as Single Sign \nOn solutions. Remote access clients pose unique challenges to LAN security and often require \nspecialized tools such as RADIUS or TACACS.\nFinally, once all these systems are in place, they must be maintained. It does very little good \nto set up system security only to let it go stale over time. Proper role assignment and object main-\ntenance are key aspects to keeping a system secure over time.\nF I G U R E\n1 . 4\nA Segregation of Duties Control Matrix\n[Matrix not reproduced here: the same fourteen roles—Control Group, Systems Analyst, Application Programmer, Help Desk and Support Mgr., End User, Data Entry, Computer Operator, DB Administrator, Network Administrator, System Administrator, Security Administrator, Tape Librarian, Systems Programmer, and Quality Assurance—appear on both axes, and an X marks each pair of functions whose combination may create a potential control weakness.]\nX—Combination of these functions may create a potential control weakness.\n© 2005 Information Systems Audit and Control Association (ISACA). All rights reserved.\nUsed with permission.\n" }, { "page_number": 79, "text": "Exam Essentials\nUnderstand the CIA Triad.\nThe CIA Triad comprises confidentiality, integrity, and availabil-\nity. Confidentiality involves making sure that each aspect of a system is properly secured and \naccessible only by subjects who need it. Integrity assures that system objects are accurate and \nreliable. Availability ensures that the system is performing optimally and that authenticated \nsubjects can access system objects when they are needed.\nKnow the common access control techniques.\nCommon access control techniques include \ndiscretionary, mandatory, nondiscretionary, rule-based, role-based, and lattice-based. Access \ncontrols are used to manage the type and extent of access subjects have to objects, which is an \nimportant part of system security because such controls define who has access to what.\nUnderstand access control administration.\nThe secure creation of new user accounts, the ongo-\ning management and maintenance of user accounts, auditing/logging/monitoring subject activity, \nand assigning and managing subject access are important aspects of keeping a system secure. Secu-\nrity is an ongoing task, and administration is how you keep a system secure over time.\nKnow details about each of the access control models.\nThere are two primary categories of \naccess control techniques: discretionary and nondiscretionary. 
Nondiscretionary can be further \nsubdivided into specific techniques, such as mandatory, role-based, and task-based access control.\nUnderstand the processes of identification and common identification factors.\nThe processes \nof identification include subject identity claims by using a username, user ID, PIN, smart card, bio-\nmetric factors, and so on. They are important because identification is the first step in authenti-\ncating a subject’s identity and proper access rights to objects.\nUnderstand the processes of authentication and the various authentication factors.\nAuthenti-\ncation involves verifying the authentication factor provided by a subject against the authentication \nfactor stored for the claimed identity, which could include passwords, biometrics, tokens, tickets, \nSSO, and so on. In other words, the authentication process ensures that a subject is who they claim \nto be and grants object rights accordingly.\nUnderstand the processes of authorization.\nAuthorization ensures that the requested activity \nor object access is possible given the rights and privileges assigned to the authenticated identity. \nThis is important because it maintains security by providing proper access rights for subjects.\nUnderstand the strengths and weaknesses of passwords.\nOne weakness associated with passwords is that users typically choose passwords \nthat are easy to remember and are therefore easy to guess or crack. Another is that randomly generated passwords are hard to remember, thus many \nusers write them down. Passwords are easily shared and can be stolen through many means. \nAdditionally, passwords are often transmitted in cleartext or with easily broken encryption pro-\ntocols, and password databases are often stored in publicly accessible online locations. Finally, \nshort passwords can be discovered quickly in brute force attacks. On the other hand, passwords \ncan be effective if selected intelligently and managed properly. 
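Several of the password weaknesses listed above can be screened for automatically before a password is accepted. The following sketch shows a minimal password-policy check; the length threshold, the character-class rule, and the function name are hypothetical choices for illustration, not requirements from the exam.

```python
# Minimal sketch of a password-policy check reflecting the weaknesses
# discussed above. The eight-character minimum and the three-of-four
# character-class rule are hypothetical example thresholds.
import string

def password_problems(password, dictionary=()):
    """Return a list of reasons a candidate password is weak."""
    problems = []
    if len(password) < 8:
        problems.append("too short: vulnerable to brute force attacks")
    if password.lower() in (word.lower() for word in dictionary):
        problems.append("dictionary word: easy to guess or crack")
    classes = [
        any(c in string.ascii_lowercase for c in password),
        any(c in string.ascii_uppercase for c in password),
        any(c in string.digits for c in password),
        any(c in string.punctuation for c in password),
    ]
    if sum(classes) < 3:
        problems.append("needs at least three character classes")
    return problems

print(password_problems("secret", dictionary=["secret"]))
```

Run against the dictionary word "secret", the check reports all three problems; a long mixed-character pass phrase passes cleanly.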
It is important to change pass-\nwords frequently; the more often the same password is used, the more likely it will be compro-\nmised or discovered.\n" }, { "page_number": 80, "text": "Know the two access control methodologies and implementation examples.\nAccess control \nmethodologies include centralized access control, in which authorization verification is per-\nformed by a single entity within a system, and decentralized access control, in which authori-\nzation verification is performed by various entities located throughout a system. Remote \nauthentication mechanisms such as RADIUS and TACACS are implementation examples; they \nare used to centralize the authentication of remote dial-up connections.\nUnderstand the use of biometrics.\nBiometric factors are used for identification or authentica-\ntion. FRR, FAR, and CER are important aspects of biometric devices. Fingerprints, face scans, \niris scans, retina scans, palm topography, palm geography, heart/pulse pattern, voice pattern, \nsignature dynamics, and keystroke patterns are commonly used in addition to other authenti-\ncation factors, such as a password, to provide an additional method to control authentication \nof subjects.\nUnderstand Single Sign On.\nSingle Sign On (SSO) is a mechanism that allows a subject to be \nauthenticated only once on a system and be able to access resource after resource unhindered \nby repeated authentication prompts. Kerberos, SESAME, KryptoKnight, NetSP, thin clients, \ndirectory services, and scripted access are examples of SSO mechanisms.\n" }, { "page_number": 81, "text": "Review Questions\n1.\nWhat is access?\nA. Functions of an object\nB. Information flow from objects to subjects\nC. Unrestricted admittance of subjects on a system\nD. Administration of ACLs\n2.\nWhich of the following is true?\nA. A subject is always a user account.\nB. 
The subject is always the entity that provides or hosts the information or data.\nC. The subject is always the entity that receives information about or data from the object.\nD. A single entity can never change roles between subject and object.\n3.\nWhat are the elements of the CIA Triad?\nA. Confidentiality, integrity, and availability\nB. Confidentiality, interest, and accessibility\nC. Control, integrity, and authentication\nD. Calculations, interpretation, and accountability\n4.\nWhich of the following types of access control uses fences, security policies, security awareness \ntraining, and antivirus software to stop an unwanted or unauthorized activity from occurring?\nA. Preventative\nB. Detective\nC. Corrective\nD. Authoritative\n5.\n___________________ access controls are the hardware or software mechanisms used to manage \naccess to resources and systems and to provide protection for those resources and systems.\nA. Administrative\nB. Logical/technical\nC. Physical\nD. Preventative\n6.\nWhat is the first step of access control?\nA. Accountability logging\nB. ACL verification\nC. Subject authorization\nD. Subject identification\n" }, { "page_number": 82, "text": "7.\n___________________ is the process of verifying or testing the validity of a claimed identity.\nA. Identification\nB. Authentication\nC. Authorization\nD. Accountability\n8.\nWhich of the following is an example of a Type 2 authentication factor?\nA. Something you have, such as a smart card, ATM card, token device, and memory card\nB. Something you are, such as fingerprints, voice print, retina pattern, iris pattern, face shape, \npalm topology, and hand geometry\nC. Something you do, such as type a pass phrase, sign your name, and speak a sentence\nD. 
Something you know, such as a password, personal identification number (PIN), lock com-\nbination, pass phrase, mother’s maiden name, and favorite color\n9.\nWhich of the following is not a reason why using passwords alone is a poor security mechanism?\nA. When possible, users choose easy-to-remember passwords, which are therefore easy to guess \nor crack.\nB. Randomly generated passwords are hard to remember, thus many users write them down.\nC. Short passwords can be discovered quickly in brute force attacks only when used against a \nstolen password database file.\nD. Passwords can be stolen through many means, including observation, recording and play-\nback, and security database theft.\n10. Which of the following is not a valid means to improve the security offered by password \nauthentication?\nA. Enabling account lockout controls\nB. Enforcing a reasonable password policy\nC. Using password verification tools and password cracking tools against your own password \ndatabase file\nD. Allowing users to reuse the same password\n11. What can be used as an authentication factor that is a behavioral or physiological characteristic \nunique to a subject?\nA. Account ID\nB. Biometric factor\nC. Token\nD. IQ\n" }, { "page_number": 83, "text": "12. What does the Crossover Error Rate (CER) for a biometric device indicate?\nA. The sensitivity is tuned too high.\nB. The sensitivity is tuned too low.\nC. The False Rejection Rate and False Acceptance Rate are equal.\nD. The biometric device is not properly configured.\n13. Which of the following is not an example of an SSO mechanism?\nA. Kerberos\nB. KryptoKnight\nC. TACACS\nD. SESAME\n14. ___________________ access controls rely upon the use of labels.\nA. Discretionary\nB. Role-based\nC. Mandatory\nD. Nondiscretionary\n15. A network environment that uses discretionary access controls is vulnerable to which of the \nfollowing?\nA. SYN flood\nB. Impersonation\nC. 
Denial of service\nD. Birthday attack\n16. What is the most important aspect of a biometric device?\nA. Accuracy\nB. Acceptability\nC. Enrollment time\nD. Invasiveness\n17.\nWhich of the following is not an example of a deterrent access control?\nA. Encryption\nB. Auditing\nC. Awareness training\nD. Antivirus software\n" }, { "page_number": 84, "text": "18. Kerberos provides the security services of ____________________ protection for authentication \ntraffic.\nA. Availability and nonrepudiation\nB. Confidentiality and authentication\nC. Confidentiality and integrity\nD. Availability and authorization\n19. Which of the following forms of authentication provides the strongest security?\nA. Password and a PIN\nB. One-time password\nC. Pass phrase and a smart card\nD. Fingerprint\n20. Which of the following is the least acceptable form of biometric device?\nA. Iris scan\nB. Retina scan\nC. Fingerprint\nD. Facial geometry\n" }, { "page_number": 85, "text": "Answers to Review Questions\n1.\nB. The transfer of information from an object to a subject is called access.\n2.\nC. The subject is always the entity that receives information about or data from the object. The \nsubject is also the entity that alters information about or data stored within the object. The object \nis always the entity that provides or hosts the information or data. A subject can be a user, a pro-\ngram, a process, a file, a computer, a database, and so on. The roles of subject and object can \nswitch as two entities, such as a program and a database or a process and a file, communicate to \naccomplish a task.\n3.\nA. The essential security principles of confidentiality, integrity, and availability are often \nreferred to as the CIA Triad.\n4.\nA. A preventative access control is deployed to stop an unwanted or unauthorized activity from \noccurring. 
Examples of preventative access controls include fences, security policies, security \nawareness training, and antivirus software.\n5.\nB. Logical/technical access controls are the hardware or software mechanisms used to manage \naccess to resources and systems and to provide protection for those resources and systems. \nExamples of logical or technical access controls include encryption, smart cards, passwords, bio-\nmetrics, constrained interfaces, access control lists, protocols, firewalls, routers, intrusion detec-\ntion systems, and clipping levels.\n6.\nD. Access controls govern subjects’ access to objects. The first step in this process is identifying \nwho the subject is. In fact, there are several steps preceding actual object access: identification, \nauthentication, authorization, and accountability.\n7.\nB. The process of verifying or testing the validity of a claimed identity is called authentication.\n8.\nA. A Type 2 authentication factor is something you have. This could include a smart card, ATM \ncard, token device, and memory card.\n9.\nC. Brute force attacks can be used against password database files and system logon prompts.\n10. D. Preventing password reuse increases security by preventing the theft of older password data-\nbase files, which can be used against the current user passwords.\n11. B. A biometric factor is a behavioral or physiological characteristic that is unique to a subject, \nsuch as fingerprints and face scans.\n12. C. The point at which the FRR and FAR are equal is known as the Crossover Error Rate (CER). \nThe CER level is used as a standard assessment point from which to measure the performance \nof a biometric device.\n13. C. Kerberos, SESAME, and KryptoKnight are examples of SSO mechanisms. TACACS is a cen-\ntralized authentication service used for remote access clients.\n" }, { "page_number": 86, "text": "14. C. Mandatory access controls rely upon the use of labels. 
A system that employs discretionary \naccess controls allows the owner or creator of an object to control and define subject access to \nthat object. Nondiscretionary access controls are also called role-based access controls. Systems that \nemploy nondiscretionary access controls define a subject’s ability to access an object through the \nuse of subject roles or tasks.\n15. B. A discretionary access control environment controls access based on user identity. If a user \naccount is compromised and another person uses that account, they are impersonating the real \nowner of the account.\n16. A. The most important aspect of a biometric factor is its accuracy. If a biometric factor is not \naccurate, it may allow unauthorized users into a system.\n17.\nD. Antivirus software is an example of a recovery or corrective access control.\n18. C. Kerberos provides the security services of confidentiality and integrity protection for authen-\ntication traffic.\n19. C. A pass phrase and a smart card provide the strongest authentication security because it is the \nonly selection offering two-factor authentication.\n20. B. Of the options listed, retina scan is the least accepted form of biometric device because it \nrequires touching a shared eye cup and can reveal personal health issues.\n" }, { "page_number": 87, "text": "" }, { "page_number": 88, "text": "Chapter\n2\nAttacks and \nMonitoring\nTHE CISSP EXAM TOPICS COVERED IN THIS \nCHAPTER INCLUDE:\n\u0001 Monitoring\n\u0001 Intrusion Detection\n\u0001 Penetration Testing\n\u0001 Access Control Attacks\n" }, { "page_number": 89, "text": "The Access Control Systems and Methodology domain of the \nCommon Body of Knowledge (CBK) for the CISSP certification \nexam deals with topics and issues related to the monitoring, iden-\ntification, and authorization of granting or restricting user access to resources. 
Generally, access \ncontrol is any hardware, software, or organizational administrative policy or procedure that \ngrants or restricts access, monitors and records attempts to access, identifies users attempting to \naccess, and determines whether access is authorized.\nThis domain is discussed in this chapter and in the previous chapter (Chapter 1, “Account-\nability and Access Control”). Be sure to read and study the materials from both chapters to \nensure complete coverage of the essential material for the CISSP certification exam.\nMonitoring\nMonitoring is the programmatic means by which subjects are held accountable for their actions \nwhile authenticated on a system. It is also the process by which unauthorized or abnormal activ-\nities are detected on a system. Monitoring is necessary to detect malicious actions by subjects, as \nwell as to detect attempted intrusions and system failures. It can help reconstruct events, provide \nevidence for prosecution, and produce problem reports and analysis. Auditing and logging are \nusually native features of an operating system and most applications and services. Thus, config-\nuring the system to record information about specific types of events is fairly straightforward.\nUsing log files to detect problems is another matter. In most cases, when sufficient logging \nand auditing is enabled to monitor a system, so much data is collected that the important details \nget lost in the bulk. There are numerous tools to search through log files for specific events or \nID codes. The art of data reduction is crucial when working with large volumes of monitoring \ndata obtained from log files. The tools used to extract the relevant, significant, or important \ndetails from large collections of data are known as data mining tools. For true automation and \neven real-time analysis of events, a specific type of data mining tool is required—namely, an \nintrusion detection system (IDS). 
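The data-reduction step described above can be illustrated with a simple log filter that keeps only security-relevant records out of a bulk audit log. The log-line format, the field layout, and the event codes in this sketch are hypothetical, not from any particular operating system.

```python
# Sketch of log data reduction: extracting only the significant events
# from a bulk audit log. The pipe-delimited record format and the set
# of event codes of interest are hypothetical illustrations.

SIGNIFICANT = {"LOGON_FAILURE", "PRIV_ESCALATION", "POLICY_CHANGE"}

def reduce_log(lines):
    """Keep only records whose event code marks a security-relevant event."""
    hits = []
    for line in lines:
        parts = line.split("|")          # e.g. timestamp|event_code|detail
        if len(parts) == 3 and parts[1] in SIGNIFICANT:
            hits.append(line)
    return hits

audit_log = [
    "2005-01-10T09:15|LOGON_SUCCESS|alice from 10.0.0.5",
    "2005-01-10T09:16|LOGON_FAILURE|bob from 10.0.0.9",
    "2005-01-10T09:17|FILE_READ|alice read report.doc",
    "2005-01-10T09:18|PRIV_ESCALATION|bob granted admin",
]

for record in reduce_log(audit_log):
    print(record)
```

Of the four records above, only the failed logon and the privilege escalation survive the filter; an IDS automates and extends exactly this kind of selection in real time.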
See the next section for information on IDSs.\nAccountability is maintained by recording the activities of subjects and objects as well as core sys-\ntem functions that maintain the operating environment and the security mechanisms. The audit \ntrails created by recording system events to logs can be used to evaluate a system’s health and per-\nformance. System crashes may indicate faulty programs, corrupt drivers, or intrusion attempts. The \nevent logs leading up to a crash can often be used to discover the reason a system failed. Log files pro-\nvide an audit trail for re-creating a step-by-step history of an event, intrusion, or system failure.\nFor more information on configuring and administering auditing and logging, see Chapter 14, \n“Auditing and Monitoring.”\n" }, { "page_number": 90, "text": "Intrusion Detection\nAn intrusion detection system (IDS) is a product that automates the inspection of audit logs and \nreal-time system events. IDSs are primarily used to detect intrusion attempts, but they can also \nbe employed to detect system failures or rate overall performance. IDSs watch for violations of \nconfidentiality, integrity, and availability. The goal of an IDS is to provide perpetrator account-\nability for intrusion activities and provide a means for a timely and accurate response to intru-\nsions. Attacks recognized by an IDS can come from external connections (such as the Internet \nor partner networks), viruses, malicious code, trusted internal subjects attempting to perform \nunauthorized activities, and unauthorized access attempts from trusted locations. 
An IDS is \nconsidered a form of a technical detective security control.\nAn IDS can actively watch for suspicious activity, peruse audit logs, send alerts to adminis-\ntrators when specific events are discovered, lock down important system files or capabilities, \ntrack slow and fast intrusion attempts, highlight vulnerabilities, identify the intrusion’s origi-\nnation point, track down the logical or physical location of the perpetrator, terminate or inter-\nrupt attacks or intrusion attempts, and reconfigure routers and firewalls to prevent repeats of \ndiscovered attacks. IDS alerts can be sent or communicated with an on-screen notification (the \nmost common), by playing a sound, via e-mail, via pager, or by recording information in a log file.\nA response by an IDS can be active, passive, or hybrid. An active response is one that directly \naffects the malicious activity of network traffic or the host application. A passive response is one \nthat does not affect the malicious activity but records information about the issue and notifies \nthe administrator. A hybrid response is one that stops unwanted activity, records information \nabout the event, and possibly even notifies the administrator.\nGenerally, an IDS is used to detect unauthorized or malicious activity originating from inside \nor outside of your trusted network. The capability of an IDS to stop current attacks or prevent \nfuture attacks is limited. Typically, the responses an IDS can take against an attack include port \nblocking, source address blocking, and disabling all communications over a specific cable seg-\nment. Whenever an IDS discovers abnormal traffic (e.g., spoofed) or violations of its security \npolicy, filters, and rules, it records a log detail of the issue and then drops, discards, or deletes \nthe relevant packets.\nAn IDS should be considered one of the many components a well-formed security endeavor \ncomprises to protect a network. 
An IDS is a complementary security tool to a firewall. Other \nsecurity controls, such as physical restrictions and logical access controls, are necessary com-\nponents (refer to Chapter 1 for a discussion of these controls).\nIntrusion prevention requires adequate maintenance of overall system security, such as \napplying patches and setting security controls. It also involves responding to intrusions discov-\nered via an IDS by erecting barriers to prevent future occurrences of the same attack. This could \nbe as simple as updating software or reconfiguring access controls, or it could be as drastic as \nreconfiguring a firewall, removing or replacing an application or service, or redesigning an \nentire network.\nWhen an intrusion is detected, your first response should be to contain the intrusion. Intru-\nsion containment prevents additional damage to other systems but may allow the continued \ninfestation of already compromised systems. Later, once compromised systems are rebuilt from \n" }, { "page_number": 91, "text": "46\nChapter 2\n\u0002 Attacks and Monitoring\nscratch, be sure to double-check compliance with your security policy—including checking \nACLs, service configurations, and user account settings—before connecting the reestablished \nsystem to your network. You should realize that if you wipe and re-create a system, none of the \nprevious system, nor any intrusion footprints, will remain.\nIt is considered unethical and risky to actively launch counterstrikes against an \nintruder or to actively attempt to reverse-hack the intruder's computer system. \nInstead, rely upon your logging capabilities and sniffing collections to provide \nsufficient data to prosecute criminals or to simply improve the security of your \nenvironment accordingly.\nHost-Based and Network-Based IDSs\nIDS types are most commonly classified by their information source. There are two primary \ntypes of IDSs: host based and network based. 
A host-based IDS watches for questionable activ-\nity on a single computer system. A network-based IDS watches for questionable activity being \nperformed over the network medium.\nHost-Based IDS\nBecause the attention of a host-based IDS is focused on a single computer (whereas a network-\nbased IDS must monitor the activity on an entire network), it can examine events in much \ngreater detail than a network-based IDS can. A host-based IDS is able to pinpoint the files and \nprocesses compromised or employed by a malicious user to perform unauthorized activity.\nHost-based IDSs can detect anomalies undetected by network-based IDSs; however, a host-\nbased IDS cannot detect network-only attacks or attacks on other systems. Because a host-based \nIDS is installed on the computer being monitored, crackers can discover the IDS software and \ndisable it or manipulate it to hide their tracks. A host-based IDS has some difficulty with detect-\ning and tracking down denial of service (DoS) attacks, especially those of a bandwidth con-\nsumption nature. A host-based IDS also consumes resources from the computer being \nmonitored, thereby reducing the performance of that system. A host-based IDS is limited by the \nauditing capabilities of the host operating system and applications.\nHost-based IDSs are considered more costly to manage than network-based IDSs. Host-\nbased IDSs require an installation on each server that is to be monitored and demand administrative \nattention at each point of installation, while network-based IDSs usually require only a single \ninstallation point. Host-based IDSs have other disadvantages as well; for example, they cause \na significant host system performance degradation and they are easier for an intruder to dis-\ncover and disable.\nNetwork-Based IDS\nNetwork-based IDSs detect attacks or event anomalies through the capture and evaluation of \nnetwork packets. 
A single network-based IDS is capable of monitoring a large network if \ninstalled on a backbone of that network, where a majority of the network traffic occurs. Some \n" }, { "page_number": 92, "text": "versions of network-based IDSs use remote agents to collect data from various subnets and \nreport to a central management console. Network-based IDSs are installed onto single-purpose \ncomputers. This allows them to be hardened against attack, reduces the number of vulnera-\nbilities to the IDS, and allows the IDS to operate in stealth mode. In stealth mode, the IDS is \ninvisible to the network and intruders would have to know of its exact location and system \nidentification to discover it. A network-based IDS has little negative effect on overall network \nperformance, and because it is deployed on a single-purpose system, it doesn’t adversely affect \nthe performance of any other computer.\nOn networks with extremely large volumes of traffic, a network-based IDS may be unable to \nkeep up with the flow of data. This could cause the IDS to miss an attack that occurred during \nhigh traffic levels. Network-based IDSs do not usually work well on switched networks, espe-\ncially if the switches do not have a monitoring port. Network-based IDSs are unable to monitor the \ncontent of traffic if it is encrypted during transmission over the network medium. They are usu-\nally able to detect the initiation of an attack or the ongoing attempts to perpetrate an attack \n(including DoS), but they are unable to provide information about whether an attack was suc-\ncessful or which specific systems, user accounts, files, or applications were affected.
However, because most attacks are launched by malicious individuals whose \nidentity is masked through spoofing, this is not usually a fully reliable system capability.\nAn IDS should not be viewed as a single universal security solution. It is only part of a multi-\nfaceted security solution for an environment. Although an IDS can offer numerous benefits, there \nare several drawbacks to consider. A host-based IDS may not be able to examine every detail if \nthe host system is overworked and insufficient execution time is granted to the IDS processes. A \nnetwork-based IDS can suffer the same problem if the network traffic load is high and it is unable \nto process packets efficiently and swiftly. A network-based IDS is also unable to examine the con-\ntents of encrypted traffic. A network-based IDS is not an effective network-wide solution on \nswitched networks because it is unable to view all network traffic if it is not placed on a mirror \nport (i.e., a port specifically configured to send all data to the IDS). An IDS may initially produce \nnumerous false alarms and requires significant management on an ongoing basis.\nA switched network is often a preventative measure against rogue sniffers. \nJust like an IDS off of a switch, if the switch is not configured to mirror all traffic, \nthen only a small portion of network traffic will be accessible. However, numer-\nous attacks, such as MAC or ARP flooding, can cause a switch to default into \nhub mode, thus granting the attacker access to all data (as well as greatly \nreducing the efficiency and throughput of your network).\nKnowledge-Based and Behavior-Based Detection\nThere are two common means by which an IDS can detect malicious events. One way is to use \nknowledge-based detection. This is also called signature-based detection or pattern-matching \n" }, { "page_number": 93, "text": "detection. 
Basically, the IDS uses a signature database and attempts to match all monitored events to it. If events match, then the IDS assumes that an attack is taking place (or has taken place). The IDS vendor develops the signature database by examining and inspecting numerous intrusions on various systems. What results is a description, or signature, of common attack methods. An IDS using knowledge-based detection functions in much the same way as many antivirus applications.

The primary drawback to a knowledge-based IDS is that it is effective only against known attack methods. New attacks or slightly modified versions of known attacks often go unrecognized by the IDS. This means that the knowledge-based IDS lacks a learning model; that is, it is unable to recognize new attack patterns as they occur. Thus, this type of IDS is only as useful as its signature file is correct and up-to-date. Keeping the signature file current is an important aspect in maintaining the best performance from a knowledge-based IDS.

The second detection type is behavior-based detection. A behavior-based IDS is also called statistical intrusion detection, anomaly detection, or heuristics-based detection. Basically, behavior-based detection learns about the normal activities and events on your system through watching and learning. Once it has accumulated enough data about normal activity, it can detect abnormal and possibly malicious activities and events.

A behavior-based IDS can be labeled an expert system or a pseudo artificial intelligence system because it can learn and make assumptions about events. In other words, the IDS can act like a human expert by evaluating current events against known events. The more information provided to a behavior-based IDS about normal activities and events, the more accurate its anomaly detection becomes.

The primary drawback of a behavior-based IDS is that it produces many false alarms.
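The two detection models can be contrasted with a small sketch. The signatures, event names, and threshold below are invented for illustration; production IDS engines are vastly more elaborate:

```python
# Illustrative contrast of the two IDS detection models (not a real IDS).
import statistics

# Knowledge-based: match observed events against a signature database.
SIGNATURES = {"multiple failed logons from one source", "port scan sweep"}

def knowledge_based_alert(event):
    """Alert only when the event matches a known attack signature."""
    return event in SIGNATURES

# Behavior-based: learn a baseline, then flag statistical outliers.
def behavior_based_alert(history, observed, tolerance=3.0):
    """Alert when `observed` deviates from the learned baseline by more
    than `tolerance` standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(observed - mean) > tolerance * stdev

logons_per_hour = [12, 15, 11, 14, 13, 12, 16]    # learned "normal" activity
print(knowledge_based_alert("port scan sweep"))   # True: known signature
print(behavior_based_alert(logons_per_hour, 90))  # True: anomalous spike
print(behavior_based_alert(logons_per_hour, 14))  # False: within baseline
```

The knowledge-based check recognizes only what is already in its database, while the behavior-based check can flag a never-before-seen spike, mirroring the trade-offs discussed above.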
The normal pattern of user and system activity can vary widely, and thus establishing a definition of normal or acceptable activity can be difficult. The more a security detection system creates false alarms, the less likely security administrators will heed its warnings, just as in the fable of the boy who cried wolf. Over time, the IDS can become more efficient and accurate, but the learning process takes considerable time. Using known behaviors, activity statistics, and heuristic evaluation of current versus previous events, a behavior-based IDS can detect unforeseen, new, and unknown vulnerabilities, attacks, and intrusion methods.

Although knowledge-based and behavior-based detection methods do have their differences, both employ an alarm-signal system. When an intrusion is recognized or detected, an alarm is triggered. The alarm system can notify administrators via e-mail or pop-up messages or by executing scripts to send pager messages. In addition to administrator notification, the alarm system can record alert messages in log and audit files as well as generate violation reports detailing the detected intrusions and discoveries of vulnerabilities.

IDS-Related Tools

Intrusion detection systems are often deployed in concert with several other components. These IDS-related tools expand the usefulness and capabilities of IDSs and make them more efficient and less prone to false positives. These tools include honey pots, padded cells, and vulnerability scanners.

Honey pots are individual computers or entire networks created to serve as a snare for intruders. They look and act like legitimate networks, but they are 100 percent fake. Honey pots tempt intruders by containing unpatched and unprotected security vulnerabilities as well as by hosting attractive and tantalizing but faux data.
They are designed to grab an intruder's attention and direct them into the restricted playground while keeping them away from the legitimate network and confidential resources. Legitimate users never enter the honey pot; there is no real data or useful resources in the honey pot system. Thus, when honey pot access is detected, it is most likely an unauthorized intruder. Honey pots are deployed to keep an intruder logged on and performing their malicious activities long enough for the automated IDS to detect the intrusion and gather as much information about the intruder as possible. The longer the honey pot retains the attention of the intruder, the more time an administrator has to investigate the attack and potentially identify the person perpetrating the intrusion.

The use of honey pots raises the issue of enticement versus entrapment. A honey pot can be legally used as an enticement device if the intruder discovers it through no outward efforts of the honey pot owner. Placing a system on the Internet with open security vulnerabilities and active services with known exploits is enticement. Entrapment occurs when the honey pot owner actively solicits visitors to access the site and then charges them with unauthorized intrusion. It is considered to be entrapment when you trick or encourage a perpetrator into performing an illegal or unauthorized action. Enticement occurs when the opportunity for illegal or unauthorized actions is provided but the perpetrator makes their own decision to perform the action.

A padded cell system is similar to a honey pot, but it performs intrusion isolation using a different approach. When an intruder is detected by an IDS, the intruder is automatically transferred to a padded cell. The padded cell has the look and layout of the actual network, but within the padded cell the intruder can neither perform malicious activities nor access any confidential data.
A padded cell is a simulated environment that offers fake data to retain an intruder's interest. The transfer of the intruder into a padded cell is performed without informing the intruder that the change has occurred. Like a honey pot, the padded cell system is heavily monitored and used by administrators to gather evidence for tracing and possible prosecution.

Another type of IDS-related tool is a vulnerability scanner. Vulnerability scanners are used to test a system for known security vulnerabilities and weaknesses. They are used to generate reports that indicate the areas or aspects of the system that need to be managed to improve security. The reports may recommend applying patches or making specific configuration or security setting changes to improve or impose security. A vulnerability scanner is only as useful as its database of security issues. Thus, the database must be updated from the vendor often to provide a useful audit of your system. The use of vulnerability scanners in cooperation with IDSs may help reduce false positives by the IDS and keep the total number of overall intrusions or security violations to a minimum. When discovered vulnerabilities are patched quickly and often, the system provides a more secure environment.

Penetration Testing

In security terms, a penetration occurs when an attack is successful and an intruder is able to breach the perimeter of your environment. The breach can be as small as reading a few bits of data from your network or as big as logging in as a user with unrestricted privileges. One of the primary goals of security is to prevent penetrations.

One common method to test the strength of your security measures is to perform penetration testing. Penetration testing is a vigorous attempt to break into your protected network using any means necessary.
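A trivial example of the sort of utility a penetration tester might script by hand is a TCP connect scanner. This sketch is illustrative only, and, as the text stresses below, should be run solely against hosts you are authorized to test:

```python
# Minimal TCP connect scanner of the kind used in manual penetration
# testing. Scan only hosts you have written authorization to test.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

print(scan_ports("127.0.0.1", [22, 80, 443]))
```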
It is common for organizations to hire external consultants to perform the penetration testing so the testers are not privy to confidential elements of the security configuration, network design, and other internal secrets.

Penetration testing seeks to find any and all weaknesses in your existing security perimeter. Once a weakness is discovered, countermeasures can be selected and deployed to improve the security of the environment. One significant difference between penetration testing and actual attacking is that once a vulnerability is discovered, the intrusion attempt ceases before the vulnerability is actually exploited and causes system damage.

Penetration testing can be performed using automated attack tools or suites or performed manually with common network utilities and scripting. Automated attack tools range from professional vulnerability scanners to wild, underground cracker/hacker tools discovered on the Internet. Tools are also often used for penetration testing performed manually, but much more onus is placed on knowing how to perpetrate an attack.

Penetration testing should be performed only with the consent and knowledge of the management staff. Performing unapproved security testing could result in productivity loss, trigger emergency response teams, or even cost you your job.

Regularly staged penetration attempts are a good way to accurately judge the security mechanisms deployed by an organization. Penetration testing can also reveal areas where patches or security settings are insufficient and where new vulnerabilities have developed. To evaluate your system, benchmarking and testing tools are available for download at www.cisecurity.org.

Penetration testing is discussed further in Chapter 14.

Methods of Attacks

As discussed in Chapter 1, one of the goals of access control is to prevent unauthorized access to objects.
This includes access into a system (a network, a service, a communications link, a computer, etc.) or access to data. In addition to controlling access, security is also concerned with preventing unauthorized alteration and disclosure and providing consistent availability (remember the CIA Triad from Chapter 1).

However, malicious entities are focused on violating the security perimeter of a system to obtain access to data, alter or destroy data, and inhibit valid access to data and resources. The actual means by which attacks are perpetrated vary greatly. Some are extremely complex and require detailed knowledge of the victimized systems and programming techniques, whereas others are extremely simple to execute and require little more than an IP address and the ability to manipulate a few tools or scripts. But even though there are many different kinds of attacks, they can be generally grouped into a handful of classifications or categories.

These are the common or well-known classes of attacks or attack methodologies:

• Brute force and dictionary
• Denial of service
• Spoofing
• Man-in-the-middle attacks
• Spamming
• Sniffers
• Crackers

Brute Force and Dictionary Attacks

Brute force and dictionary attacks are often discussed together because they are waged against the same entity: passwords. Either type of attack can be waged against a password database file or against an active logon prompt.

A brute force attack is an attempt to discover passwords for user accounts by systematically attempting every possible combination of letters, numbers, and symbols. With the speed of modern computers and the ability to employ distributed computing, brute force attacks are becoming successful even against strong passwords. With enough time, all passwords can be discovered using a brute force attack method.
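The keyspace arithmetic behind this claim is easy to demonstrate. The guess rate below is an assumed figure for illustration only; real cracking speed depends heavily on the hashing algorithm and hardware:

```python
# Back-of-the-envelope arithmetic behind the "longer is stronger" rule.
# The guess rate is an assumed figure for illustration only.
GUESSES_PER_SECOND = 1_000_000_000  # hypothetical cracking rig

def keyspace(charset_size, length):
    """Number of candidate passwords of exactly `length` characters."""
    return charset_size ** length

def worst_case_days(charset_size, length):
    """Days to exhaust the keyspace at the assumed guess rate."""
    return keyspace(charset_size, length) / GUESSES_PER_SECOND / 86_400

# 95 printable ASCII characters
print(f"8 characters:  {worst_case_days(95, 8):.1f} days")
print(f"12 characters: {worst_case_days(95, 12):.3g} days")
```

Each additional character multiplies the keyspace by the size of the character set, which is why the cost of an exhaustive attack grows so quickly with password length.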
Most passwords of 14 characters or less can be discovered within 7 days on a fast system using a brute force attack program against a stolen password database file (the actual time it takes to discover passwords is dependent upon the encryption algorithm used to encrypt them).

The longer the password (or the greater the number of keys in an algorithm's key space), the more costly and time consuming a brute force attack becomes. When the number of possibilities is increased, the cost of performing an exhaustive attack increases as well. In other words, the longer the password, the more secure against brute force attacks it becomes.

A dictionary attack is an attempt to discover passwords by attempting to use every possible password from a predefined list of common or expected passwords. This type of attack is named such because the possible password list is so long it is as if you are using the entire dictionary one word at a time to discover passwords.

Password attacks employ a specific cryptographic attack method known as the birthday attack (see Chapter 10, “PKI and Cryptographic Applications”). This attack can also be called reverse hash matching or the exploitation of collision. Basically, the attack exploits the fact that if two messages are hashed and the hash values are the same, then the two messages are probably the same. A way of expressing this in mathematical or cryptographic notation is H(M)=H(M'). Passwords are stored in an accounts database file on secured systems. However, instead of being stored as plain text, passwords are hashed and only their hash values are actually stored. This provides a reasonable level of protection. However, using reverse hash matching, a password cracker tool looks for possible passwords (through either brute force or dictionary methods) that have the same hash value as a value stored in the accounts database file.
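Reverse hash matching can be sketched as a short dictionary-attack loop. The word list and "stolen" hash below are fabricated for the example, and real password stores add per-user salts precisely to frustrate this kind of precomputed matching:

```python
# Sketch of reverse hash matching (dictionary variant). The word list and
# stolen hash are fabricated; real systems salt hashes to defeat this.
import hashlib

def sha256_hex(password):
    """Hash a candidate password the way the accounts database might."""
    return hashlib.sha256(password.encode()).hexdigest()

stolen_hash = sha256_hex("letmein")   # stands in for a stolen hash value
wordlist = ["password", "123456", "letmein", "qwerty"]

def crack(target_hash, candidates):
    """Return the candidate whose hash matches, i.e., H(M) == H(M')."""
    for word in candidates:
        if sha256_hex(word) == target_hash:
            return word
    return None

print(crack(stolen_hash, wordlist))  # letmein
```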
When a hash value match is discovered, then the tool is said to have cracked the password.

Combinations of these two password attack methodologies can be used as well. For example, a brute force attack could use a dictionary list as the source of its guesswork.

Dictionary attacks are often successful due to the predictability of human nature to select passwords based on personal experiences. Unfortunately, those personal experiences are often broadcast to the world around you simply by the way you live and act on a daily basis. If you are a sports fan, your password might be based on a player's name or a hit record. If you have children, your password might be based on their names or birth dates. If you work in a technical industry, your password might be based on industry acronyms or product names. The more data about a victim learned through intelligence gathering, dumpster diving, and social engineering, the more successful a custom dictionary list will be.

Protecting passwords from brute force and dictionary attacks requires numerous security precautions and rigid adherence to a strong security policy. First, physical access to systems must be controlled. If a malicious entity can gain physical access to an authentication server, they can often steal the password file within seconds. Once a password file is stolen, all passwords should be considered compromised.

Second, tightly control and monitor electronic access to password files. End users and non–account administrators have no need to access the password database file for normal daily work tasks. If you discover an unauthorized access to the database file, investigate immediately.
If you cannot determine that a valid access occurred, then consider all passwords compromised.

Third, craft a password policy that programmatically enforces strong passwords and prescribes means by which end users can create stronger passwords. The stronger and longer the password, the longer it will take for it to be discovered in a brute force attack. However, with enough time, all passwords can be discovered via brute force methods. Thus, changing passwords regularly is required to maintain security. Static passwords older than 30 days should be considered compromised even if no other aspect of a security breach has been discovered.

Fourth, deploy two-factor authentication, such as using biometrics or token devices. If passwords are not the only means used to protect the security of a network, their compromise will not automatically result in a system breach.

Fifth, use account lockout controls to prevent brute force and dictionary attacks against logon prompts. For those systems and services that don't support account lockout controls, such as most FTP servers, employ extensive logging and an IDS to look for attempted fast and slow password attacks.

Sixth, encrypt password files with the strongest encryption available for your OS. Maintain rigid control over all media that have a copy of the password database file, such as backup tapes and some types of boot or repair disks.

Passwords are a poor security mechanism when used as the sole deterrent against unauthorized access. Brute force and dictionary attacks show that passwords alone offer little more than a temporary blockade.

Denial of Service

Denial of service (DoS) attacks are attacks that prevent the system from processing or responding to legitimate traffic or requests for resources and objects. The most common form of denial of service attack is transmitting so many data packets to a server that it cannot process them all.
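The account lockout control named in the fifth precaution above can be sketched as follows. The threshold and cooldown values are illustrative, not prescriptive, and a production implementation would persist this state rather than hold it in memory:

```python
# Sketch of an account lockout control: after a set number of failed
# logons, further attempts are refused for a cooldown period.
# Threshold and cooldown values are illustrative.
import time

MAX_FAILURES = 3
LOCKOUT_SECONDS = 900

failures = {}   # username -> (failure count, time of last failure)

def record_failure(user):
    """Register one failed logon attempt for `user`."""
    count, _ = failures.get(user, (0, 0.0))
    failures[user] = (count + 1, time.monotonic())

def is_locked_out(user):
    """True while the user is inside the lockout window."""
    count, last = failures.get(user, (0, 0.0))
    if count < MAX_FAILURES:
        return False
    return (time.monotonic() - last) < LOCKOUT_SECONDS

for _ in range(3):
    record_failure("alice")    # simulated brute force attempts
print(is_locked_out("alice"))  # True
print(is_locked_out("bob"))    # False
```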
Other forms of denial of service attacks focus on the exploitation of a known fault or vulnerability in an operating system, service, or application. Exploiting the fault often results in a system crash or 100 percent CPU utilization. No matter what the actual attack consists of, any attack that renders the victim unable to perform normal activities can be considered a denial of service attack. Denial of service attacks can result in system crashes, system reboots, data corruption, blockage of services, and more.

Unfortunately, denial of service attacks based on flooding a server with data (i.e., sending sufficient traffic to a victim to cause a DoS) are a way of life on the Internet. In fact, there are no known means by which denial of service flood attacks in general can be prevented. Furthermore, due to the ability to spoof packets or exploit legitimate Internet services, it is often impossible to trace the actual origin of an attack and apprehend the culprit.

There are several types of DoS flood attacks. The first, or original, type of attack employed a single attacking system flooding a single victim with a steady stream of packets. Those packets could be valid requests that were never completed or malformed or fragmented packets that consume the attention of the victimized system. This simple form of DoS is easy to terminate just by blocking packets from the source IP address.

Another form of attack is called the distributed denial of service (DDoS). A distributed denial of service occurs when the attacker compromises several systems and uses them as launching platforms against one or more victims. The compromised systems used in the attack are often called slaves or zombies. A DDoS attack results in the victims being flooded with data from numerous sources. DDoS attacks can be stopped by blocking packets from the compromised systems.
But this can also result in blocking legitimate traffic because the sources of the flood packets are victims themselves and not the original perpetrator of the attack. These types of attacks are labeled as distributed because numerous systems are involved in the propagation of the attack against the victim.

A more recent form of DoS, called a distributed reflective denial of service (DRDoS), has been discovered. DRDoS attacks take advantage of the normal operation mechanisms of key Internet services, such as DNS and router update protocols. DRDoS attacks function by sending numerous update, session, or control packets to various Internet service servers or routers with a spoofed source address of the intended victim. Usually these servers or routers are part of the high-speed, high-volume Internet backbone trunks. What results is a flood of update packets, session acknowledgment responses, or error messages sent to the victim. A DRDoS attack can result in so much traffic that upstream systems are adversely affected by the sheer volume of data focused on the victim. This type of attack is called a reflective attack because the high-speed backbone systems reflect the attack to the victim. Unfortunately, these types of attacks cannot be prevented because they exploit normal functions of the systems. Blocking packets from these key Internet systems will effectively cut the victim off from a significant section of the Internet.

Not all instances of DoS are the result of a malicious attack. Errors in coding operating systems, services, and applications have resulted in DoS conditions. For example, a process failing to release control of the CPU or a service consuming system resources out of proportion to the service requests it is handling can cause DoS conditions.
Most vendors quickly release patches to correct these self-inflicted DoS conditions, so it is important to stay informed.

There have been many forms of DoS attacks committed over the Internet. Some of the more popular ones (“popular” meaning widespread due to affecting many systems or well known due to media hype) are discussed in the remainder of this section.

A SYN flood attack is waged by breaking the standard three-way handshake used by TCP/IP to initiate communication sessions. Normally, a client sends a SYN packet to a server, the server responds with a SYN/ACK packet to the client, and the client then responds with an ACK packet back to the server. This three-way handshake establishes a communication session that is used for data transfer until the session is terminated (using a three-way handshake with FIN and ACK packets). A SYN flood occurs when numerous SYN packets are sent to a server but the sender never replies to the server’s SYN/ACK packets with the final ACK.

A TCP session can also be terminated with an RST (reset) packet.

In addition, the transmitted SYN packets usually have a spoofed source address so the SYN/ACK response is sent somewhere other than to the actual originator of the packets. The server waits for the client’s ACK packet, often for several seconds, holding open a session and consuming system resources. If a significant number of sessions are held open (e.g., through the receipt of a flood of SYN packets), this results in a DoS. The server can be easily overtaxed by keeping sessions that are never finalized open, thus causing a failure. That failure can be as simple as being unable to respond to legitimate requests for communications or as serious as a frozen or crashed system.

One countermeasure to SYN flood attacks is increasing the number of connections a server can support.
However, this usually requires additional hardware resources (memory, CPU speed, etc.) and may not be possible for all operating systems or network services. A more useful countermeasure is to reduce the timeout period for waiting for the final ACK packet. However, this can also result in failed sessions from clients connected over slower links or can be hindered by intermittent Internet traffic. Network-based IDSs may offer some protection against sustained SYN flood attacks by noticing that numerous SYN packets originate from one or only a few locations, resulting in incomplete sessions. An IDS could warn of the attack or dynamically block flooding attempts.

A Smurf attack occurs when an amplifying server or network is used to flood a victim with useless data. An amplifying server or network is any system that generates multiple response packets, such as ICMP ECHO packets or special UDP packets, from a single submitted packet. One common attack is to send a message to the broadcast address of a subnet or network so that every node on the network produces one or more response packets. The attacker sends information request packets with the victim’s spoofed source address to the amplification system. Thus, all of the response packets are sent to the victim. If the amplification network is capable of producing sufficient response packet traffic, the victim’s system will experience a DoS. Figure 2.1 shows the basic elements of a Smurf attack. The attacker sends multiple ICMP PING packets with a source address spoofed as the victim (V) and a destination address that is the same as the broadcast address of the amplification network (AN:B). The amplification network responds with multiplied volumes of echo packets to the victim, thus fully consuming the victim’s connection bandwidth. Another DoS attack similar to Smurf is called Fraggle.
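Returning briefly to the SYN flood countermeasures above, the IDS heuristic of watching for many half-open sessions from a few sources might be sketched like this (the threshold is invented for illustration; real sensors work from live packet captures):

```python
# Heuristic sketch of how a network IDS might flag a sustained SYN flood:
# count half-open handshakes per source and alert past a threshold.
from collections import Counter

HALF_OPEN_THRESHOLD = 100   # illustrative value

half_open = Counter()   # source IP -> SYNs without a completing ACK

def observe_syn(src_ip):
    half_open[src_ip] += 1

def observe_ack(src_ip):
    if half_open[src_ip] > 0:
        half_open[src_ip] -= 1   # handshake completed normally

def flooding_sources():
    """Sources holding suspiciously many half-open sessions."""
    return [ip for ip, n in half_open.items() if n > HALF_OPEN_THRESHOLD]

for _ in range(150):
    observe_syn("203.0.113.7")   # never completes the handshake
observe_syn("198.51.100.2")
observe_ack("198.51.100.2")      # legitimate client finishes

print(flooding_sources())  # ['203.0.113.7']
```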
Fraggle attacks employ spoofed UDP packets rather than ICMP packets.

Countermeasures for Smurf attacks include disabling directed broadcasts on all network border routers and configuring all systems to drop ICMP ECHO packets. An IDS may be able to detect this type of attack, but there are no means to prevent the attack other than blocking the addresses of the amplification network. This tactic is problematic because the amplification network is usually also a victim.

FIGURE 2.1 A Smurf attack

A ping of death attack employs an oversized ping packet. Using special tools, an attacker can send numerous oversized ping packets to a victim. In many cases, when the victimized system attempts to process the packets, an error occurs, causing the system to freeze, crash, or reboot. The ping of death is more of a buffer overflow attack, but because it often results in a downed server, it is considered a DoS attack. Countermeasures to the ping of death attack include keeping up-to-date with OS and software patches, properly coding in-house applications to prevent buffer overflows, avoiding running code with system- or root-level privileges, and blocking ping packets at border routers/firewalls.

A WinNuke attack is a specialized assault against Windows 95 systems. Out-of-band TCP data is sent to a victim’s system, which causes the OS to freeze. Countermeasures for this attack consist of updating Windows 95 with the appropriate patch or changing to a different OS.

A stream attack occurs when a large number of packets are sent to numerous ports on the victim system using random source and sequence numbers. The processing performed by the victim system attempting to make sense of the data will result in a DoS. Countermeasures include patching the system and using an IDS for dynamic blocking.

A teardrop attack occurs when an attacker exploits a bug in operating systems.
The bug exists in the routines used to reassemble (i.e., resequence) fragmented packets. An attacker sends numerous specially formatted fragmented packets to the victim, which causes the system to freeze or crash. Countermeasures for this attack include patching the OS and deploying an IDS for detection and dynamic blocking.

A land attack occurs when the attacker sends numerous SYN packets to a victim and the SYN packets have been spoofed to use the same source and destination IP address and port number as the victim. This causes the system to think it sent a TCP/IP session-opening packet to itself, which causes a system failure and usually results in a system freeze, crash, or reboot. Countermeasures for this attack include patching the OS and deploying an IDS for detection and dynamic blocking.

Spoofing Attacks

Spoofing is the art of pretending to be something other than what you are. Spoofing attacks consist of replacing the valid source and/or destination IP address and node numbers with false ones. Spoofing is involved in most attacks because it grants attackers the ability to hide their identity through misdirection. Spoofing is employed when an intruder uses a stolen username and password to gain entry, when an attacker changes the source address of a malicious packet, or when an attacker assumes the identity of a client to fool a server into transmitting controlled data.

Two specific types of spoofing attacks are impersonation and masquerading. Ultimately, these attacks are the same: someone is able to gain access to a secured system by pretending to be someone else. These attacks often result in an unauthorized person gaining access to a system through a valid user account that has been compromised.
Impersonation is considered a more active attack because it requires the capture of authentication traffic and the replay of that traffic in such a way as to gain access to the system. Masquerading is considered a more passive attack because the attacker uses previously stolen account credentials to log on to a secured system.

Countermeasures to spoofing attacks include patching the OS and software, enabling source/destination verification on routers, and employing an IDS to detect and block attacks. As a general rule of thumb, whenever your system detects spoofed information, it should record relevant data elements into a log file; then the system should drop or delete the spoofed traffic itself.

Man-in-the-Middle Attacks

A man-in-the-middle attack occurs when a malicious user is able to gain a position between the two endpoints of a communications link. There are two types of man-in-the-middle attacks. One involves copying or sniffing the traffic between two parties; this is basically a sniffer attack (see the next section). The other involves attackers positioning themselves in the line of communication where they act as a store-and-forward or proxy mechanism (see Figure 2.2). The attacker functions as the receiver for data transmitted by the client and the transmitter for data sent to the server. The attacker is invisible to both ends of the communication link and is able to alter the content or flow of traffic. Through this type of attack, the attacker can collect logon credentials or sensitive data as well as change the content of the messages exchanged between the two endpoints.

To perform this type of attack, the attacker must often alter routing information and DNS values, steal IP addresses, or defraud ARP lookups to impersonate the server from the perspective of the client and to impersonate the client from the perspective of the server.

An offshoot of a man-in-the-middle attack is known as a hijack attack.
In this type of attack, a malicious user is positioned between a client and server and then interrupts the session and takes it over. Often, the malicious user impersonates the client to extract data from the server. The server is unaware that any change in the communication partner has occurred. The client is aware that communications with the server have ceased, but no indication as to why the communications were terminated is available.

FIGURE 2.2 A man-in-the-middle attack

Another type of attack, a replay attack (also known as a playback attack), is similar to hijacking. A malicious user records the traffic between a client and server; then the packets sent from the client to the server are played back or retransmitted to the server with slight variations of the time stamp and source IP address (i.e., spoofing). In some cases, this allows the malicious user to restart an old communication link with a server. Once the communication session is reopened, the malicious user can attempt to obtain data or additional access. The captured traffic is often authentication traffic (i.e., that which includes logon credentials, such as username and password), but it could also be service access traffic or message control traffic. Replay attacks can be prevented by employing complex sequencing rules and time stamps to prevent retransmitted packets from being accepted as valid.

Countermeasures to these types of attacks require improvement in the session establishment, identification, and authentication processes. Some man-in-the-middle attacks are thwarted through patching the OS and software. An IDS cannot usually detect a man-in-the-middle or hijack attack, but it can often detect the abnormal activities occurring via “secured” communication links.
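The time-stamp and sequencing defense against replays can be sketched as follows. The field names and window size are my own assumptions for the example; real protocols bind these checks to authenticated message contents:

```python
# Sketch of replay rejection: accept a message only if its sequence
# number advances and its timestamp is inside a freshness window.
# Field names and window size are illustrative assumptions.
import time

WINDOW_SECONDS = 30
last_sequence = {}   # peer -> highest sequence number accepted

def accept_message(peer, sequence, timestamp, now=None):
    """Return True if the message is fresh and in order."""
    now = time.time() if now is None else now
    if abs(now - timestamp) > WINDOW_SECONDS:
        return False                       # stale: likely a replay
    if sequence <= last_sequence.get(peer, -1):
        return False                       # repeated or out-of-order
    last_sequence[peer] = sequence
    return True

now = time.time()
print(accept_message("client-a", 1, now, now))        # True
print(accept_message("client-a", 1, now, now))        # False (replayed)
print(accept_message("client-a", 2, now - 120, now))  # False (stale)
```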
Operating systems and many IDSs can often detect and block replay attacks.

Sniffer Attacks

A sniffer attack (also known as a snooping attack) is any activity that results in a malicious user obtaining information about a network or the traffic over that network. A sniffer is often a packet-capturing program that duplicates the contents of packets traveling over the network medium into a file. Sniffer attacks often focus on the initial connections between clients and servers to obtain logon credentials (e.g., usernames and passwords), secret keys, and so on. When performed properly, sniffing attacks are invisible to all other entities on the network and often precede spoofing or hijack attacks. A replay attack (discussed in the preceding section) is a type of sniffer attack.

Countermeasures to prevent or stop sniffing attacks include improved physical access control, active monitoring for sniffing signatures (such as looking for packet delay, additional routing hops, or lost packets, which some IDSs can do), and the use of encrypted traffic over internal and external network connections.

Spamming Attacks

Spam is the term describing unwanted e-mail, newsgroup, or discussion forum messages. Spam can be as innocuous as an advertisement from a well-meaning vendor or as malignant as floods of unrequested messages with viruses or Trojan horses attached. Spam is usually not a security threat but rather a type of denial of service attack. As the level of spam increases, locating or accessing legitimate messages can be difficult. In addition to the nuisance value, spam consumes a significant portion of Internet resources (in the form of bandwidth and CPU processing), resulting in overall slower Internet performance and lower bandwidth availability for everyone.

Spamming attacks are directed floods of unwanted messages to a victim's e-mail inbox or other messaging system.
Such attacks cause DoS issues by filling up storage space and preventing legitimate messages from being delivered. In extreme cases, spamming attacks can cause system freezes or crashes and interrupt the activity of other users on the same subnet or ISP.

Spam attack countermeasures include using e-mail filters, e-mail proxies, and IDSs to detect, track, and terminate spam flood attempts.

Crackers

Crackers are malicious users intent on waging an attack against a person or system. Crackers may be motivated by greed, power, or recognition. Their actions can result in stolen property (data, ideas, etc.), disabled systems, compromised security, negative public opinion, loss of market share, reduced profitability, and lost productivity.

A term commonly confused with crackers is hackers, who are technology enthusiasts with no malicious intent. Many authors and the media often use the term hacker when they are actually discussing issues related to crackers.

Thwarting a cracker's attempts to breach your security or perpetrate DoS attacks requires vigilant effort to keep systems patched and properly configured. IDSs and honey pot systems often offer the means to detect and gather the evidence needed to prosecute crackers once they have breached your controlled perimeter.

Access Control Compensations

Access control is used to regulate or specify which objects a subject can access and what type of access is allowed or denied. There are numerous attacks designed to bypass or subvert access control; these are discussed in the previous sections. In addition to the specific countermeasures for each of these attacks, there are some measures that can be used to help compensate for access control violations.
A compensation measure is not a direct prevention of a problem but rather a means by which you can design resiliency into your environment to support a quick recovery or response.

Backups are the best means of compensating for access control violations. With reliable backups and a mechanism to restore data, any corruption or file-based asset loss can be repaired, corrected, or restored promptly. RAID technology can provide the fault tolerance needed for quick recovery in the event of a device failure or severe access violation.

In general, avoiding single points of failure and deploying fault-tolerant systems can help ensure that the loss of use or control over a single system, device, or asset does not directly lead to the compromise or failure of your entire network environment. Fault tolerance countermeasures are designed to combat threats to design reliability. Having backup communication routes, mirrored servers, clustered systems, failover systems, and so on can provide instant automatic or quick manual recovery in the event of an access control violation.

Your business continuity plan should include procedures for dealing with access control violations that threaten the stability of your mission-critical processes. Likewise, your insurance coverage should include the categories of assets for which you may require compensation in the event of severe access control violations.

Summary

Managing a system's access control involves a thorough understanding of system monitoring and common forms of malicious attacks. Monitoring a system provides the basis for accountability of authenticated users. Audit trails and logging files provide details about valid and unauthorized activities as well as system stability and performance.
The use of an IDS can sim-\nplify the process of examining the copious amount of data gathered through monitoring.\nThere are two types of IDSs: host based and network based. A host-based IDS is useful for \ndetecting specific intrusions on single systems. A network-based IDS is useful for detecting overall \naberrant network activity. There are two types of detection methods employed by IDSs: knowl-\nedge based and behavior based. A knowledge-based IDS uses a database of attack signatures to \ndetect intrusion attempts. However, it fails to recognize new attack methods. A behavior-based \nIDS uses learned patterns of activity to detect abnormal events, but it produces numerous false \npositives until it has gained sufficient knowledge about the system it is monitoring.\nHoney pots and padded cells are useful tools for preventing malicious activity from occurring \non the actual network while enticing the intruder to remain long enough to gather evidence for \nprosecution.\nVulnerability scanners are signature-based detection tools that scan a system for a list of \nknown vulnerabilities. These tools produce reports indicating the discovered vulnerabilities and \nprovide recommendations on improving system security.\nPenetration testing is a useful mechanism for testing the strength and effectiveness of \ndeployed security measures and an organization’s security policy. Be sure to obtain manage-\nment approval before performing a penetration test.\nThere are numerous methods of attacks that intruders perpetrate against systems. Some of \nthe more common attacks include brute force, dictionary, denial of service, spoofing, man-in-\nthe-middle, spamming, and sniffing attacks. 
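As a rough illustration of the knowledge-based (signature) detection approach described above, matching can be as simple as scanning payloads for byte patterns known to appear in specific attacks. The signatures and function names below are invented for illustration; real IDSs use far richer rule languages. Note the limitation stated above: anything absent from the signature database goes undetected.

```python
# Hypothetical signature database: byte pattern -> name of the known attack.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
}

def inspect(payload: bytes):
    """Return the names of all known attack signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]
```

Traffic matching no stored pattern, such as a brand-new exploit, produces an empty result, which is exactly why knowledge-based IDSs fail against new attack methods while behavior-based IDSs can still flag the activity as abnormal.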
Each type of attack employs different means to infiltrate, damage, or interrupt systems, and each has unique countermeasures to prevent it.

Exam Essentials

Understand the use of monitoring in relation to access controls. Monitoring is used to hold subjects accountable for their actions and to detect abnormal or malicious activities.

Understand the need for intrusion detection systems (IDSs) and that they are only one component in a security policy. An IDS is needed to automate the process of discovering anomalies in subject activity and system event logs. IDSs are primarily used to detect intrusions or attempted intrusions. An IDS alone will not secure a system; it must be used in cooperation with access controls, physical security, and the maintenance of secure systems on the network.

Know the limits of using host-based IDSs. Host-based IDSs can monitor activity on a single system only. In addition, they can be discovered by attackers and disabled.

List the pros and cons of network-based IDSs. Network-based IDSs can monitor activity on the network medium, and they can be made invisible to attackers. They do not, however, work well on switched networks.

Be able to explain the differences between knowledge-based and behavior-based IDS detection methods. Knowledge-based detection employs a database of attack signatures. Behavior-based detection learns what is normal for a system and assumes that all unknown activities are abnormal or possible signs of intrusion.

Understand the purpose of a honey pot and a padded cell. A honey pot is a fake system or network that is designed to lure intruders with fake data and to keep them on the system long enough to gather tracking information. A padded cell is a simulated environment that intruders are seamlessly moved into once they are detected on the system.
The simulated environment varies from the real \nenvironment only in that the data is fake and therefore malicious activities cause no harm.\nBe able to explain the purpose of vulnerability scanners and penetration testing.\nVulnerabil-\nity scanners are used to detect known security vulnerabilities and weaknesses. They are used to \ngenerate reports that indicate the areas or aspects of the system that need to be managed to \nimprove security. Penetration testing is used to test the strength and effectiveness of deployed \nsecurity measures with an authorized attempted intrusion attack.\nKnow how brute force and dictionary attacks work.\nBrute force and dictionary attacks are \ncarried out against a password database file or the logon prompt of a system. They are designed \nto discover passwords. In brute force attacks, all possible combinations of keyboard characters \nare used, whereas a predefined list of possible passwords is used in a dictionary attack.\nUnderstand the need for strong passwords.\nStrong passwords make password cracking utili-\nties less successful. Strong passwords are dynamic passwords and should be strengthened by \nusing two-factor authentication, enabling account lockouts, and using strong encryption on the \npassword database file.\nKnow what denial of service (DoS) attacks are.\nDoS attacks prevent the system from \nresponding to legitimate requests for service. 
There are two types: traffic flooding and fault exploitation.

Be able to explain how the SYN flood DoS attack works. The SYN flood DoS attack takes advantage of the TCP/IP three-way handshake to inhibit a system by requesting numerous connection sessions but failing to provide the final acknowledgment packet.

Know how the Smurf DoS attack works. Smurf attacks employ an amplification network to send numerous response packets to a victim.

Know how ping of death DoS attacks work. Ping of death attacks send numerous oversized ping packets to the victim, causing the victim to freeze, crash, or reboot.

Know how the WinNuke DoS attack works. Only Windows 95 systems are vulnerable to WinNuke. WinNuke sends out-of-band TCP/IP data to the victim, causing the OS to freeze.

Understand stream DoS attacks. Stream attacks send a large number of packets to numerous ports on the victim system by using random source and sequence numbers. The processing performed by the victim system as it attempts to make sense of the data results in a DoS.

Be able to explain teardrop DoS attacks. A teardrop attack occurs when an attacker exploits a bug in operating systems. The bug exists in the routines used to reassemble fragmented packets. An attacker sends numerous specially formatted fragmented packets to the victim, which causes the system to freeze or crash.

Understand land DoS attacks. A land attack occurs when an attacker sends numerous SYN packets to a victim and the SYN packets have been spoofed to use the same source and destination IP address and port number as the victim's.
This causes the victim to think it sent a TCP/IP session-opening packet to itself, which in turn causes a system failure, usually resulting in a freeze, crash, or reboot.

Be able to list the countermeasures to all types of DoS attacks and to spoofing, man-in-the-middle, sniffer, and spamming attacks. Countermeasures include patching the OS for vulnerabilities, using firewalls and routers to filter and/or verify traffic, altering system/protocol configuration, and using IDSs.

Understand spoofing attacks. Spoofing attacks are any form of attack that uses modified packets in which the valid source and/or destination IP address and node numbers are replaced with false ones. Spoofing grants the attacker the ability to hide their identity through misdirection.

Understand man-in-the-middle attacks. A man-in-the-middle attack occurs when a malicious user is able to gain a position between the two endpoints of a communications link. There are two types of man-in-the-middle attacks. One involves copying or sniffing the traffic between two parties; this is basically a sniffer attack. The other involves the attacker being positioned in the line of communication, where they act as a store-and-forward or proxy mechanism.

Be able to explain hijack attacks. The hijack attack is an offshoot of a man-in-the-middle attack. In this type of attack, a malicious user positions himself between a client and server and then interrupts the session and takes it over. Often, the malicious user impersonates the client so they can extract data from the server. The server is unaware that any change in the communication partner has occurred.

Understand replay or playback attacks. In a replay attack, a malicious user records the traffic between a client and server. Then the packets sent from the client to the server are played back or retransmitted to the server with slight variations of the time stamp and source IP address (i.e., spoofing).
In some cases, this allows the malicious user to restart an old communication link with a server.

Know what sniffer attacks are. A sniffer attack (or snooping attack) is any activity that results in a malicious user obtaining information about a network or the traffic over that network. A sniffer is often a packet-capturing program that duplicates the contents of packets traveling over the network medium into a file.

Understand spamming attacks. Spam is the term describing unwanted e-mail, newsgroup, or discussion forum messages. Spam can be as innocuous as an advertisement from a well-meaning vendor or as malignant as floods of unrequested messages with viruses or Trojan horses attached. Spam is usually not a security threat but rather a type of denial of service attack. As the level of spam increases, locating or accessing legitimate messages can be difficult.

Review Questions

1. What is used to keep subjects accountable for their actions while they are authenticated to a system?
A. Access controls
B. Monitoring
C. Account lockout
D. Performance reviews

2. Which of the following tools is the most useful in sorting through large log files when searching for intrusion-related events?
A. Text editor
B. Vulnerability scanner
C. Password cracker
D. IDS

3. An intrusion detection system (IDS) is primarily designed to perform what function?
A. Detect abnormal activity
B. Detect system failures
C. Rate system performance
D. Test a system for vulnerabilities

4. IDSs are capable of detecting which type of abnormal or unauthorized activities? (Choose all that apply.)
A. External connection attempts
B. Execution of malicious code
C. Unauthorized access attempts to controlled objects
D. None of the above

5. Which of the following is true for a host-based IDS?
A. It monitors an entire network.
B. It monitors a single system.
C. 
It's invisible to attackers and authorized users.
D. It's ineffective on switched networks.

6. Which of the following types of IDS is effective only against known attack methods?
A. Host-based
B. Network-based
C. Knowledge-based
D. Behavior-based

7. Which type of IDS can be considered an expert system?
A. Host-based
B. Network-based
C. Knowledge-based
D. Behavior-based

8. Which of the following is a fake network designed to tempt intruders with unpatched and unprotected security vulnerabilities and false data?
A. IDS
B. Honey pot
C. Padded cell
D. Vulnerability scanner

9. When a padded cell is used by a network for protection from intruders, which of the following is true?
A. The data offered by the padded cell is what originally attracts the attacker.
B. Padded cells are a form of entrapment.
C. The intruder is seamlessly transitioned into the padded cell once they are detected.
D. Padded cells are used to test a system for known vulnerabilities.

10. Which of the following is true regarding vulnerability scanners?
A. They actively scan for intrusion attempts.
B. They serve as a form of enticement.
C. They locate known security holes.
D. They automatically reconfigure a system to a more secured state.

11. When using penetration testing to verify the strength of your security policy, which of the following is not recommended?
A. Mimicking attacks previously perpetrated against your system
B. Performing the attacks without management's consent
C. Using manual and automated attack tools
D. Reconfiguring the system to resolve any discovered vulnerabilities

12. Which of the following attacks is an attempt to test every possible combination against a security feature in order to bypass it?
A. Brute force attack
B. Spoofing attack
C. Man-in-the-middle attack
D. 
Denial of service attack

13. Which of the following is not a valid measure to take to improve protection against brute force and dictionary attacks?
A. Enforce strong passwords through a security policy.
B. Maintain strict control over physical access.
C. Require all users to log in remotely.
D. Use two-factor authentication.

14. Which of the following is not considered a denial of service attack?
A. Teardrop
B. Smurf
C. Ping of death
D. Spoofing

15. A SYN flood attack works by what mechanism?
A. Exploiting a packet processing glitch in Windows 95
B. Using an amplification network to flood a victim with packets
C. Exploiting the three-way handshake used by TCP/IP
D. Sending oversized ping packets to a victim

16. Which of the following attacks sends packets with the victim's IP address as both the source and destination?
A. Land
B. Spamming
C. Teardrop
D. Stream

17. In what type of attack are packets sent to a victim using invalid resequencing numbers?
A. Stream
B. Spamming
C. Distributed denial of service
D. Teardrop

18. Spoofing is primarily used to perform what activity?
A. Send large amounts of data to a victim.
B. Cause a buffer overflow.
C. Hide the identity of an attacker through misdirection.
D. Steal user accounts and passwords.

19. Spamming attacks occur when numerous unsolicited messages are sent to a victim. Because enough data is sent to the victim to prevent legitimate activity, it is also known as what?
A. Sniffing
B. Denial of service
C. Brute force attack
D. Buffer overflow attack

20. What type of attack occurs when a malicious user positions themselves between a client and server and then interrupts the session and takes it over?
A. Man-in-the-middle
B. Spoofing
C. Hijack
D. 
Cracking

Answers to Review Questions

1. B. Accountability is maintained by monitoring the activities of subjects and objects as well as of core system functions that maintain the operating environment and the security mechanisms.

2. D. In most cases, when sufficient logging and auditing is enabled to monitor a system, so much data is collected that the important details get lost in the bulk. For automation and real-time analysis of events, an intrusion detection system (IDS) is required.

3. A. An IDS automates the inspection of audit logs and real-time system events to detect abnormal activity. IDSs are generally used to detect intrusion attempts, but they can also be employed to detect system failures or rate overall performance.

4. A, B, C. IDSs watch for violations of confidentiality, integrity, and availability. Attacks recognized by IDSs can come from external connections (such as the Internet or partner networks), viruses, malicious code, trusted internal subjects attempting to perform unauthorized activities, and unauthorized access attempts from trusted locations.

5. B. A host-based IDS watches for questionable activity on a single computer system. A network-based IDS watches for questionable activity being performed over the network medium, can be made invisible to users, and is ineffective on switched networks.

6. C. A knowledge-based IDS is effective only against known attack methods, which is its primary drawback.

7. D. A behavior-based IDS can be labeled an expert system or a pseudo artificial intelligence system because it can learn and make assumptions about events. In other words, the IDS can act like a human expert by evaluating current events against known events.

8. B. Honey pots are individual computers or entire networks created to serve as a snare for intruders.
They look and act like legitimate networks, but they are 100 percent fake. Honey pots tempt intruders with unpatched and unprotected security vulnerabilities as well as attractive and tantalizing but faux data.

9. C. When an intruder is detected by an IDS, they are transferred to a padded cell. The transfer of the intruder into a padded cell is performed automatically, without informing the intruder that the change has occurred. The padded cell is unknown to the intruder before the attack, so it cannot serve as an enticement or entrapment. Padded cells are used to detain intruders, not to detect vulnerabilities.

10. C. Vulnerability scanners are used to test a system for known security vulnerabilities and weaknesses. They are not active detection tools for intrusion, they offer no form of enticement, and they do not configure system security. In addition to testing a system for security weaknesses, they produce evaluation reports and make recommendations.

11. B. Penetration testing should be performed only with the knowledge and consent of the management staff. Unapproved security testing could result in productivity loss or trigger emergency response teams. It could even cost you your job.

12. A. A brute force attack is an attempt to discover passwords for user accounts by systematically attempting every possible combination of letters, numbers, and symbols.

13. C. Strong password policies, physical access control, and two-factor authentication all improve the protection against brute force and dictionary password attacks. Requiring remote logons has no direct effect on password attack protection; in fact, it may offer sniffers more opportunities to grab password packets from the data stream.

14. D. Spoofing is the replacement of valid source and destination IP and port addresses with false ones.
It is often used in DoS attacks but is not considered a DoS attack itself. Teardrop, Smurf, and ping of death are all DoS attacks.

15. C. A SYN flood attack is waged by breaking the standard three-way handshake used by TCP/IP to initiate communication sessions. Exploiting a packet processing glitch in Windows 95 is a WinNuke attack. The use of an amplification network is a Smurf attack. Oversized ping packets are used in a ping of death attack.

16. A. In a land attack, the attacker sends a victim numerous SYN packets that have been spoofed to use the same source and destination IP address and port number as the victim's. The victim then thinks it sent a TCP/IP session-opening packet to itself.

17. D. In a teardrop attack, an attacker exploits a bug in operating systems. The bug exists in the routines used to reassemble (i.e., resequence) fragmented packets. An attacker sends numerous specially formatted fragmented packets to the victim, which causes the system to freeze or crash.

18. C. Spoofing grants the attacker the ability to hide their identity through misdirection. It is therefore involved in most attacks.

19. B. A spamming attack is a type of denial of service attack. Spam is the term describing unwanted e-mail, newsgroup, or discussion forum messages. It can be an advertisement from a well-meaning vendor or floods of unrequested messages with viruses or Trojan horses attached.

20. C. 
In a hijack attack, which is an offshoot of a man-in-the-middle attack, a malicious user is positioned between a client and server and then interrupts the session and takes it over.

Chapter 3
ISO Model, Network Security, and Protocols

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:

- International Organization for Standardization/Open Systems Interconnection (ISO/OSI) Layers and Characteristics
- Communications and Network Security
- Internet/Intranet/Extranet Components
- Network Services

Computer systems and computer networks are complex entities. They combine hardware and software components to create a system that can perform operations and calculations beyond the capabilities of humans. Computers and networks emerge from the integration of communication devices, storage devices, processing devices, security devices, input devices, output devices, operating systems, software, services, data, and people. The CISSP CBK states that a thorough knowledge of the hardware and software components a system comprises is an essential element of being able to implement and maintain security.

The Telecommunications and Network Security domain for the CISSP certification exam deals with topics related to network components (primarily network devices and protocols); specifically, how they function and how they are relevant to security. This domain is discussed in this chapter and in Chapter 4, "Communications Security and Countermeasures." Be sure to read and study the materials in both chapters to ensure complete coverage of the essential material for the CISSP certification exam.

OSI Model

Communication between computers over networks is made possible by the use of protocols.
A protocol is a set of rules and restrictions that define how data is transmitted over a network medium (e.g., twisted-pair cable, wireless transmission, and so on). Protocols make computer-to-computer communications possible. In the early days of network development, many companies had their own proprietary protocols, which meant interaction between computers of different vendors was often difficult, if not impossible. In an effort to eliminate this problem, the International Organization for Standardization (ISO) developed the OSI model for protocols in the early 1980s. ISO Standard 7498 defines the OSI Reference Model (also called the OSI model).

History of the OSI Model

The OSI model wasn't the first or only movement to streamline networking protocols or establish a common communications standard. In fact, the most widely used protocol today, the TCP/IP protocol (which was based upon the DARPA model, now also known as the TCP/IP model), was developed in the early 1970s.

The Open Systems Interconnection (OSI) protocol was developed to establish a common communication structure or standard for all computer systems. The actual OSI protocol was never widely adopted, but the theory behind the OSI protocol, the OSI model, was readily accepted. The OSI model serves as an abstract framework, or theoretical model, for how protocols should function in an ideal world on ideal hardware. Thus, the OSI model has become a common reference point against which all protocols can be compared and contrasted.

OSI Functionality

The OSI model divides networking tasks into seven distinct layers. Each layer is responsible for performing specific tasks or operations toward the ultimate goal of supporting data exchange (i.e., network communication) between two computers. The layers are always numbered from bottom to top (see Figure 3.1).
They are referred to by either their name or their layer number. For example, layer 3 is also known as the Network layer. The layers are ordered specifically to indicate how information flows through the various levels of communication. Each layer is said to communicate with three other layers: it communicates directly with the layer above it and the layer below it, plus the peer layer on a communication partner system.

The OSI model is an open network architecture guide for network product vendors. This standard, or guide, provides a common foundation for the development of new protocols, networking services, and even hardware devices. By working from the OSI model, vendors are able to ensure that their products will integrate with products from other companies and be supported by a wide range of operating systems. If vendors developed their own networking frameworks, interoperability between products from different vendors would be next to impossible.

The real benefit of the OSI model is found in its expression of how networking actually functions. In the most basic sense, network communications occur over a physical connection. This is true even if wireless networking devices are employed. Physical devices establish channels through which electronic signals can pass from one computer to another. These physical device channels are only one type of the seven logical channel types defined by the OSI model. Each layer of the OSI model communicates via a logical channel with its peer layer on another computer. This enables protocols based on the OSI model to support a type of authentication by being able to identify the remote communication entity as well as authenticate the source of the received data.

FIGURE 3.1 A representation of the OSI model (from top to bottom: Application 7, Presentation 6, Session 5, Transport 4, Network 3, Data Link 2, Physical 1)

Encapsulation/Deencapsulation

Protocols based on the OSI model employ a mechanism called encapsulation. As a message is encapsulated at each layer, it grows in size. Encapsulation occurs as the data moves down through the OSI model layers from Application to Physical. The inverse action, occurring as data moves up through the OSI model layers from Physical to Application, is known as deencapsulation. The encapsulation/deencapsulation process is as follows:

1. The Application layer creates a message.
2. The Application layer passes the message to the Presentation layer.
3. The Presentation layer encapsulates the message by adding information to it. Information is added at the beginning of the message (called a header) and at the end of the message (called a footer), as shown in Figure 3.2.
4. The process of passing the message down and adding layer-specific information continues until the message reaches the Physical layer.
5. At the Physical layer, the message is converted into electrical impulses that represent bits and is transmitted over the physical connection.
6. The receiving computer captures the bits from the physical connection and re-creates the message in the Physical layer.
7. The Physical layer strips off its information and sends the message up to the Data Link layer.
8. The Data Link layer strips its information off and sends the message up to the Network layer.
9. This process of deencapsulation is performed until the message reaches the Application layer.
10. 
When the message reaches the Application layer, the data in the message is sent to the \nintended software recipient.\nThe information removed by each layer contains instructions, checksums, and so on that can only \nbe understood by the peer layer that originally added or created the information (see Figure 3.3). \nThis information is what creates the logical channel that enables peer layers on different com-\nputers to communicate.\nF I G U R E\n3 . 2\nA representation of OSI model encapsulation\nApplication\nPresentation\nSession\nTransport\nNetwork\nData Link\nPhysical\nDATA\nHeader\nFooter\nDATA\nDATA\nDATA\nDATA\nDATA\nDATA\n" }, { "page_number": 118, "text": "OSI Model\n73\nF I G U R E\n3 . 3\nA representation of the OSI model peer layer logical channels\nThe message sent into the protocol stack at the Application layer (layer 7) is called the data \nor PDU (protocol data unit). Once it is encapsulated by the Presentation layer (layer 6), it is \ncalled a protocol data unit (PDU). It retains the label of PDU until it reaches the Transport layer \n(layer 4), where it is called a segment. In the Network layer (layer 3), it is called a packet or a \ndatagram. In the Data Link layer (layer 2), it is called a frame. In the Physical layer (layer 1), the \ndata has been converted into bits for transmission over the physical connection medium. Figure 3.4 \nshows how each layer changes the data through this process.\nOSI Layers\nUnderstanding the functions and responsibilities of each layer of the OSI model will help you \nunderstand how network communications function, how attacks can be perpetrated against \nnetwork communications, and how security can be implemented to protect network commu-\nnications. Each layer, starting with the bottom layer, is discussed in the following sections.\nF I G U R E\n3 . 
4\nThe OSI model data names\nApplication\nPresentation\nSession\nTransport\nNetwork\nData Link\nPhysical\nApplication\nPresentation\nSession\nTransport\nNetwork\nData Link\nPhysical\nApplication\nPresentation\nSession\nTransport\nNetwork\nData Link\nPhysical\nPDU\nPDU\nPDU\nSegment\nPacket/Datagram\nFrame\nBits\n" }, { "page_number": 119, "text": "74\nChapter 3\n\u0002 ISO Model, Network Security, and Protocols\nFor more info on the TCP/IP stack, do a search for “TCP/IP” at Wikipedia \n(en.wikipedia.org).\nPhysical Layer\nThe Physical layer (layer 1) accepts the frame from the Data Link layer and converts the frame \ninto bits for transmission over the physical connection medium. The Physical layer is also \nresponsible for receiving bits from the physical connection medium and converting them back \ninto a frame to be used by the Data Link layer.\nThe Physical layer contains the device drivers that tell the protocol how to employ the hard-\nware for the transmission and reception of bits. Located within the Physical layer are electrical \nspecifications, protocols, and interface standards such as the following:\n\u0002\nEIA/TIA-232 and EIA/TIA-449\n\u0002\nX.21\n\u0002\nHigh-Speed Serial Interface (HSSI)\n\u0002\nSynchronous Optical Network (SONET)\n\u0002\nV.24 and V.35\nThrough the device drivers and these standards, the Physical layer controls throughput rates, \nhandles synchronization, manages line noise and medium access, and determines whether to use \ndigital or analog signals or light pulses to transmit or receive data over the physical hardware \ninterface.\nNetwork hardware devices that function at layer 1, the Physical layer, are network interface \ncards (NICs), hubs, repeaters, concentrators, and amplifiers. 
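The encapsulation and deencapsulation steps described earlier in this section can be sketched as a toy model. This is illustrative only: real layers add binary headers, and in practice most layers add only a header, with a trailer typically added at the Data Link layer.

```python
# Layers that add information as a message moves down the stack
# (the Application layer creates the message; the Physical layer
# converts the result into bits rather than wrapping it further).
LAYERS = ["Presentation", "Session", "Transport", "Network", "Data Link"]

def encapsulate(message: str) -> str:
    """Wrap the message in a header and footer at each layer, top down."""
    for layer in LAYERS:
        message = f"[{layer} hdr]{message}[{layer} ftr]"
    return message

def deencapsulate(wrapped: str) -> str:
    """Strip each layer's information back off, bottom up."""
    for layer in reversed(LAYERS):
        header, footer = f"[{layer} hdr]", f"[{layer} ftr]"
        # Only the peer layer that added the wrapping understands it.
        wrapped = wrapped.removeprefix(header).removesuffix(footer)
    return wrapped

frame = encapsulate("DATA")
print(frame.startswith("[Data Link hdr]"))  # True: layer 2's wrapping is outermost
print(deencapsulate(frame))                 # DATA
```

Note how the receiving side must peel the wrapping off in exactly the reverse order, which is why each layer can meaningfully talk only to its peer layer on the other system.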
These layer 1 devices perform hardware-based signal operations, such as sending a signal received on one port out all other ports (a hub) or amplifying the signal to support greater transmission distances (a repeater).

Data Link Layer
The Data Link layer (layer 2) is responsible for formatting the packet from the Network layer into the proper format for transmission. The proper format is determined by the hardware and the technology of the network. There are numerous possibilities, such as Ethernet (IEEE 802.3), Token Ring (IEEE 802.5), asynchronous transfer mode (ATM), Fiber Distributed Data Interface (FDDI), and Copper DDI (CDDI). Within the Data Link layer reside the technology-specific protocols that convert the packet into a properly formatted frame. Once the frame is formatted, it is sent to the Physical layer for transmission.
The following list includes some of the protocols found within the Data Link layer:
- Serial Line Internet Protocol (SLIP)
- Point-to-Point Protocol (PPP)
- Address Resolution Protocol (ARP)
- Reverse Address Resolution Protocol (RARP)
- Layer 2 Forwarding (L2F)
- Layer 2 Tunneling Protocol (L2TP)
- Point-to-Point Tunneling Protocol (PPTP)
- Integrated Services Digital Network (ISDN)
Part of the processing performed on the data within the Data Link layer includes adding the hardware source and destination addresses to the frame. The hardware address is the Media Access Control (MAC) address, which is a 6-byte address written in hexadecimal notation. The first 3 bytes of the address indicate the vendor or manufacturer of the physical network interface. The last 3 bytes represent a unique number assigned to that interface by the manufacturer.
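The vendor/serial split of a MAC address just described can be illustrated with a short helper (a hypothetical function written for this example, not part of any standard library):

```python
def split_mac(mac: str) -> tuple[str, str]:
    """Split a MAC address into its vendor (OUI) and serial halves."""
    octets = mac.replace("-", ":").split(":")
    if len(octets) != 6:
        raise ValueError("a MAC address has exactly 6 bytes")
    oui = ":".join(octets[:3])      # first 3 bytes: manufacturer identifier
    serial = ":".join(octets[3:])   # last 3 bytes: interface serial number
    return oui, serial

oui, serial = split_mac("00-1A-2B-3C-4D-5E")
print(oui, serial)  # 00:1A:2B 3C:4D:5E
```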
\nNo two devices can have the same MAC address.\nAmong the protocols at the Data Link layer (layer 2) of the OSI model, the two you should \nbe familiar with are Address Resolution Protocol (ARP) and Reverse Address Resolution Pro-\ntocol (RARP). ARP is used to resolve IP addresses into MAC addresses. Traffic on a network \nsegment (e.g., cables across a hub) is directed from its source system to its destination system \nusing MAC addresses. RARP is used to resolve MAC addresses into IP addresses.\nThe Data Link layer contains two sublayers: the Logical Link Control (LLC) sublayer and \nthe MAC sublayer. Details about these sublayers are not critical for the CISSP exam.\nNetwork hardware devices that function at layer 2, the Data Link layer, are switches and \nbridges. These devices support MAC-based traffic routing. Switches receive a frame on one \nport and send it out another port based on the destination MAC address. MAC address des-\ntinations are used to determine whether a frame is transferred over the bridge from one net-\nwork to another.\nNetwork Layer\nThe Network layer (layer 3) is responsible for adding routing and addressing information to the \ndata. The Network layer accepts the segment from the Transport layer and adds information to \nit to create a packet. 
The packet includes the source and destination IP addresses.
The routing protocols are located at this layer, along with several other Network layer protocols, including the following:
- Internet Control Message Protocol (ICMP)
- Routing Information Protocol (RIP)
- Open Shortest Path First (OSPF)
- Border Gateway Protocol (BGP)
- Internet Group Management Protocol (IGMP)
- Internet Protocol (IP)
- Internet Protocol Security (IPSec)
- Internetwork Packet Exchange (IPX)
- Network Address Translation (NAT)
- Simple Key Management for Internet Protocols (SKIP)
The Network layer is responsible for providing routing or delivery information, but it is not responsible for verifying guaranteed delivery (that is the responsibility of the Transport layer). The Network layer also manages error detection and node data traffic (i.e., traffic control).
Routers are among the network hardware devices that function at layer 3, along with brouters. Routers determine the best logical path for the transmission of packets based on speed, hops, preference, and so on. Routers use the destination IP address to guide the transmission of packets. A brouter, working primarily in layer 3 but in layer 2 when necessary, is a device that attempts to route first but, if that fails, defaults to bridging.

Transport Layer
The Transport layer (layer 4) is responsible for managing the integrity of a connection and controlling the session. It accepts a PDU from the Session layer and converts it into a segment. The Transport layer controls how devices on the network are addressed or referenced, establishes communication connections between nodes (also known as devices), and defines the rules of a session.
Session rules specify how much data each segment can contain, how to verify the integrity of data transmitted, and how to determine if data has been lost. Session rules are established through a handshaking process. (You should recall the discussion of the SYN/ACK three-way handshake for TCP/IP from Chapter 2, “Attacks and Monitoring.”)
The Transport layer establishes a logical connection between two devices and provides end-to-end transport services to ensure data delivery. This layer includes mechanisms for segmentation, sequencing, error checking, controlling the flow of data, error correction, multiplexing, and network service optimization. The following protocols operate within the Transport layer:
- Transmission Control Protocol (TCP)
- User Datagram Protocol (UDP)
- Sequenced Packet Exchange (SPX)

Session Layer
The Session layer (layer 5) is responsible for establishing, maintaining, and terminating communication sessions between two computers. It manages dialog discipline or dialog control (simplex, half-duplex, full-duplex), establishes checkpoints for grouping and recovery, and retransmits PDUs that have failed or been lost since the last verified checkpoint.
The following protocols operate within the Session layer:
- Secure Sockets Layer (SSL)
- Transport Layer Security (TLS)
- Network File System (NFS)
- Structured Query Language (SQL)
- Remote Procedure Call (RPC)
Communication sessions can operate in one of three different discipline or control modes:
Simplex
One-way communication
Half-duplex
Two-way communication, but only one direction can send data at a time
Full-duplex
Two-way communication, in which data can be sent in both directions simultaneously

Presentation Layer
The Presentation layer (layer 6) is responsible for transforming data received from the Application layer into a format that any system following the OSI model can understand. It imposes common or standardized structure and formatting rules onto the data. The Presentation layer is also responsible for encryption and compression. Thus, it acts as an interface between the network and applications. It is what allows various applications to interact over a network, and it does so by ensuring that the data formats are supported by both systems. Most file or data formats operate within this layer, including formats for images, video, sound, documents, e-mail, web pages, control sessions, and so on. The following list includes some of the format standards that exist within the Presentation layer:
- American Standard Code for Information Interchange (ASCII)
- Extended Binary-Coded Decimal Interchange Code (EBCDIC)
- Tagged Image File Format (TIFF)
- Joint Photographic Experts Group (JPEG)
- Moving Picture Experts Group (MPEG)
- Musical Instrument Digital Interface (MIDI)

Application Layer
The Application layer (layer 7) is responsible for interfacing user applications, network services, or the operating system itself with the protocol stack.
It allows applications to communicate with the protocol stack. The Application layer determines whether a remote communication partner is available and accessible. It also ensures that sufficient resources are available to support the requested communications.
The application itself is not located within this layer; rather, the protocols and services required to transmit files, exchange messages, connect to remote terminals, and so on are found here. Numerous application-specific protocols are found within this layer, such as the following:
- Hypertext Transfer Protocol (HTTP)
- File Transfer Protocol (FTP)
- Line Print Daemon (LPD)
- Simple Mail Transfer Protocol (SMTP)
- Telnet
- Trivial File Transfer Protocol (TFTP)
- Electronic Data Interchange (EDI)
- Post Office Protocol version 3 (POP3)
- Internet Message Access Protocol (IMAP)
- Simple Network Management Protocol (SNMP)
- Network News Transport Protocol (NNTP)
- Secure Remote Procedure Call (S-RPC)
- Secure Electronic Transaction (SET)
There is a network device (or service) that works at the Application layer, namely the gateway. However, an Application layer gateway is a very specific type of component. It serves as a protocol translation tool. For example, an IP-to-IPX gateway takes inbound communications from TCP/IP and translates them over to IPX/SPX for outbound transmission.

TCP/IP Model
The TCP/IP model (also called the DARPA or the DOD model) consists of only four layers, as opposed to the OSI Reference Model’s seven. These four layers can be compared to the seven layers of the OSI model (refer to Figure 3.5). The four layers of the TCP/IP model are Application, Host-to-Host, Internet, and Network Access.
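The four TCP/IP layers just listed map onto the seven OSI layers as described below; a small lookup table (an illustrative sketch, not any standard API) captures the correspondence:

```python
# OSI layer number -> (OSI layer name, corresponding TCP/IP model layer)
OSI_TO_TCPIP = {
    7: ("Application",  "Application"),
    6: ("Presentation", "Application"),
    5: ("Session",      "Application"),
    4: ("Transport",    "Host-to-Host"),
    3: ("Network",      "Internet"),
    2: ("Data Link",    "Network Access"),
    1: ("Physical",     "Network Access"),
}

def tcpip_layer(osi_number: int) -> str:
    """Return the TCP/IP model layer that covers a given OSI layer."""
    return OSI_TO_TCPIP[osi_number][1]

print(tcpip_layer(4))  # Host-to-Host
print(tcpip_layer(1))  # Network Access
```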
The TCP/IP protocol suite was developed before the OSI Reference Model was created. The designers of the OSI Reference Model took care to ensure that the TCP/IP protocol suite fit their model because of its established deployment in networking.
The TCP/IP model’s Application layer corresponds to layers 5, 6, and 7 of the OSI model. The TCP/IP model’s Host-to-Host layer corresponds to layer 4 from the OSI model. The TCP/IP model’s Internet layer corresponds to layer 3 from the OSI model. The TCP/IP model’s Network Access layer corresponds to layers 1 and 2 from the OSI model.
It has become common practice (through confusion, misunderstanding, and probably laziness) to also call the TCP/IP model layers by their OSI model layer equivalent names. The TCP/IP model's Application layer already uses a name borrowed from the OSI model, so that one’s a snap. The TCP/IP model's Host-to-Host layer is sometimes called the Transport layer (the OSI model's fourth layer). The TCP/IP model's Internet layer is sometimes called the Network layer (the OSI model's third layer). And the TCP/IP model's Network Access layer is sometimes called the Data Link layer (the OSI model's second layer).

[Figure 3.5 (Comparing the OSI model with the TCP/IP model): OSI Application, Presentation, and Session map to TCP/IP Application; Transport maps to Host-to-Host; Network maps to Internet; Data Link and Physical map to Network Access]

Since the TCP/IP model layer names and the OSI model layer names can be used interchangeably, it is important to know which model is being addressed in various contexts. Unless informed otherwise, always assume the OSI model provides the basis for discussion because it’s the most widely used network reference model.

Communications and Network Security
Establishing security on a network involves more than just managing the OS and software.
You must also address physical issues, including cabling, topology, and technology.

Network Cabling
The type of connectivity media employed in a network is important to the network’s design, layout, and capabilities. Without the right cabling, a network may not be able to span your entire enterprise or it may not support the necessary traffic volume. In fact, the most common causes of network failure (i.e., violations of availability) are cable failures and misconfigurations. So it is important for you to understand that different types of network devices and technologies are used with different types of cabling. Each cable type has unique useful lengths, throughput rates, and connectivity requirements.

LANs vs. WANs
There are two basic types of networks: LANs and WANs. A local area network (LAN) is a self-enclosed network typically spanning a single floor or building. LANs usually employ low- to moderate-speed technologies. Wide area network (WAN) is the term usually assigned to the long-distance connections between geographically remote networks. WANs often employ high-speed connections, but they can also employ low-speed dial-up links as well as leased connection technologies.
WAN connections and communication links can include private circuit technologies and packet-switching technologies. Common private circuit technologies include dedicated or leased lines and PPP, SLIP, ISDN, and DSL connections. Packet-switching technologies include X.25, Frame Relay, asynchronous transfer mode (ATM), Synchronous Data Link Control (SDLC), and High-Level Data Link Control (HDLC). Packet-switching technologies use virtual circuits instead of dedicated circuits.
A virtual circuit is created only when needed, which makes for efficient use of the medium and is extremely cost effective.

Coaxial Cable
Coaxial cable, also called coax, was a popular networking cable type used throughout the 1970s and 1980s. In the early 1990s, its use quickly declined due to the popularity of twisted-pair wiring (explained in more detail later). Coaxial cable has a center core of copper wire surrounded by a layer of insulation, which is in turn surrounded by a conductive braided shielding and encased in a final insulation sheath.
The center copper core and the braided shielding layer act as two independent conductors, thus allowing two-way communications over a coaxial cable. The design of coaxial cable makes it fairly resistant to electromagnetic interference (EMI) and able to support high bandwidths (in comparison to other technologies of the time period), and it offers longer usable lengths than twisted-pair. It ultimately failed to retain its place as the popular networking cable technology due to twisted-pair’s much lower cost and ease of installation. Coaxial cable requires the use of segment terminators, whereas twisted-pair does not. Coaxial cable is bulkier and has a larger minimum arc radius than twisted-pair. (The arc radius is the maximum distance the cable can be bent before damaging the internal conductors.) Additionally, with the widespread deployment of switched networks, the issues of cable distance became moot due to the implementation of hierarchical wiring patterns.
There are two main types of coaxial cable: thinnet and thicknet. Thinnet, also known as 10Base2, was commonly used to connect systems to backbone trunks of thicknet cabling. Thinnet can span distances of 185 meters and provide throughput up to 10Mbps.
Thicknet, also known as 10Base5, can span 500 meters and provide throughput up to 10Mbps.
The most common problems with coax cable are as follows:
- Bending the coax cable past its maximum arc radius and thus breaking the center conductor
- Deploying the coax cable in a length greater than its maximum recommended length (e.g., 185 m for 10Base2 or 500 m for 10Base5)
- Not properly terminating the ends of the coax cable with a 50 ohm resistor

Baseband and Broadband
The naming convention used to label most network cable technologies follows the syntax XXyyyyZZ. XX represents the maximum speed the cable type offers, such as 10Mbps for a 10Base2 cable. yyyy represents the baseband or broadband aspect of the cable, such as baseband for a 10Base2 cable. Baseband cables can transmit only a single signal at a time. Broadband cables can transmit multiple signals simultaneously. Most networking cables are baseband cables. However, when used in specific configurations, coaxial cable can be used as a broadband connection, such as with cable modems. ZZ either represents the maximum distance the cable can be used or acts as shorthand to represent the technology of the cable, such as the approximately 200 meters for 10Base2 cable (actually 185 meters, but it’s rounded up to 200) or T or TX for twisted-pair in 10Base-T or 100Base-TX. (Note that 100Base-TX is implemented using two CAT 5 UTP or STP cables: one used for receiving, the other for transmitting.)
Table 3.1 shows the important characteristics for the most common network cabling types.

Twisted-Pair
Twisted-pair cabling is extremely thin and flexible compared to coaxial cable. It is made up of four pairs of wires that are twisted around each other and then sheathed in a PVC insulator.
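The XXyyyyZZ naming convention from the “Baseband and Broadband” section above can be modeled with a small lookup (speeds and distances taken from Table 3.1; the helper itself is illustrative, not a real library):

```python
# Known cable designations -> (max speed in Mbps, signaling type, max distance in meters),
# using the values given in Table 3.1 (illustrative subset).
CABLE_TYPES = {
    "10Base2":    (10,   "baseband", 185),
    "10Base5":    (10,   "baseband", 500),
    "10Base-T":   (10,   "baseband", 100),
    "100Base-TX": (100,  "baseband", 100),
    "1000Base-T": (1000, "baseband", 100),
}

def describe(designation: str) -> str:
    """Expand a cable designation into its speed, signaling, and distance."""
    speed, signaling, distance = CABLE_TYPES[designation]
    return f"{designation}: {speed}Mbps {signaling}, up to {distance} m"

print(describe("10Base2"))  # 10Base2: 10Mbps baseband, up to 185 m
```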
If there is a metal foil wrapper around the wires underneath the external sheath, the wire is known as shielded twisted-pair (STP). The foil provides additional protection from external EMI. Twisted-pair cabling without the foil is known as unshielded twisted-pair (UTP). UTP is most often referred to as just 10Base-T.
The wires that make up UTP and STP are small, thin copper wires that are twisted in pairs. The twisting of the wires provides protection from external radio frequencies and electric and magnetic interference and reduces crosstalk between pairs. Crosstalk occurs when data transmitted over one set of wires is picked up by another set of wires due to radiating electromagnetic fields produced by the electrical current. Each wire pair within the cable is twisted at a different rate (i.e., twists per inch); thus, the signals traveling over one pair of wires cannot cross over onto another pair of wires. The tighter the twist (the more twists per inch), the more resistant the cable is to internal and external interference and crosstalk, and thus the greater its capacity for throughput (that is, the higher its bandwidth).
There are several classes of UTP cabling. The various categories are created through the use of tighter twists of the wire pairs, variations in the quality of the conductor, and variations in the quality of the external shielding. Table 3.2 shows the UTP categories.

Table 3.1: Important Characteristics for Common Network Cabling Types

Type                 | Max Speed | Distance | Difficulty of Installation | Susceptibility to EMI | Cost
10Base2              | 10Mbps    | 185 m    | Medium                     | Medium                | Medium
10Base5              | 10Mbps    | 500 m    | High                       | Low                   | High
10Base-T (UTP)       | 10Mbps    | 100 m    | Low                        | High                  | Very low
STP                  | 155Mbps   | 100 m    | Medium                     | Medium                | High
100Base-T/100Base-TX | 100Mbps   | 100 m    | Low                        | High                  | Low
1000Base-T           | 1Gbps     | 100 m    | Low                        | High                  | Medium
Fiber-optic          | 2Gbps     | 2 km     | Very high                  | None                  | Very high

Table 3.2: UTP Categories

UTP Category | Throughput | Notes
Cat 1        | Voice only | Not suitable for networks, but usable by modems
Cat 2        | 4Mbps      | Not suitable for most networks; often employed for host-to-terminal connections on mainframes
Cat 3        | 10Mbps     | Primarily used in 10Base-T Ethernet networks (offers only 4Mbps when used on Token Ring networks)
Cat 4        | 16Mbps     | Primarily used in Token Ring networks
Cat 5        | 100Mbps    | Used in 100Base-TX, FDDI, and ATM networks
Cat 6        | 155Mbps    | Used in high-speed networks
Cat 7        | 1Gbps      | Used on gigabit-speed networks

The following problems are the most common with twisted-pair cabling:
- Using the wrong category of twisted-pair cable for high-throughput networking
- Deploying a twisted-pair cable longer than its maximum recommended length (i.e., 100 m)
- Using UTP in environments with significant interference

Conductors
The distance limitations of conductor-based network cabling are due to the resistance of the metal used as a conductor. Copper, the most popular conductor, is one of the best and least expensive room-temperature conductors available. Even so, it still resists the flow of electrons. This resistance results in a degradation of signal strength and quality over the length of the cable.

Plenum cable is a type of cabling sheathed with a special material that does not release toxic fumes when burned, as traditional PVC-coated wiring does. Plenum-grade cable is often required to comply with building codes, especially in enclosed spaces where escaping gases could be trapped near people.

The maximum length defined for each cable type indicates the point at which the level of degradation could begin to interfere with the efficient transmission of data. This degradation of the signal is known as attenuation. It is often possible to use a cable segment that is longer than the cable is rated for, but the number of errors and retransmissions will be increased over that cable segment, ultimately resulting in poor network performance. Attenuation is more pronounced as the speed of the transmission increases. It is recommended to use shorter cable lengths as the speed of the transmission increases.
Long cable lengths can often be supplemented through the use of repeaters or concentrators. A repeater is just a signal amplification device, much like the amplifier for your car or home stereo. The repeater boosts the signal strength of an incoming data stream and rebroadcasts it through its second port. A concentrator does the same thing except it has more than just two ports. However, the use of more than four repeaters in a row is discouraged (see the sidebar “3-4-5 Rule”).
An alternative to conductor-based network cabling is fiber-optic cable. Fiber-optic cables transmit pulses of light rather than electricity. This has the advantage of being extremely fast and nearly impervious to tapping.
However, it is difficult to install and expensive; thus, the security and performance it offers come at a steep price.

Wireless
In addition to wire-based network connectivity media, we must include wireless connectivity. Wireless network interfaces are widely used as an alternative to running UTP cabling throughout a work area. Wireless networking is based on the IEEE 802.11b and 802.11a standards. 802.11b devices can transmit data up to 11Mbps. 802.11a devices can transmit data up to 54Mbps. Wireless networking uses connection hubs that can support one to dozens of wireless NICs. The primary drawback of wireless networking is that the signals connecting the NICs to the hubs may not be encrypted. Virtual private networks (VPNs) or other traffic encryption mechanisms must be employed to provide security for the connections. A wireless link is more susceptible to eavesdropping because the signals can often be detected blocks away, whereas UTP cables require direct physical access to tap into the traffic.

3-4-5 Rule
The 3-4-5 rule is used whenever Ethernet or other IEEE 802.3 shared-access networks are deployed in a tree topology (i.e., a central trunk with various splitting branches). This rule defines the number of repeaters/concentrators and segments that can be used in a network design. The rule states that between any two nodes (a node can be any type of processing entity, such as a server, client, or router), there can be a maximum of five segments connected by four repeaters/concentrators and that only three of those five segments can be populated (i.e., have additional user, server, or networking device connections).
The 3-4-5 rule does not apply to switched networks.

LAN Technologies
There are three main types of local area network (LAN) technologies: Ethernet, Token Ring, and FDDI.
There are a handful of other LAN technologies, but they are not as widely used as these three. Most of the differences between LAN technologies occur at and below the Data Link layer.

Ethernet
Ethernet is a shared-media LAN technology (also known as a broadcast technology). That means it allows numerous devices to communicate over the same medium but requires that each device take turns communicating and perform collision detection and avoidance. Ethernet employs broadcast and collision domains. A broadcast domain is a physical grouping of systems in which all of the systems in the group receive a broadcast sent by a single system in the group. A broadcast is a message transmitted to a specific address that indicates that all systems are the intended recipients.
A collision domain consists of groupings of systems within which a data collision occurs if two systems transmit simultaneously. A data collision takes place when two transmitted messages attempt to use the network medium at the same time. It causes one or both of the messages to be corrupted.
Ethernet can support full-duplex communications (i.e., full two-way) and usually employs coaxial or twisted-pair cabling. Ethernet is most often deployed on star or bus topologies. Ethernet is based on the IEEE 802.3 standard. Individual units of Ethernet data are called frames. Fast Ethernet supports 100Mbps throughput. Gigabit Ethernet supports 1000Mbps (1Gbps) throughput.

Token Ring
Token Ring employs a token-passing mechanism to control which systems can transmit data over the network medium. The token travels in a logical loop among all members of the LAN. Token Ring can be employed on ring or star network topologies.
It is rarely used today due to its performance limitations, higher cost compared to Ethernet, and increased difficulty in deployment and management.

Fiber Distributed Data Interface (FDDI)
Fiber Distributed Data Interface (FDDI) is a high-speed token-passing technology that employs two rings with traffic flowing in opposite directions. FDDI is often used as a backbone for large enterprise networks. Its dual-ring design allows for self-healing by removing the failed segment from the loop and creating a single loop out of the remaining inner and outer ring portions. FDDI is expensive but was often used in campus environments before Fast Ethernet and Gigabit Ethernet were developed.

Sub-technologies
Most networks comprise numerous technologies rather than a single technology. For example, Ethernet is not just a single technology but a superset of sub-technologies that support its common and expected activity and behavior. Ethernet includes the technologies of digital communications, synchronous communications, and baseband communications, and it supports broadcast, multicast, and unicast communications and Carrier-Sense Multiple Access with Collision Detection (CSMA/CD). Many of the LAN technologies, such as Ethernet, Token Ring, and FDDI, may include many of the sub-technologies described in the following sections.

Analog and Digital
One sub-technology is the mechanism used to actually transmit communication signals over a physical medium, such as a cable. There are two types: analog and digital. Analog communications occur with a continuous signal that varies in frequency, amplitude, phase, voltage, and so on. The variances in the continuous signal produce a wave shape (as opposed to the square shape of a digital signal). The actual communication occurs through variances in the continuous signal.
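The wave-versus-square distinction (the digital case is described next) can be sketched by sampling the two signal shapes; the sine/square functions here are toy stand-ins, not real signal processing:

```python
import math

def analog_sample(t: float, freq_hz: float = 1.0) -> float:
    """Continuous signal: amplitude varies smoothly over time (a sine wave)."""
    return math.sin(2 * math.pi * freq_hz * t)

def digital_sample(t: float, freq_hz: float = 1.0) -> int:
    """Discontinuous signal: only discrete on/off states (a square wave)."""
    return 1 if analog_sample(t, freq_hz) >= 0 else 0

# A quarter of the way through one cycle, the analog wave peaks at 1.0,
# while the digital signal is simply "on".
print(round(analog_sample(0.25), 3), digital_sample(0.25))  # 1.0 1
```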
\nDigital communications occur through the use of a discontinuous electrical signal and a state \nchange or on-off pulses.\nSynchronous and Asynchronous\nSome communications are synchronized with some sort of clock or timing activity. Communi-\ncations are either synchronous or asynchronous. Synchronous communications rely upon a tim-\ning or clocking mechanism based upon either an independent clock or a time stamp embedded \nin the data stream. Synchronous communications are typically able to support very high rates \nof data transfer. Asynchronous communications rely upon a stop and start delimiter bit to man-\nage transmission of data. Due to the use of delimiter bits and the stop and start nature of its \ntransmission, asynchronous communication is best suited for smaller amounts of data. Stan-\ndard modems over normal telephone lines are good examples of asynchronous communication.\nBaseband and Broadband\nHow many communications can occur simultaneously over a cable segment depends on \nwhether you use baseband technology or broadband technology. Baseband technology can sup-\nport only a single communication channel. It uses a direct current applied to the cable. A current \nthat is on represents the binary signal of 1, and a current that is off represents the binary signal \nof 0. Ethernet is a baseband technology. Broadband technology can support multiple simulta-\nneous signals. Broadband uses frequency modulation to support numerous channels, each sup-\nporting a distinct communication session. Broadband is suitable for high-throughput rates, \nespecially when several channels are multiplexed. Cable television and cable modems, ISDN, \nDSL, T1, and T3 are examples of broadband technologies.\nBroadcast, Multicast, and Unicast\nAnother sub-technology determines how many destinations a single transmission can reach. \nThe options are broadcast, multicast, and unicast. A broadcast technology supports communi-\ncations to all possible recipients. 
A multicast technology supports communications to multiple specific recipients. A unicast technology supports only a single communication to a specific recipient.

LAN Media Access
Finally, there are at least five LAN media access technologies that are used to avoid or prevent transmission collisions.

Carrier-Sense Multiple Access (CSMA)
The LAN media access technology that performs communications using the following steps:
1. The host listens to the LAN media to determine if it is in use.
2. If the LAN media is not being used, the host transmits its communication.
3. The host waits for an acknowledgment.
4. If no acknowledgment is received after a timeout period, the host starts over at step 1.

Carrier-Sense Multiple Access with Collision Avoidance (CSMA/CA)
The LAN media access technology that performs communications using the following steps:
1. The host has two connections to the LAN media: inbound and outbound.
The host listens on the inbound connection to determine if the LAN media is in use.
2. If the LAN media is not being used, the host requests permission to transmit.
3. If permission is not granted after a timeout period, the host starts over at step 1.
4. If permission is granted, the host transmits its communication over the outbound connection.
5. The host waits for an acknowledgment.
6. If no acknowledgment is received after a timeout period, the host starts over at step 1.
AppleTalk and 802.11 wireless networking are examples of networks that employ CSMA/CA technologies.

Carrier-Sense Multiple Access with Collision Detection (CSMA/CD)
The LAN media access technology that performs communications using the following steps:
1. The host listens to the LAN media to determine if it is in use.
2. If the LAN media is not being used, the host transmits its communication.
3. While transmitting, the host listens for collisions (i.e., two or more hosts transmitting simultaneously).
4. If a collision is detected, the host transmits a jam signal.
5. If a jam signal is received, all hosts stop transmitting. Each host waits a random period of time and then starts over at step 1.
Ethernet networks employ the CSMA/CD technology.

Token Passing
The LAN media access technology that performs communications using a digital token. Possession of the token allows a host to transmit data. Once its transmission is complete, it releases the token to the next system. Token passing is used by token-passing networks such as Token Ring and FDDI.

Polling
The LAN media access technology that performs communications using a master-slave configuration. One system is labeled as the primary system. All other systems are labeled as secondary. The primary system polls or inquires of each secondary system in turn whether it has a need to transmit data.
If a secondary system indicates a need, it is granted permission to transmit. Once its transmission is complete, the primary system moves on to poll the next secondary system. Synchronous Data Link Control (SDLC) uses polling.

Network Topologies
The physical layout and organization of computers and networking devices is known as the network topology. The logical topology is the grouping of networked systems into trusted collectives. The physical topology is not always the same as the logical topology. There are four basic topologies for the physical layout of a network: ring, bus, star, and mesh.

Ring Topology
A ring topology connects each system as a point on a circle (see Figure 3.6). The connection medium acts as a unidirectional transmission loop. Only one system can transmit data at a time. Traffic management is performed by a token. A token is a digital hall pass that travels around the ring until a system grabs it. A system in possession of the token can transmit data. Data and the token are transmitted to a specific destination. As the data travels around the loop, each system checks whether it is the intended recipient of the data. If not, it passes the data on; if so, it reads the data. Once the data is received, the token is released and returns to traveling around the loop until another system grabs it. If any one segment of the loop is broken, all communication around the loop ceases. Some implementations of ring topologies employ a fault tolerance mechanism, such as dual loops running in opposite directions, to prevent single points of failure.

FIGURE 3.6 A ring topology

Bus Topology
A bus topology connects each system to a trunk or backbone cable. All systems on the bus can transmit data simultaneously, which can result in collisions.
A collision occurs when two systems transmit data at the same time; the signals interfere with each other. To avoid this, the systems employ a collision avoidance mechanism that basically "listens" for any other currently occurring traffic. If traffic is heard, the system waits a few moments and listens again. If no traffic is heard, the system transmits its data. When data is transmitted on a bus topology, all systems on the network hear the data. If the data is not addressed to a specific system, that system just ignores the data. The benefit of a bus topology is that if a single segment fails, communications on all other segments continue uninterrupted. However, the central trunk line remains a single point of failure.

There are two types of bus topologies: linear and tree. A linear bus topology employs a single trunk line with all systems directly connected to it. A tree topology employs a single trunk line with branches that can support multiple systems. Figure 3.7 illustrates both types.

FIGURE 3.7 A linear bus topology and a tree bus topology

Star Topology
A star topology employs a centralized connection device. This device can be a simple hub or switch. Each system is connected to the central hub by a dedicated segment (see Figure 3.8). If any one segment fails, the other segments can continue to function. However, the central hub is a single point of failure. Generally, the star topology uses less cabling than other topologies and makes the identification of damaged cables easier.

A logical bus and a logical ring can be implemented as a physical star. Ethernet is a bus-based technology. It can be deployed as a physical star, but the hub device is actually a logical bus connection device. Likewise, Token Ring is a ring-based technology. It can be deployed as a physical star using a multistation access unit (MAU).
An MAU allows the cable segments to be deployed as a star while internally the device makes logical ring connections.

FIGURE 3.8 A star topology

FIGURE 3.9 A mesh topology

Mesh Topology
A mesh topology connects systems to other systems using numerous paths (see Figure 3.9). A full mesh topology connects each system to all other systems on the network. A partial mesh topology connects many systems to many other systems. Mesh topologies provide redundant connections to systems, allowing multiple segment failures without seriously affecting connectivity.

TCP/IP Overview
The most widely used protocol is TCP/IP, but it is not just a single protocol; rather, it is a protocol stack comprising dozens of individual protocols (see Figure 3.10). TCP/IP is a platform-independent protocol based on open standards. However, this is both a benefit and a drawback. TCP/IP can be found in just about every available operating system, but it consumes a significant amount of resources and is relatively easy to hack into because it was designed for ease of use rather than for security.

FIGURE 3.10 The four layers of TCP/IP and its component protocols

TCP/IP can be secured using VPN links between systems. VPN links are encrypted to add privacy, confidentiality, and authentication and to maintain data integrity. Protocols used to establish VPNs include Point-to-Point Tunneling Protocol (PPTP), Layer 2 Tunneling Protocol (L2TP), and Internet Protocol Security (IPSec). Another method is to employ TCP wrappers. A TCP wrapper is an application that can serve as a basic firewall by restricting access based on user IDs or system IDs.
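The allow/deny decision a TCP wrapper makes can be sketched as a simple rule check. This is an illustrative toy model only: real TCP wrappers (tcpd) consult /etc/hosts.allow and /etc/hosts.deny, and the daemon names and rule format below are simplified assumptions.

```python
# hosts.allow-style rules: (daemon, client address prefix) pairs that are permitted.
ALLOW = [
    ("sshd", "192.168.1."),    # LAN hosts may reach the SSH daemon
    ("in.ftpd", "127.0.0.1"),  # FTP only from the local machine
]

def wrapper_permits(daemon: str, client_ip: str) -> bool:
    """Deny by default; permit only when an allow rule matches (simplified)."""
    return any(d == daemon and client_ip.startswith(prefix) for d, prefix in ALLOW)

print(wrapper_permits("sshd", "192.168.1.55"))  # True
print(wrapper_permits("sshd", "203.0.113.9"))   # False
```

The deny-by-default stance mirrors the common tcpd configuration of an ALL: ALL entry in hosts.deny with explicit exceptions in hosts.allow.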
Using TCP wrappers is a form of port-based access control.

Transport Layer Protocols
The two primary Transport layer protocols of TCP/IP are TCP and UDP. TCP is a connection-oriented protocol, whereas UDP is a connectionless protocol. When a communication connection is established between two systems, it is done using ports. TCP and UDP each have 65,536 ports. Since port numbers are 16-bit binary numbers, the total number of ports is 2^16, or 65,536, numbered from 0 through 65,535. A port (also called a socket) is little more than an address number that both ends of the communication link agree to use when transferring data. Ports allow a single IP address to support multiple simultaneous communications, each using a different port number.

The first 1,024 of these ports (0–1,023) are called the well-known ports or the service ports because they have standardized assignments as to the services they support. For example, port 80 is the standard port for Web (HTTP) traffic, port 23 is the standard port for Telnet, and port 25 is the standard port for SMTP.

Figure 3.10 maps the OSI layers to the four TCP/IP layers and their component protocols:

OSI Layers                          TCP/IP Layer          Protocols
Application, Presentation, Session  Process/Application   FTP, TFTP, Telnet, SMTP, SNMP, NFS, LPD, X Window
Transport                           Host-to-Host          TCP, UDP
Network                             Internet              IP, ICMP, ARP, RARP
Data Link, Physical                 Network Access        Ethernet, Fast Ethernet, Token Ring, FDDI

Transmission Control Protocol (TCP) operates at layer 4 (the Transport layer) of the OSI model. It supports full-duplex communications, is connection oriented, and employs reliable virtual circuits. TCP is connection-oriented because it employs a handshake process between two systems to establish a communication session. Upon completion of this handshake process, a communication session that can support data transmission between the client and server is established.
The three-way handshake process is as follows:
1. The client sends a SYN (synchronize) packet to the server.
2. The server responds with a SYN/ACK (synchronize and acknowledge) packet back to the client.
3. The client responds with an ACK (acknowledge) packet back to the server.

The segments of a TCP transmission are sequenced. This allows the receiver to rebuild the original communication by reordering received segments back into their proper arrangement in spite of the order in which they were received. Data communicated through a TCP session is periodically verified with an acknowledgment. The acknowledgment number identifies the next byte of data the receiver expects, implicitly confirming receipt of all earlier bytes, and each segment carries a checksum computed over its contents. If data arrives corrupted or goes unacknowledged, the sender retransmits it. The number of packets transmitted before an acknowledgment is sent is known as the transmission window. Data flow is controlled through a mechanism called sliding windows. TCP is able to use different sizes of windows (i.e., a different number of transmitted packets) before sending an acknowledgment. Larger windows allow for faster data transmission, but they should be used only on reliable connections where lost or corrupted data is minimal. Smaller windows should be used when the communication connection is unreliable. TCP should be employed when delivery of data is required. The IP header protocol field value for TCP is 6. The protocol field value is the label or flag found in the header of every IP packet that tells the receiving system what type of payload the packet carries. Think of it like the label on a mystery meat package wrapped in butcher paper that you pull out of the deep freeze. Without the label, you would have to open it and inspect it to figure out what it was.
But with the label, you can search or filter quickly to find items of interest.

User Datagram Protocol (UDP) also operates at layer 4 (the Transport layer) of the OSI model. It is a connectionless "best effort" communications protocol. It offers no error detection or correction, does not use sequencing, does not use flow control mechanisms, does not use a virtual circuit, and is considered unreliable. UDP has very low overhead and thus can transmit data quickly. However, UDP should be used only when delivery of data is not essential. UDP is often employed by real-time or streaming communications for audio or video. The IP header protocol field value for UDP is 17.

Network Layer Protocols
Another important protocol in the TCP/IP protocol suite operates at the Network layer of the OSI model: Internet Protocol (IP). IP provides route addressing for data packets. Similar to UDP, IP is connectionless and is an unreliable datagram service. IP does not guarantee that packets will be delivered, that packets will be delivered in the correct order, or that packets will be delivered only once. Thus, you must employ TCP on top of IP to gain reliable and controlled communication sessions.

Other protocols at the OSI model Network layer include ICMP, IGMP, and NAT.

ICMP
Internet Control Message Protocol (ICMP) is used to determine the health of a network or a specific link. ICMP is utilized by ping, traceroute, pathping, and other network management tools. The ping utility employs ICMP echo packets and bounces them off remote systems. Thus, ping can be used to determine whether the remote system is online, whether the remote system is responding promptly, whether the intermediary systems are supporting communications, and the level of performance efficiency at which the intermediary systems are communicating.
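As an illustration of the echo packets that ping relies on, the sketch below builds an ICMP echo request header and computes the standard Internet checksum (the RFC 1071 one's-complement fold). Actually transmitting the packet would require a raw socket and elevated privileges, so this example only constructs the bytes.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """Type 8 (echo request), code 0; checksum covers the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = icmp_echo_request(0x1234, 1, b"ping")
```

A correctly checksummed ICMP message has the property that recomputing the checksum over the entire message yields zero, which is how receivers validate it.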
\nping includes a redirect function that allows the echo responses to be sent to a different des-\ntination than the system of origin. Unfortunately, this ICMP capability is often exploited in \nvarious forms of bandwidth-based denial of service attacks. The IP header protocol field value \nfor ICMP is 1.\nIGMP\nInternet Group Management Protocol (IGMP) allows systems to support multicasting. Multi-\ncasting is the transmission of data to multiple specific recipients. (RFC 1112 discusses the \nrequirements to perform IGMP multicasting.) IGMP is used by IP hosts to register their dynamic \nmulticast group membership. It is also used by connected routers to discover these groups. The \nIP header protocol field value for IGMP is 2.\nARP and Reverse ARP\nAddress Resolution Protocol (ARP) and Reverse Address Resolution Protocol (RARP) are two \nimportant protocols you need to be familiar with. ARP is used to resolve IP addresses (32-bit \nbinary number for logical addressing) into MAC (Media Access Control) addresses. MAC \naddresses are the six-digit hexadecimal numbers (48-bit binary numbers for hardware address-\ning) assigned by manufacturers to network interface cards. Traffic on a network segment (e.g., \ncables across a hub) is directed from its source system to its destination system using MAC \naddresses. RARP is used to resolve MAC addresses into IP addresses.\nNAT\nNetwork Address Translation (NAT) was developed to allow private networks to use any IP \naddress set without causing collisions or conflicts with public Internet hosts with the same IP \naddresses. In effect, NAT translates the IP addresses of your internal clients to leased addresses \noutside of your environment. Most often, a private network employs the private IP addresses \ndefined in RFC 1918. The private IP address ranges are 10.0.0.0–10.255.255.255 (an entire Class \nA range), 172.16.0.0–172.31.255.255 (16 Class B ranges), and 192.168.0.0–192.168.255.255 \n(255 Class C ranges). 
These ranges of IP addresses are defined by default on routers as nonroutable. They are reserved for use by private networks. Attempting to use these addresses directly on the Internet is futile because all publicly accessible routers will drop data packets containing a source or destination IP address from these ranges.

Frequently, security professionals refer to NAT when they really mean PAT. By definition, NAT maps one internal IP address to one external IP address. However, Port Address Translation (PAT) maps one internal IP address to an external IP address and port number combination. Thus, PAT can theoretically support 65,536 (2^16) simultaneous communications from internal clients over a single external leased IP address. So with NAT, you must lease as many public IP addresses as the number of simultaneous communications you wish to support, while with PAT you can lease fewer IP addresses and obtain a reasonable 100:1 ratio of internal clients to external leased IP addresses.

NAT can be used in two modes: static and dynamic. Static mode NAT is used when a specific internal client's IP address is assigned a permanent mapping to a specific external public IP address. This allows external entities to communicate with systems inside your network even if you are using the RFC 1918 IP addresses. Dynamic mode NAT is used to grant multiple internal clients access to a few leased public IP addresses. Thus, a large internal network can still access the Internet without having to lease a large block of public IP addresses. This keeps public IP address usage abuse to a minimum and helps keep Internet access costs to a minimum. In a dynamic mode NAT implementation, the NAT system maintains a database of mappings so that all response traffic from Internet services is properly routed back to the original internal requesting client.
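The mapping database a dynamic NAT/PAT device maintains can be sketched as a two-way translation table. This is a toy model with invented names; real devices also track protocol, connection state, and timeouts.

```python
import itertools

class PatTable:
    """Toy PAT mapper: many internal (ip, port) pairs share one public IP,
    distinguished by the translated source port (simplified model)."""

    def __init__(self, public_ip, first_port=20000):
        self.public_ip = public_ip
        self._ports = itertools.count(first_port)
        self.outbound = {}  # (internal ip, internal port) -> public port
        self.inbound = {}   # public port -> (internal ip, internal port)

    def translate_out(self, int_ip, int_port):
        key = (int_ip, int_port)
        if key not in self.outbound:          # allocate a port on first use
            port = next(self._ports)
            self.outbound[key] = port
            self.inbound[port] = key
        return (self.public_ip, self.outbound[key])

    def translate_in(self, public_port):
        """Route a reply back to the internal client that opened the mapping."""
        return self.inbound[public_port]

nat = PatTable("203.0.113.5")
print(nat.translate_out("192.168.1.10", 51000))  # ('203.0.113.5', 20000)
print(nat.translate_in(20000))                   # ('192.168.1.10', 51000)
```

The inbound dictionary is what lets response traffic from Internet services find its way back to the original requesting client, as described above.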
Often NAT is combined with a proxy server or proxy firewall to provide additional Internet access and content-caching features. NAT is not directly compatible with IPSec because it modifies packet headers, which IPSec relies upon to prevent security violations.

Automatic Private IP Addressing (APIPA)
APIPA, or Automatic Private IP Addressing (not to be confused with the RFC 1918 private ranges), assigns an IP address to a system in the event of a DHCP assignment failure. APIPA is primarily a feature of Windows. APIPA assigns each failed DHCP client an IP address from the range 169.254.0.1 to 169.254.255.254, along with the default Class B subnet mask of 255.255.0.0. This allows the system to communicate with other APIPA-configured clients within the same broadcast domain but not with any system across a router or with a correctly assigned IP address.

It is a good idea to know how to convert between decimal, binary, and even hexadecimal. Also, don't forget how to convert from a dotted-decimal notation IP address (such as 172.16.1.1) to its binary equivalent (that is, 10101100000100000000000100000001). And it is probably not a bad idea to be able to convert the 32-bit binary number to a single decimal number (that is, 2886729985).

IP Classes
Basic knowledge of IP addressing and IP classes is a must for any security professional. If you are rusty on addressing, subnetting, classes, and other related topics, take the time to refresh yourself. Table 3.3 and Table 3.4 provide a quick overview of the key details of classes and default subnets.

The Loopback Address
Another IP address range that you should be careful not to confuse with RFC 1918 is the loopback address. The loopback address is purely a software entity. It is an IP address used to create a software interface that connects back to itself via the TCP/IP protocol.
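The conversion drill mentioned a few paragraphs back can be scripted to check your by-hand work (an illustrative Python sketch):

```python
def dotted_to_int(addr: str) -> int:
    """Fold four octets into one 32-bit number (172.16.1.1 -> 2886729985)."""
    a, b, c, d = (int(octet) for octet in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def dotted_to_bin(addr: str) -> str:
    """Render the same value as a 32-digit binary string."""
    return format(dotted_to_int(addr), "032b")

print(dotted_to_int("172.16.1.1"))  # 2886729985
print(dotted_to_bin("172.16.1.1"))  # 10101100000100000000000100000001
```

Each octet simply occupies one byte of the 32-bit value, which is why the shift amounts are 24, 16, 8, and 0.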
The loopback address allows for testing of local network settings in spite of missing, damaged, or nonfunctional network hardware and/or related device drivers. Technically, the entire 127.x.x.x network is reserved for loopback use. However, only the 127.0.0.1 address is widely used. Windows XP SP2 (and possibly other OS updates) recently restricted clients to use only 127.0.0.1 as the loopback address. This caused several applications that used other addresses in the upper ranges of the 127.x.x.x network to fail. In restricting client use to only 127.0.0.1, Microsoft has attempted to open up an otherwise wasted Class A address range. Even if this tactic is successful for Microsoft, it will affect only modern Windows systems.

TABLE 3.3 IP Classes

Class   First Binary Digits   Decimal Range of First Octet
A       0                     1–126
B       10                    128–191
C       110                   192–223
D       1110                  224–239
E       1111                  240–255

TABLE 3.4 IP Class Default Subnet Masks

Class   Default Subnet Mask   CIDR Equivalent
A       255.0.0.0             /8
B       255.255.0.0           /16
C       255.255.255.0         /24
D       none defined          n/a
E       none defined          n/a

Another option for subnetting is to use Classless Inter-Domain Routing (CIDR). CIDR uses mask bits rather than a full dotted-decimal notation subnet mask. Thus, instead of 255.255.0.0, a mask-bit count is added to the IP address after a slash, e.g., 172.16.1.1/16.

Common Application Layer Protocols
In the Application layer of the TCP/IP model (which includes the Session, Presentation, and Application layers of the OSI model) reside numerous application- or service-specific protocols.
A basic knowledge of these protocols and their relevant service ports is important for the CISSP exam:

Telnet, port 23
A terminal emulation network application that supports remote connectivity for executing commands and running applications but that does not support transfer of files.

File Transfer Protocol (FTP), ports 20, 21
A network application that supports an exchange of files and that requires anonymous or specific authentication.

Trivial File Transfer Protocol (TFTP), port 69
A network application that supports an exchange of files but does not require authentication.

Simple Mail Transfer Protocol (SMTP), port 25
A protocol used to transmit e-mail messages from a client to an e-mail server and from one e-mail server to another.

Post Office Protocol (POP3), port 110
A protocol used to pull e-mail messages from an inbox on an e-mail server down to an e-mail client.

Internet Message Access Protocol (IMAP4), port 143
A protocol used to pull e-mail messages from an inbox on an e-mail server down to an e-mail client. IMAP is more secure than POP3 and offers the ability to pull headers down from the e-mail server as well as to delete messages directly off the e-mail server without having to download them to the local client first.

Dynamic Host Configuration Protocol (DHCP), ports 67 and 68
DHCP uses port 67 for server point-to-point response and port 68 for client request broadcast. It is used to assign TCP/IP configuration settings to systems upon bootup. DHCP enables centralized control of network addressing.

HyperText Transfer Protocol (HTTP), port 80
The protocol used to transmit web page elements from a web server to web browsers.

Secure Sockets Layer (SSL), port 443
A VPN-like security protocol that operates at the Session layer.
SSL was originally designed to support secured Web communications (HTTPS) but is capable of securing any Application layer protocol's communications.

Line Print Daemon (LPD)
A network service that is used to spool print jobs and to send print jobs to printers.

X Window
A GUI API for operating systems.

Bootstrap Protocol (BootP)
A protocol used to connect diskless workstations to a network through auto-assignment of IP configuration and download of basic OS elements. BootP is the forerunner to Dynamic Host Configuration Protocol (DHCP).

Network File System (NFS)
A network service used to support file sharing between dissimilar systems.

Simple Network Management Protocol (SNMP), port 161
A network service used to collect network health and status information by polling monitored devices from a central monitoring station.

TCP/IP's vulnerabilities are numerous. Improperly implemented TCP/IP stacks in various operating systems are vulnerable to buffer overflows, SYN flood attacks, various DoS attacks, fragment attacks, oversized packet attacks, spoofing attacks, man-in-the-middle attacks, hijack attacks, and coding error attacks.

In addition to these intrusive attacks, TCP/IP (as well as most protocols) is also subject to passive attacks via monitoring or sniffing. Network monitoring is the act of monitoring traffic patterns to obtain information about a network. Packet sniffing is the act of capturing packets from the network in hopes of extracting useful information from the packet contents. Effective packet sniffers can extract usernames, passwords, e-mail addresses, encryption keys, credit card numbers, IP addresses, system names, and so on.

Internet/Intranet/Extranet Components
The Internet is the global network of interconnected networks that provides the wealth of information we know as the World Wide Web.
The Internet is host to countless information services and numerous applications, including the Web, e-mail, FTP, Telnet, newsgroups, chat, and so on. The Internet is also home to malicious persons whose primary goal is to locate your computer and extract valuable data from it, use it to launch further attacks, or damage it in some way. You should be familiar with the Internet and able to readily identify its benefits and drawbacks from your own online experiences. Due to the success and global use of the Internet, many of its technologies were adapted or integrated into private business networks. This created two new forms of network: intranets and extranets.

An intranet is a private network that is designed to host the same information services found on the Internet. Networks that rely upon external servers (i.e., ones positioned on the public Internet) to provide information services internally are not considered intranets. Intranets provide users with access to the Web, e-mail, and other services on internal servers that are not accessible to anyone outside of the private network.

An extranet is a cross between the Internet and an intranet. An extranet is a section of an organization's network that has been sectioned off so that it acts as an intranet for the private network but also serves information out to the public Internet. An extranet is often reserved for use by specific partners or customers. It is rarely on a public network. An extranet for public consumption is typically labeled a demilitarized zone (DMZ), or perimeter network.

When you're designing a secure network (whether a private network, an intranet, or an extranet), there are numerous networking devices that must be evaluated.
Not all of these components are necessary for a secure network, but they are all common network devices that may have an impact on network security.

Firewalls
Firewalls are essential tools in managing and controlling network traffic. A firewall is a network device used to filter traffic. It is typically deployed between a private network and a link to the Internet, but it can also be deployed between departments within an organization. Without firewalls, it would not be possible to restrict malicious traffic from the Internet from entering into your private network. Firewalls filter traffic based on a defined set of rules, also called filters or access control lists. These are basically sets of instructions used to distinguish authorized traffic from unauthorized and/or malicious traffic. Only authorized traffic is allowed to cross the security barrier provided by the firewall.

Firewalls are useful for blocking or filtering traffic. They are most effective against unrequested traffic, attempts to connect from outside the private network, and known malicious data, messages, or packets, which they can block based on content, application, protocol, port, or source address. They are capable of hiding the structure and addressing scheme of a private network from the public. Most firewalls offer extensive logging, auditing, and monitoring capabilities, as well as alarms and basic intrusion detection system (IDS) functions.
Firewalls are unable to block viruses or malicious code transmitted through otherwise authorized communication channels, prevent unauthorized but accidental or intended disclosure of information by users, prevent attacks by malicious users already behind the firewall, or protect data after it passes out of or into the private network.

In addition to logging network traffic activity, firewalls should log several other events as well:
- Reboot of the firewall
- Proxies or dependencies that cannot start or did not start
- Proxies or other important services that have crashed or restarted
- Changes to the firewall configuration file
- A configuration or system error while the firewall is running

Firewalls are only one part of an overall security solution. With a firewall, many of the security mechanisms are concentrated in one place, and thus they may be a single point of failure. Firewall failure is most commonly caused by human error and misconfiguration. Firewalls provide protection only against traffic that crosses the firewall from one subnet to another. They offer no protection against traffic within a subnet (i.e., behind a firewall).

There are four basic types of firewalls: static packet-filtering firewalls, application-level gateway firewalls, circuit-level gateway firewalls, and stateful inspection firewalls. There are also ways to create hybrid or complex gateway firewalls by combining two or more of these firewall types into a single firewall solution. In most cases, having a multilevel firewall provides greater control over filtering traffic. Regardless, let's look at the various firewall types and discuss firewall deployment architectures as well.

Static Packet-Filtering Firewall
A static packet-filtering firewall filters traffic by examining data from a message header. Usually, the rules are concerned with source, destination, and port addresses.
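A static packet filter of this kind can be sketched as a first-match walk over a rule list. This is a simplified illustration with invented rule fields; real firewalls match far richer header data and use structured address prefixes rather than string comparison.

```python
from typing import NamedTuple, Optional

class Rule(NamedTuple):
    action: str          # "allow" or "deny"
    src: str             # source address prefix; "" matches any
    dst: str             # destination address prefix; "" matches any
    port: Optional[int]  # None matches any port

def filter_packet(rules, src, dst, port):
    """First matching rule wins; deny by default if nothing matches."""
    for r in rules:
        if src.startswith(r.src) and dst.startswith(r.dst) and r.port in (None, port):
            return r.action
    return "deny"

RULES = [
    Rule("allow", "", "203.0.113.10", 80),  # anyone may reach the web server
    Rule("deny", "", "", None),             # explicit default deny
]
print(filter_packet(RULES, "198.51.100.7", "203.0.113.10", 80))  # allow
print(filter_packet(RULES, "198.51.100.7", "203.0.113.10", 23))  # deny
```

Note what the model cannot express: nothing here authenticates the user or checks whether the claimed source address is genuine, which is exactly the weakness of static filtering discussed next.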
Using static filtering, \na firewall is unable to provide user authentication or to tell whether a packet originated from \n" }, { "page_number": 143, "text": "98\nChapter 3\n\u0002 ISO Model, Network Security, and Protocols\ninside or outside the private network, and it is easily fooled with spoofed packets. Static packet-\nfiltering firewalls are known as first-generation firewalls; they operate at layer 3 (the Network \nlayer) of the OSI model. They can also be called screening routers or common routers.\nApplication-Level Gateway Firewall\nAn application-level gateway firewall is also called a proxy firewall. A proxy is a mechanism \nthat copies packets from one network into another; the copy process also changes the source \nand destination address to protect the identity of the internal or private network. An applica-\ntion-level gateway firewall filters traffic based on the Internet service (i.e., application) used to \ntransmit or receive the data. Each type of application must have its own unique proxy server. \nThus, an application-level gateway firewall comprises numerous individual proxy servers. This \ntype of firewall negatively affects network performance because each packet must be examined \nand processed as it passes through the firewall. Application-level gateways are known as sec-\nond-generation firewalls, and they operate at the Application layer (layer 7) of the OSI model.\nCircuit-Level Gateway Firewalls\nCircuit-level gateway firewalls are used to establish communication sessions between trusted \npartners. They operate at the Session layer (layer 5) of the OSI model. SOCKS (SOCKetS, as in \nTCP/IP ports) is a common implementation of a circuit-level gateway firewall. Circuit-level \ngateway firewalls, also known as circuit proxies, manage communications based on the circuit, \nnot the content of traffic. 
They permit or deny forwarding decisions based solely on the endpoint designations of the communication circuit (i.e., the source and destination addresses and service port numbers). Circuit-level gateway firewalls are considered second-generation firewalls because they represent a modification of the application-level gateway firewall concept.

Stateful Inspection Firewalls

Stateful inspection firewalls evaluate the state or the context of network traffic. By examining source and destination addresses, application usage, source of origin, and the relationship between current packets and the previous packets of the same session, stateful inspection firewalls are able to grant a broader range of access for authorized users and activities and actively watch for and block unauthorized users and activities. Stateful inspection firewalls generally operate more efficiently than application-level gateway firewalls. They are known as third-generation firewalls, and they operate at the Network and Transport layers (layers 3 and 4) of the OSI model.

Multihomed Firewalls

Some firewall systems have more than one interface. For instance, a multihomed firewall must have at least two interfaces to filter traffic (they're also known as dual-homed firewalls). All multihomed firewalls should have IP forwarding disabled to force the filtering rules to control all traffic rather than allowing a software-supported shortcut between one interface and another. A bastion host or a screened host is just a firewall system logically positioned between a private network and an untrusted network. Usually, the bastion host is located behind the router that connects the private network to the untrusted network. All inbound traffic is routed to the bastion host, which in turn acts as a proxy for all of the trusted systems within the private network.
It is responsible for filtering traffic coming into the private network as well as for protecting the identity of the internal client. A screened subnet is similar to the screened host in concept, except a subnet is placed between two routers and the bastion host is located within that subnet. All inbound traffic is directed to the bastion host, and only traffic proxied by the bastion host can pass through the second router into the private network.

Firewall Deployment Architectures

There are three commonly recognized firewall deployment architectures: single-tier, two-tier, and three-tier (also known as multitier). As you can see in Figure 3.11, a single-tier deployment places the private network behind a firewall, which is then connected through a router to the Internet (or some other untrusted network). Single-tier deployments are useful against generic attacks only. This architecture offers only minimal protection.

A two-tier deployment architecture uses a firewall with three or more interfaces. This allows for a DMZ or a publicly accessible extranet. The DMZ is used to host information server systems to which external users should have access. The firewall routes traffic to the DMZ or the trusted network according to its strict filtering rules. This architecture introduces a moderate level of routing and filtering complexity.

FIGURE 3.11  Three firewall deployment architectures (diagram showing the single-tier, two-tier with DMZ, and three-tier with DMZ, transaction subnet, and backoffice subnet layouts)

A three-tier deployment architecture is the deployment of multiple subnets between the private network and the Internet separated by firewalls.
Each subsequent firewall has more stringent filtering rules to restrict traffic to only trusted sources. The outermost subnet is usually a DMZ. A middle subnet can serve as a transaction subnet where systems needed to support complex web applications in the DMZ reside. The third or back-end subnet can support the private network. This architecture is the most secure; however, it is also the most complex to design, implement, and manage.

Other Network Devices

There are numerous devices used in the construction of a network. Strong familiarity with the components of network building can assist you in designing an IT infrastructure that avoids single points of failure and provides strong support for availability.

Repeaters, concentrators, and amplifiers  Repeaters, concentrators, and amplifiers are used to strengthen the communication signal over a cable segment as well as connect network segments that use the same protocol. These devices can be used to extend the maximum length of a specific cable type by deploying one or more repeaters along a lengthy cable run. Repeaters, concentrators, and amplifiers operate at OSI layer 1. Systems on either side of a repeater, concentrator, or amplifier are part of the same collision domain and broadcast domain.

Hubs  Hubs are used to connect multiple systems in a star topology and connect network segments that use the same protocol. They repeat inbound traffic over all outbound ports. This ensures that the traffic will reach its intended host. A hub is a multiport repeater. Hubs operate at OSI layer 1. Systems on either side of a hub are part of the same collision and broadcast domains.

Switches  Rather than using a hub, you might consider using a switch, or intelligent hub. Switches know the addresses of the systems connected on each outbound port.
Instead of repeating traffic on every outbound port, a switch repeats only traffic out of the port on which the destination is known to exist. Switches offer greater efficiency for traffic delivery, create separate collision domains, and improve the overall throughput of data. Switches can also create separate broadcast domains when used to create VLANs. In such configurations, broadcasts are allowed within a single VLAN but not allowed to cross unhindered from one VLAN to another. Switches operate primarily at OSI layer 2. When switches have additional features, such as routing, they can operate at OSI layer 3 as well (such as when routing between VLANs). Systems on either side of a switch operating at layer 2 are part of the same broadcast domain but are in different collision domains. Systems on either side of a switch operating at layer 3 are part of different broadcast domains and different collision domains. Switches are used to connect network segments that use the same protocol.

Bridges  A bridge is used to connect two networks together, even networks of different topologies, cabling types, and speeds, in order to connect network segments that use the same protocol. A bridge forwards traffic from one network to another. Bridges that connect networks using different transmission speeds may have a buffer to store packets until they can be forwarded on to the slower network. This is known as a store-and-forward device. Bridges operate at OSI layer 2. Systems on either side of a bridge are part of the same broadcast domain but are in different collision domains.

Routers  Routers are used to control traffic flow on networks and are often used to connect similar networks and control traffic flow between the two. They can function using statically defined routing tables or they can employ a dynamic routing system.
There are numerous dynamic routing protocols, such as RIP, OSPF, and BGP. Routers operate at OSI layer 3. Systems on either side of a router are part of different broadcast domains and different collision domains. Routers are used to connect network segments that use the same protocol.

Brouter  Brouters are combination devices comprising a router and a bridge. A brouter attempts to route first, but if that fails it defaults to bridging. Thus, a brouter operates primarily at layer 3 but can operate at layer 2 when necessary. Systems on either side of a brouter operating at layer 3 are part of different broadcast domains and different collision domains. Systems on either side of a brouter operating at layer 2 are part of the same broadcast domain but are in different collision domains. Brouters are used to connect network segments that use the same protocol.

Gateways  A gateway connects networks that are using different network protocols. A gateway is responsible for transferring traffic from one network to another by transforming the format of that traffic into a form compatible with the protocol or transport method used by each network. Gateways, also known as protocol translators, can be stand-alone hardware devices or a software service. Systems on either side of a gateway are part of different broadcast domains and different collision domains. Gateways are used to connect network segments that use different protocols. There are many types of gateways, including data, mail, application, secure, and Internet. Gateways typically operate at OSI layer 7.

Proxies  A proxy is a form of gateway that does not translate across protocols. Instead, proxies serve as mediators, filters, caching servers, and even NAT/PAT servers for a network. A proxy performs a function or requests a service on behalf of another system and connects network segments that use the same protocol.
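To make the mediator role concrete, here is a minimal, hypothetical sketch (not a production proxy, and not from the exam material) of how a proxy can stand in for internal clients by rewriting the source of each request and remembering who asked:

```python
# Sketch of a proxy's request mapping (all addresses invented).
# Outbound requests appear to come from the proxy; a mapping table
# lets the proxy route each reply back to the right internal client.
import itertools

PROXY_ADDR = "203.0.113.5"
_next_port = itertools.count(40000)   # proxy-side port handed out per request

mappings = {}  # proxy-side port -> originating internal client


def outbound(client_addr, request):
    port = next(_next_port)
    mappings[port] = client_addr
    # the request now carries the proxy's address, not the client's
    return {"src": (PROXY_ADDR, port), "payload": request}


def inbound(reply_port, reply):
    client = mappings.pop(reply_port)  # look up who made the request
    return client, reply


pkt = outbound("10.0.0.7", "GET /")
client, data = inbound(pkt["src"][1], "200 OK")
print(client)  # 10.0.0.7
```

The mapping table is the key idea: external hosts only ever see the proxy's address, exactly the identity-protection behavior described next.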
Proxies are most often used in the context of providing clients on a private network with Internet access while protecting the identity of the clients. A proxy accepts requests from clients, alters the source address of the requester, maintains a mapping of requests to clients, and sends the altered request packets out. Once a reply is received, the proxy server determines which client it is destined for by reviewing its mappings and then sends the packets on to the client. Systems on either side of a proxy are part of different broadcast domains and different collision domains.

LAN extender  A LAN extender is a remote access, multilayer switch used to connect distant networks over WAN links. This is a strange beast of a device in that it creates WANs, but marketers of this device steer clear of the WAN term and use only the LAN or extended LAN terms. The idea behind this device was to make the terminology easier to understand and easier to sell than a normal WAN device with complex concepts and terms tied to it. Ultimately, it was the exact same product as a WAN switch or WAN router. (We agree with Douglas Adams, who believes the marketing people should be shipped out with the lawyers and phone sanitizers on the first ship to the far end of the universe.)

Remote Access Security Management

Telecommuting, or remote connectivity, has become a common feature of business computing. Remote access is the ability of a distant client to establish a communication session with a network. This can take the form of using a modem to dial up directly to a remote access server, connecting to a network over the Internet through a VPN, or even connecting to a terminal server system through a thin-client connection. The first two examples use fully capable clients.
They establish connections just as if they were directly connected to the LAN. The last example, with terminal server, establishes a connection from a thin client. In such a situation, all computing activities occur on the terminal server system rather than on the distant client.

When remote access capabilities are deployed in any environment, security must be considered and implemented to provide protection for your private network against remote access complications. Remote access users should be strongly authenticated before being granted access. Only those users who specifically need remote access for their assigned work tasks should be granted permission to establish remote connections. All remote communications should be protected from interception and eavesdropping. This usually requires an encryption solution that provides strong protection for both the authentication traffic as well as all data transmission.

When outlining your remote access security management strategy, be sure to address the following issues:

Remote connectivity technology  Each type of connection has its own unique security issues. Fully examine every aspect of your connection options. This can include modems, DSL, ISDN, wireless networking, and cable modems.

Transmission protection  There are several forms of encrypted protocols, encrypted connection systems, and encrypted network services or applications. Use the appropriate combination of secured services for your remote connectivity needs. This can include VPNs, SSL, TLS, Secure Shell (SSH), IPSec, and L2TP.

Authentication protection  In addition to protecting data traffic, you must also ensure that all logon credentials are properly secured. This requires the use of an authentication protocol and may mandate the use of a centralized remote access authentication system.
This can include Password Authentication Protocol (PAP), Challenge Handshake Authentication Protocol (CHAP), Extensible Authentication Protocol (EAP), Remote Authentication Dial-In User Service (RADIUS), and Terminal Access Controller Access Control System (TACACS).

Remote user assistance  Remote access users may periodically require technical assistance. You must have a means established to provide this as efficiently as possible. This can include addressing software and hardware issues, user training issues, and so on.

The ability to use remote access or establish a remote connection should be tightly controlled. As mentioned earlier, only those users who require remote access for their work tasks should be granted such access. You can control and restrict use of remote connectivity by using filters, rules, or access controls based on user identity, workstation identity, protocol, application, content, and time of day. To provide protection and restriction of remote access only to authorized users, you can use callback and caller ID. Callback is a mechanism that disconnects a remote user upon initial contact and then immediately attempts to reconnect to them using a predefined phone number (i.e., the number defined in the user account's security database). Callback does have a user-defined mode. However, this mode is not used for security; it is used to reverse toll charges to the company rather than charging the remote client. Caller ID verification can be used for the same purpose as callback: by verifying the physical location (via phone number) of the authorized user.

It should be a standard element in your security policy that no unauthorized modems be present on any system connected to the private network.
You may need to further specify this policy by indicating that portable systems must either remove their modems before connecting to the network or boot with a hardware profile that disables the modem's device driver.

Network and Protocol Security Mechanisms

TCP/IP is the primary protocol used on most networks and on the Internet. It is a robust protocol, but it has numerous security deficiencies. In an effort to improve the security of TCP/IP, many subprotocols, mechanisms, or applications have been developed to protect the confidentiality, integrity, and availability of transmitted data. It is important to remember that even with the single foundational protocol of TCP/IP, there are literally hundreds, if not thousands, of individual protocols, mechanisms, and applications in use across the Internet. Some of them are designed to provide security services. Some protect integrity, others confidentiality, and others provide authentication and access control. In the next sections, some of the more common network and protocol security mechanisms are discussed.

VPN Protocols

A virtual private network (VPN) protocol is used to establish a secured tunnel for communications across an untrusted network. That network can be the Internet or a private network. The VPN can link two networks or two individual systems. VPNs can link clients, servers, routers, firewalls, and switches. VPNs are also helpful in providing security for legacy applications that rely upon risky or vulnerable communication protocols or methodologies, especially when communicating across a network.

Point-to-Point Tunneling Protocol (PPTP) is an enhancement of PPP that creates encrypted tunnels between communication endpoints. PPTP is used on VPNs, but it is often replaced by the Layer 2 Tunneling Protocol (L2TP), which uses IPSec to provide traffic encryption for VPNs.
L2TP was created by combining elements of PPTP and L2F (Layer 2 Forwarding), a VPN protocol from Cisco.

IP Security (IPSec) is a standards-based mechanism for providing encryption for point-to-point TCP/IP traffic. IPSec has two primary components or functions: Authentication Header (AH) and Encapsulating Security Payload (ESP). AH provides authentication, integrity, and nonrepudiation. ESP provides encryption to protect the confidentiality of transmitted data, but it can also perform limited authentication. IPSec is often used in a VPN in either transport or tunnel mode. In transport mode, the IP packet data is encrypted but the header of the packet is not. In tunnel mode, the entire IP packet is encrypted and a new header is added to the packet to govern transmission through the tunnel. IPSec functions at layer 3 of the OSI model.

Table 3.5 illustrates the main characteristics of VPN protocols.

A VPN device is a network add-on device used to create VPN tunnels separately from server or client OSes. Use of VPN devices is transparent to networked systems.

Secure Communications Protocols

Protocols that provide security services for application-specific communication channels are called secure communication protocols. Simple Key Management for IP (SKIP) is an encryption tool used to protect sessionless datagram protocols. SKIP was designed to integrate with IPSec and functions at layer 3. SKIP is able to encrypt any subprotocol of the TCP/IP suite.

Software IP encryption (SWIPE) is another layer 3 security protocol for IP.
It provides authentication, integrity, and confidentiality using an encapsulation protocol.

Secure Remote Procedure Call (S-RPC) is an authentication service and is simply a means to prevent unauthorized execution of code on remote systems.

Secure Sockets Layer (SSL) is an encryption protocol developed by Netscape to protect the communications between a web server and a web browser. SSL can be used to secure Web, e-mail, FTP, or even Telnet traffic. It is a session-oriented protocol that provides confidentiality and integrity. SSL is deployed using a 40-bit key or a 128-bit key.

TABLE 3.5  VPN Characteristics

VPN Protocol | Native Authentication Protection | Native Data Encryption | Protocols Supported | Dial-Up Links Supported | Number of Simultaneous Connections
PPTP         | Y | N                 | IP only | Y | Single point to point
L2F          | Y | N                 | IP only | Y | Single point to point
L2TP         | Y | N (can use IPSec) | Any     | Y | Single point to point
IPSec        | Y | Y                 | IP only | N | Multiple

E-Mail Security Solutions

E-mail is inherently insecure. Internet e-mail relies primarily upon Simple Mail Transfer Protocol (SMTP). SMTP provides no security services. In fact, all e-mail transmitted over the Internet is transmitted in cleartext. Thus, messages that are intercepted or subjected to eavesdropping attacks can be easily read. The only means to provide protection for e-mail is to add encryption to the client applications used. The following paragraphs describe four common e-mail security solutions.

Secure Multipurpose Internet Mail Extensions (S/MIME) secures the transmission of e-mail and attachments. S/MIME provides protection through public key encryption and digital signatures. Two types of messages can be formed using S/MIME: signed messages and enveloped messages. A signed message provides integrity and sender authentication.
An enveloped message provides integrity, sender authentication, and confidentiality.

Secure Electronic Transaction (SET) is a security protocol for the transmission of transactions over the Internet. SET is based on Rivest, Shamir, and Adleman (RSA) encryption and Data Encryption Standard (DES). It has the support of major credit card companies, such as Visa and MasterCard.

Privacy Enhanced Mail (PEM) is an e-mail encryption mechanism that provides authentication, integrity, confidentiality, and nonrepudiation. PEM is a layer 7 protocol and uses RSA, DES, and X.509.

Pretty Good Privacy (PGP) is a public-private key system that uses the IDEA algorithm to encrypt files and e-mail messages. PGP is not a standard, but rather an independently developed product that has wide Internet grassroots support.

Dial-Up Protocols

When a remote connection link is established, some protocol must be used to govern how the link is actually created and to establish a common communication foundation for other protocols to work over. Dial-up protocols provide this function not only for true dial-up links but also for some VPN links.

One of the many proprietary dial-up protocols is Microcom Networking Protocol (MNP). MNP was found on Microcom modems in the 1990s. It supports its own form of error control called Echoplex.

Point-to-Point Protocol (PPP) is a full-duplex protocol used for the transmission of TCP/IP packets over various non-LAN connections, such as modems, ISDN, VPNs, Frame Relay, and so on. PPP is widely supported and is the transport protocol of choice for dial-up Internet connections. PPP authentication is protected through the use of various protocols, such as CHAP or PAP.
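CHAP's challenge-response handshake, covered further in the next section, can be illustrated with a short sketch based on the MD5 response computation defined in RFC 1994. The identifier, secret, and challenge values here are invented for illustration:

```python
# Sketch of CHAP's challenge-response (per RFC 1994): the response is
# MD5(identifier || shared secret || challenge), so the secret itself
# never crosses the link. All values below are made up.
import hashlib
import os

SECRET = b"s3cret-shared-key"  # known to both ends, never transmitted


def chap_response(identifier: int, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + SECRET + challenge).digest()


# Server side: issue a fresh random challenge (a new one each time
# defeats replay of previously captured responses)
identifier, challenge = 1, os.urandom(16)

# Client side: prove knowledge of the secret
response = chap_response(identifier, challenge)

# Server side: recompute locally and compare
print(response == chap_response(identifier, challenge))  # True
```

Because the server picks a new random challenge for every authentication (including the periodic reauthentications CHAP performs mid-session), a captured response is useless later.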
PPP is a replacement for SLIP and can support any LAN protocol, not just TCP/IP.

Serial Line Internet Protocol (SLIP) is an older technology developed to support TCP/IP communications over asynchronous serial connections, such as serial cables or modem dial-up. SLIP is rarely used but is still supported on many systems. SLIP can support only IP, requires static IP addresses, offers no error detection or correction, and does not support compression.

Authentication Protocols

After a connection is initially established between a remote system and a server or a network, the first activity that should take place is to verify the identity of the remote user. This activity is known as authentication. There are several authentication protocols that control how the logon credentials are exchanged and whether or not those credentials are encrypted during transport.

Challenge Handshake Authentication Protocol (CHAP) is one of the authentication protocols used over PPP links. CHAP encrypts usernames and passwords. It performs authentication using a challenge-response dialog that cannot be replayed. CHAP also periodically reauthenticates the remote system throughout an established communication session to verify the persistent identity of the remote client. This activity is transparent to the user.

Password Authentication Protocol (PAP) is a standardized authentication protocol for PPP. PAP transmits usernames and passwords in the clear. It offers no form of encryption; it simply provides a means to transport the logon credentials from the client to the authentication server.

Extensible Authentication Protocol (EAP) is a framework for authentication instead of an actual protocol.
EAP allows customized authentication security solutions, such as supporting smart cards, tokens, and biometrics.

Centralized Remote Authentication Services

As remote access becomes a key element in an organization's business functions, it is often important to add additional layers of security between remote clients and the private network. Centralized remote authentication services, such as RADIUS and TACACS, provide this extra layer of protection. These mechanisms provide a separation of the authentication and authorization processes for remote clients from that performed for LAN or local clients. If the RADIUS or TACACS servers are ever compromised, then only remote connectivity is affected, not the rest of the network.

Remote Authentication Dial-In User Service (RADIUS) is used to centralize the authentication of remote dial-up connections. A network that employs a RADIUS server is configured so the remote access server passes dial-up user logon credentials to the RADIUS server for authentication. This process is similar to the process used by domain clients sending logon credentials to a domain controller for authentication.

Terminal Access Controller Access Control System (TACACS) is an alternative to RADIUS. TACACS is available in three versions: original TACACS, Extended TACACS (XTACACS), and TACACS+. TACACS integrates the authentication and authorization processes. XTACACS keeps the authentication, authorization, and accounting processes separate. TACACS+ improves XTACACS by adding two-factor authentication. TACACS operates similarly to RADIUS and provides the same functionality as RADIUS.

Network and Protocol Services

Another aspect of networking is the protocol services used to connect a LAN to WAN communication technologies.
A basic knowledge of these services is important for anyone working in a security field or serving as a network manager. The following sections introduce some key issues about several WAN communication technologies.

Remote Access and Telecommuting Techniques

There are three main types of remote access techniques: service specific, remote control, and remote node operation. Service-specific remote access gives users the ability to remotely connect to and manipulate or interact with a single service, such as e-mail. Remote control remote access grants a remote user the ability to fully control another system that is physically distant from them. The monitor and keyboard act as if they are directly connected to the remote system. Remote node operation is just another name for dial-up connectivity. A remote system connects to a remote access server. That server provides the remote client with network services and possible Internet access.

Telecommuting is performing work at a location other than the primary office. In fact, there is a good chance that you perform some form of telecommuting as part of your current job. Telecommuting clients can use any or all of these remote access techniques to establish connectivity to the central office LAN.

Frame Relay

Frame Relay is a layer 2 connection mechanism that uses packet-switching technology to establish virtual circuits between communication endpoints. Unlike dedicated or leased lines, for which cost is based primarily on the distance between endpoints, Frame Relay's cost is primarily based on the amount of data transferred. The Frame Relay network is a shared medium across which virtual circuits are created to provide point-to-point communications. All virtual circuits are independent of and invisible to each other. Companies using Frame Relay establish a Committed Information Rate (CIR) contract that guarantees a minimum bandwidth for their communications at all times. However, if additional bandwidth is required and the Frame Relay network can support additional traffic, the virtual circuit can automatically expand to allow a higher throughput rate. Frame Relay is a connection-oriented service.

Frame Relay requires the use of data terminal equipment (DTE) and data circuit-terminating equipment (DCE) at each connection point. The customer owns the DTE, which acts like a router or a switch and provides the customer's network with access to the Frame Relay network. The Frame Relay service provider owns the DCE, which performs the actual transmission of data over the Frame Relay as well as establishing and maintaining the virtual circuit for the customer.

There are two types of virtual circuits: permanent virtual circuit (PVC) and switched virtual circuit (SVC). A PVC is a predefined virtual circuit that is always available. The virtual circuit may be closed down when not in use, but it can be instantly reopened whenever needed. An SVC is more like a dial-up connection. Each time the customer needs to transmit data over Frame Relay, a new virtual circuit is established using the best paths currently available. A PVC is like a two-way radio or walkie-talkie. Whenever communication is needed, you press the button and start talking; the radio reopens the predefined frequency automatically (i.e., the virtual circuit). An SVC is more like a shortwave or ham radio. You must tune the transmitter and receiver to a new frequency every time you want to communicate with someone.

Other WAN Technologies

Switched Multimegabit Data Services (SMDS) is a connectionless network communication service. It provides bandwidth on demand and is a preferred connection mechanism for linking remote LANs that communicate infrequently.
SMDS is often a competitor of Frame Relay.

X.25 is an older WAN protocol that uses a carrier switch to provide end-to-end connections over a shared network medium. It is the predecessor to Frame Relay and operates in much the same fashion. However, X.25 use is declining due to its lower performance and throughput rates when compared to Frame Relay or ATM.

Asynchronous transfer mode (ATM) is a cell-switching technology, as opposed to a packet-switching technology like Frame Relay. ATM uses virtual circuits much like Frame Relay, but because it uses fixed-size frames or cells, it can guarantee throughput. This makes ATM an excellent WAN technology for voice and video conferencing.

High Speed Serial Interface (HSSI) is a layer 1 protocol used to connect routers and multiplexers to ATM or Frame Relay connection devices.

Synchronous Data Link Control (SDLC) is a layer 2 protocol employed by networks with dedicated or leased lines. SDLC was developed by IBM for remote communications with SNA systems. SDLC is a bit-oriented synchronous protocol.

High-Level Data Link Control (HDLC) is a layer 2 protocol used to transmit data over synchronous communication lines. HDLC is an ISO standard based on IBM's SDLC. HDLC supports full-duplex communications and both point-to-point and multipoint connections, offers flow control, and includes error detection and correction.

Avoiding Single Points of Failure

Any element in your IT infrastructure, physical environment, or staff can be a single point of failure. A single point of failure is simply any element (such as a device, service, protocol, or communication link) that would cause total or significant downtime if compromised, violated, or destroyed, affecting the ability of members of your organization to perform essential work tasks.
To avoid single points of failure, you must design your networks and your physical \nenvironment with redundancy and backups by doing such things as deploying dual network \n" }, { "page_number": 154, "text": "Avoiding Single Points of Failure\n109\nbackbones. The use of systems, devices, and solutions with fault-tolerant capabilities is a means \nto improve resistance to single-point-of-failure vulnerabilities. Taking steps to establish a means to \nprovide alternate processing, failover capabilities, and quick recovery will also aid in avoiding \nsingle points of failure.\nRedundant Servers\nUsing redundant servers is one fault-tolerant deployment option. Redundant servers can take \nnumerous forms. Server mirroring is the deployment of a backup system along with the primary \nsystem. Every change made to the primary system is immediately duplicated to the secondary sys-\ntem. Electronic vaulting is the collection of changes on a primary system into a transaction or \nchange document. Periodically, the change document is sent to an offsite duplicate server where \nthe changes are applied. This is also known as batch processing because changes are duplicated \nover intervals rather than in real time. Remote journaling is the same as electronic vaulting \nexcept that changes are sent immediately to the offsite duplicate server rather than in batches. \nThis provides a more real-time server backup. Database shadowing is remote journaling to \nmore than one destination duplicate server. There may be one or more local duplicates and one \nor more offsite duplicates.\nAnother type of redundant server is a cluster or server farm. Clustering means deploying two \nor more duplicate servers in such a way as to share the workload of a mission-critical application. \nUsers see the clustered systems as a single entity. A cluster controller manages traffic to and among \nthe clustered systems to balance the workload across all clustered servers. 
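The cluster controller's balancing role can be sketched as a minimal round-robin dispatcher in Python. The server names and the simple rotation policy are illustrative assumptions; real cluster controllers also weigh current load and server health.

```python
from itertools import cycle

# Minimal sketch of a cluster controller: requests are rotated across
# duplicate servers so users see the cluster as a single entity.
# Server names are hypothetical.

class ClusterController:
    def __init__(self, servers):
        self._rotation = cycle(list(servers))

    def route(self, request):
        """Pick the next cluster partner in rotation for this request."""
        server = next(self._rotation)
        return server, request

controller = ClusterController(["app1", "app2", "app3"])
for i in range(4):
    server, _ = controller.route(f"request-{i}")
    print(server)   # app1, app2, app3, app1
```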
As changes occur on one \nof the clustered systems, they are immediately duplicated to all other cluster partners.\nFailover Solutions\nWhen backup systems or redundant servers exist, there needs to be a means by which you can \nswitch over to the backup in the event the primary system is compromised or fails. Rollover, or \nfailover, is redirecting workload or traffic to a backup system when the primary system fails. \nRollover can be automatic or manual. Manual rollover, also known as cold rollover, requires \nan administrator to perform some change in software or hardware configuration to switch the \ntraffic load over from the down primary to a secondary server. With automatic rollover, also \nknown as hot rollover, the switch from primary to secondary system is performed automatically \nas soon as a problem is encountered. Fail-secure, fail-safe, and fail-soft are terms related to these \nissues. A system that is fail-secure is able to resort to a secure state when an error or security vio-\nlation is encountered. Fail-safe is a similar feature, but human safety is protected in the event of \na system failure. However, these two terms are often used interchangeably to mean a system \nthat is secure after a failure. Fail-soft describes a refinement of the fail-secure capability: only the \nportion of a system that encountered or experienced the failure or security breach is disabled or \nsecured, while the rest of the system continues to function normally.\nA specific implementation of a fail-secure system would be the use of TFTP servers to store \nnetwork device configurations. In the event of a system failure, configuration corruption, or \npower outage, most network devices (such as routers and switches) can be hard-coded to pull \n" }, { "page_number": 155, "text": "110\nChapter 3\n\u0002 ISO Model, Network Security, and Protocols\ntheir configuration file from a TFTP server upon reboot. 
In this way, essential network devices \ncan self-restore quickly.\nPower failure is always a single point of failure. If electrical power is lost, all electronic \ndevices will cease to function. Addressing this weakness is important if 24/7 uptime is essential \nto your organization. Ways to combat power failure or fluctuation issues include power con-\nditioners (i.e., surge protectors), uninterruptible power supplies, and onsite electric generators.\nRAID\nWithin individual systems, storage devices can be a single point of failure. Redundant Array of \nIndependent Disks (RAID) is a storage device mechanism that uses multiple hard drives in \nunique combinations to produce a storage solution that provides better throughput as well as \nresistance to device failure. The two primary storage techniques employed by RAID are mirror-\ning and striping. Striping can be further enhanced by storing parity information. Parity infor-\nmation enables on-the-fly recovery or reconstruction of data lost due to the failure of one or \nmore drives. There are several levels or forms of RAID. Some of the more common RAID levels \nare listed in Table 3.6.\nRAID can be implemented in hardware or in software. Hardware-based RAID offers more \nreliable performance and fault tolerance protection. Hardware-based RAID performs all pro-\ncessing necessary for multidrive access on the drive controllers. Software-based RAID performs \nT A B L E\n3 . 6\nCommon RAID Levels\nRAID Level\nDescription\n0\nStriping\n1\nMirroring\n2\nHamming code parity\n3\nByte-level parity\n4\nBlock-level parity\n5\nInterleave parity\n6\nSecond parity data\n10\nRAID levels 1 + 0\n15\nRAID levels 1 + 5\n" }, { "page_number": 156, "text": "Summary\n111\nthe processing as part of the operating system. Thus, system resources are consumed in manag-\ning and using RAID when it is deployed through software.\nThere are three forms of RAID drive swapping: hot, cold, and warm. 
Hot-swappable RAID \nallows for failed drives to be removed and replaced while the host server remains up and run-\nning. Cold-swappable RAID systems require the host server to be fully powered down before \nfailed drives can be removed and replaced. Warm-swappable RAID allows for failed drives to \nbe removed and replaced by disabling the RAID configuration via software, then replacing the \ndrive, and then reenabling the RAID configuration. RAID is a specific technology example of \nFault Resistant Disk Systems (FRDS).\nNo matter what fault-tolerant designs and mechanisms you employ to avoid single points of \nfailure, no environment’s security precautions are complete without a backup solution. Backups \nare the only means of providing reliable insurance against minor and catastrophic losses of your \ndata. For a backup system to provide protection, it must be configured to store all data neces-\nsary to support your organization. It must perform the backup operation as quickly and effi-\nciently as possible. The backups must be performed on a regular basis, such as daily, weekly, or \nin real time. And backups must be periodically tested to verify that they are functioning and that \nyour restore processes are adequate. An untested backup cannot be assumed to work.\nSummary\nDesigning, deploying, and maintaining security on a network requires intimate knowledge of \nthe technologies involved in networking. This includes protocols, services, communication \nmechanisms, topologies, cabling, and networking devices.\nThe OSI model is a standard against which all protocols are evaluated. Understanding how \nthe OSI model is used and how it applies to real-world protocols can help system designers and \nsystem administrators improve security.\nThere is a wide range of hardware components that can be used to construct a network, not \nthe least of which is the cabling used to tie all the devices together. 
Understanding the strengths \nand weaknesses of each cabling type is part of designing a secure network.\nThere are three common LAN technologies: Ethernet, Token Ring, and FDDI. Each can be \nused to deploy a secure network. There are also several common network topologies: ring, bus, \nstar, and mesh.\nMost networks employ TCP/IP as the primary protocol. However, there are numerous sub-\nprotocols, supporting protocols, services, and security mechanisms that can be found in a TCP/\nIP network. A basic understanding of these various entities can aid in designing and deploying \na secure network. These components include IPSec, SKIP, SWIPE, SSL, S/MIME, SET, PEM, \nPGP, PPP, SLIP, PPTP, L2TP, CHAP, PAP, RADIUS, TACACS, S-RPC, Frame Relay, SMDS, \nX.25, ATM, HSSI, SDLC, HDLC, and ISDN.\nRemote access security management requires that security system designers address the hard-\nware and software components of the implementation along with policy issues, work task \nissues, and encryption issues.\n" }, { "page_number": 157, "text": "112\nChapter 3\n\u0002 ISO Model, Network Security, and Protocols\nIn addition to routers, hubs, switches, repeaters, gateways, and proxies, firewalls are an \nimportant part of a network’s security. There are four primary types of firewalls: static packet-\nfiltering, application-level gateway, circuit-level gateway, and stateful inspection.\nAvoiding single points of failure includes incorporating fault-tolerant systems and solutions \ninto an environment’s design. 
When designing a fault-tolerant system, you should make sure \nyou include redundant or mirrored systems, use TFTP servers, address power issues, use RAID, \nand maintain a backup solution.\nExam Essentials\nKnow the OSI model layers and what protocols are found in each.\nThe seven layers and pro-\ntocols supported by each of the layers of the OSI model are as follows:\n\u0002\nApplication: HTTP, FTP, LPD, SMTP, Telnet, TFTP, EDI, POP3, IMAP, SNMP, \nNNTP, S-RPC, and SET\n\u0002\nPresentation: encryption protocols, such as RSA and DES, and format types, such as \nASCII, EBCDIC, TIFF, JPEG, MPEG, and MIDI\n\u0002\nSession: SSL, TLS, NFS, SQL, and RPC\n\u0002\nTransport: SPX, TCP, and UDP\n\u0002\nNetwork: ICMP, RIP, OSPF, BGP, IGMP, IP, IPSec, IPX, NAT, and SKIP\n\u0002\nData Link: SLIP, PPP, ARP, RARP, L2F, L2TP, PPTP, FDDI, ISDN\n\u0002\nPhysical: EIA/TIA-232, EIA/TIA-449, X.21, HSSI, SONET, V.24, and V.35\nKnow the TCP/IP model and how it relates to the OSI model.\nThe TCP/IP model has four \nlayers: Application, Host-to-Host, Internet, and Network Access.\nKnow the different cabling types and their lengths and maximum throughput rates.\nThis \nincludes STP, 10Base-T (UTP), 10Base2 (thinnet), 10Base5 (thicknet), 100Base-T, 1000Base-T, \nand fiber-optic. You should also be familiar with UTP categories 1 through 7.\nBe familiar with the common LAN technologies.\nThese are Ethernet, Token Ring, and FDDI. \nAlso be familiar with analog vs. digital communications; synchronous vs. asynchronous com-\nmunications; baseband vs. broadband communications; broadcast, multicast, and unicast com-\nmunications; CSMA, CSMA/CA, CSMA/CD, token passing, and polling.\nKnow the standard network topologies.\nThese are ring, bus, star, and mesh.\nHave a thorough knowledge of TCP/IP.\nKnow the difference between TCP and UDP; be \nfamiliar with the four TCP/IP layers and how they correspond to the OSI model. 
In addition, \nunderstand the usage of the well-known ports and be familiar with the subprotocols.\nKnow the common network devices.\nCommon network devices are firewalls, routers, hubs, \nbridges, repeaters, switches, gateways, and proxies.\n" }, { "page_number": 158, "text": "Exam Essentials\n113\nUnderstand the different types of firewalls.\nThere are four basic types of firewalls: static \npacket-filtering, application-level gateway, circuit-level gateway, and stateful inspection.\nUnderstand the issues around remote access security management.\nRemote access security \nmanagement requires that security system designers address the hardware and software com-\nponents of an implementation along with issues related to policy, work tasks, and encryption.\nBe familiar with the various protocols and mechanisms that may be used on LANs and \nWANs.\nThese are IPSec, SKIP, SWIPE, SSL, S/MIME, SET, PEM, PGP, PPP, SLIP, PPTP, \nL2TP, CHAP, PAP, EAP, RADIUS, TACACS, and S-RPC.\nKnow the protocol services used to connect to LAN and WAN communication technologies.\nThese are Frame Relay, SMDS, X.25, ATM, HSSI, SDLC, HDLC, and ISDN.\nUnderstand the issues around single points of failure.\nAvoiding single points of failure \nincludes incorporating fault-tolerant systems and solutions into an environment’s design. Fault-\ntolerant systems include redundant or mirrored systems, TFTP servers, and RAID. You should \nalso address power issues and maintain a backup solution.\n" }, { "page_number": 159, "text": "114\nChapter 3\n\u0002 ISO Model, Network Security, and Protocols\nReview Questions\n1.\nWhat is layer 4 of the OSI model?\nA. Presentation\nB. Network\nC. Data Link\nD. Transport\n2.\nWhat is encapsulation?\nA. Changing the source and destination addresses of a packet\nB. Adding a header and footer to data as it moves down the OSI stack\nC. Verifying a person’s identity\nD. 
Protecting evidence until it has been properly collected\n3.\nWhich OSI model layer manages communications in simplex, half-duplex, and full-duplex \nmodes?\nA. Application\nB. Session\nC. Transport\nD. Physical\n4.\nWhich of the following is the least resistant to EMI?\nA. Thinnet\nB. 10Base-T UTP\nC. 10Base5\nD. Coaxial cable\n5.\nWhich of the following cables has the most twists per inch?\nA. STP\nB. UTP\nC. 100Base-T\nD. 1000Base-T\n6.\nWhich of the following is not true?\nA. Fiber-optic cable offers very high throughput rates.\nB. Fiber-optic cable is difficult to install.\nC. Fiber-optic cable is expensive.\nD. Communications over fiber-optic cable can be tapped easily.\n" }, { "page_number": 160, "text": "Review Questions\n115\n7.\nWhich of the following is not one of the most common LAN technologies?\nA. Ethernet\nB. ATM\nC. Token Ring\nD. FDDI\n8.\nWhich networking technology is based on the IEEE 802.3 standard?\nA. Ethernet\nB. Token Ring\nC. FDDI\nD. HDLC\n9.\nWhat is a TCP wrapper?\nA. An encapsulation protocol used by switches\nB. An application that can serve as a basic firewall by restricting access based on user IDs or \nsystem IDs\nC. A security protocol used to protect TCP/IP traffic over WAN links\nD. A mechanism to tunnel TCP/IP through non-IP networks\n10. Which of the following protocols is connectionless?\nA. TCP\nB. UDP\nC. IP\nD. FTP\n11. By examining source and destination address, application usage, source of origin, and the rela-\ntionship between current packets with the previous packets of the same session, ____________ \nfirewalls are able to grant a broader range of access for authorized users and activities and \nactively watch for and block unauthorized users and activities.\nA. Static packet-filtering\nB. Application-level gateway\nC. Stateful inspection\nD. Circuit-level gateway\n12. _________________ firewalls are known as third-generation firewalls.\nA. Application-level gateway\nB. Stateful inspection\nC. 
Circuit-level gateway\nD. Static packet-filtering\n" }, { "page_number": 161, "text": "116\nChapter 3\n\u0002 ISO Model, Network Security, and Protocols\n13. Which of the following is not true regarding firewalls?\nA. They are able to log traffic information.\nB. They are able to block viruses.\nC. They are able to issue alarms based on suspected attacks.\nD. They are unable to prevent internal attacks.\n14. Which of the following is not a routing protocol?\nA. OSPF\nB. BGP\nC. RPC\nD. RIP\n15. A ___________________ is an intelligent hub because it knows the addresses of the systems con-\nnected on each outbound port. Instead of repeating traffic on every outbound port, it repeats \nonly traffic out of the port on which the destination is known to exist.\nA. Repeater\nB. Switch\nC. Bridge\nD. Router\n16. ___________________ is a standards-based mechanism for providing encryption for point-to-\npoint TCP/IP traffic.\nA. UDP\nB. SSL\nC. IPSec\nD. SDLC\n17.\nWhich public-private key security system was developed independently of industry standards \nbut has wide Internet grassroots support?\nA. SLIP\nB. PGP\nC. PPTP\nD. PAP\n18. What authentication protocol offers no encryption or protection for logon credentials?\nA. PAP\nB. CHAP\nC. SSL\nD. RADIUS\n" }, { "page_number": 162, "text": "Review Questions\n117\n19. ___________________ is a layer 2 connection mechanism that uses packet-switching technology \nto establish virtual circuits between the communication endpoints.\nA. ISDN\nB. Frame Relay\nC. SMDS\nD. ATM\n20. Which of the following IP addresses is not a private IP address as defined by RFC 1918?\nA. 10.0.0.18\nB. 169.254.1.119\nC. 172.31.8.204\nD. 192.168.6.43\n" }, { "page_number": 163, "text": "118\nChapter 3\n\u0002 ISO Model, Network Security, and Protocols\nAnswers to Review Questions\n1.\nD. The Transport layer is layer 4. The Presentation layer is layer 6, the Data Link layer is layer 2, \nand the Network layer is layer 3.\n2.\nB. 
Encapsulation is adding a header and footer to data as it moves through the Presentation layer \ndown the OSI stack.\n3.\nB. Layer 5, Session, manages simplex (one-direction), half-duplex (two-way, but only one direc-\ntion can send data at a time), and full-duplex (two-way, in which data can be sent in both direc-\ntions simultaneously) communications.\n4.\nB. 10Base-T UTP is the least resistant to EMI because it is unshielded. Thinnet (10Base2) and \nthicknet (10Base5) are both a type of coaxial cable, which is shielded against EMI.\n5.\nD. 1000Base-T offers 1000Mbps throughput and thus must have the greatest number of twists \nper inch. The tighter the twist (i.e., the number of twists per inch), the more resistant the cable \nis to internal and external interference and crosstalk and thus the greater the capacity is for \nthroughput (i.e., higher bandwidth).\n6.\nD. Fiber-optic cable is difficult to tap.\n7.\nB. Ethernet, Token Ring, and FDDI are common LAN technologies. ATM is more common in \na WAN environment.\n8.\nA. Ethernet is based on the IEEE 802.3 standard.\n9.\nB. A TCP wrapper is an application that can serve as a basic firewall by restricting access based \non user IDs or system IDs.\n10. B. UDP is a connectionless protocol.\n11. C. Stateful inspection firewalls are able to grant a broader range of access for authorized users \nand activities and actively watch for and block unauthorized users and activities.\n12. B. Stateful inspection firewalls are known as third-generation firewalls.\n13. B. Most firewalls offer extensive logging, auditing, and monitoring capabilities as well as alarms \nand even basic IDS functions. 
Firewalls are unable to block viruses or malicious code transmitted through otherwise authorized communication channels, prevent unauthorized but accidental or intended disclosure of information by users, prevent attacks by malicious users already behind the firewall, or protect data after it has passed out of or into the private network.
14. C. There are numerous dynamic routing protocols, including RIP, OSPF, and BGP, but RPC is not a routing protocol.
15. B. A switch is an intelligent hub. It is considered to be intelligent because it knows the addresses of the systems connected on each outbound port.
16. C. IPSec, or IP Security, is a standards-based mechanism for providing encryption for point-to-point TCP/IP traffic.
17. B. Pretty Good Privacy (PGP) is a public-private key system that uses the IDEA algorithm to encrypt files and e-mail messages. PGP is not a standard but rather an independently developed product that has wide Internet grassroots support.
18. A. PAP, or Password Authentication Protocol, is a standardized authentication protocol for PPP. PAP transmits usernames and passwords in the clear. It offers no form of encryption. It simply provides a means to transport the logon credentials from the client to the authentication server.
19. B. Frame Relay is a layer 2 connection mechanism that uses packet-switching technology to establish virtual circuits between the communication endpoints. The Frame Relay network is a shared medium across which virtual circuits are created to provide point-to-point communications. All virtual circuits are independent of and invisible to each other.
20. B. The 169.254.x.x subnet is in the APIPA range, which is not part of RFC 1918.
The addresses in RFC 1918 are 10.0.0.0–10.255.255.255, 172.16.0.0–172.31.255.255, and 192.168.0.0–192.168.255.255.

Chapter 4: Communications Security and Countermeasures

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
• Communications Security Techniques
• Packet and Circuit Switching
• WAN Technologies
• E-Mail Security
• Facsimile Security
• Secure Voice Communications
• Security Boundaries
• Network Attacks and Countermeasures

Data residing in a static form on a storage device is fairly simple to secure. As long as physical access control is maintained and reasonable logical access controls are implemented, stored files remain confidential, retain their integrity, and are available to authorized users. However, once data is used by an application or transferred over a network connection, the process of securing it becomes much more difficult.
Communications security covers a wide range of issues related to the transportation of electronic information from one place to another. That transportation may be between systems on opposite sides of the planet or between systems on the same business network. Data becomes vulnerable to a plethora of threats to its confidentiality, integrity, and availability once it is involved in any means of transportation. Fortunately, many of these threats can be reduced or eliminated with the appropriate countermeasures.
Communications security is designed to detect, prevent, and even correct data transportation errors (i.e., integrity protection). This is done to sustain the security of networks while supporting the need to exchange and share data.
This chapter takes a look at the many forms of com-\nmunications security, vulnerabilities, and countermeasures.\nThe Telecommunications and Network Security domain for the CISSP certification exam \ndeals with topics of communications security and vulnerability countermeasures. This domain \nis discussed in this chapter and in the preceding chapter (Chapter 3). Be sure to read and study \nthe materials from both chapters to ensure complete coverage of the essential material for the \nCISSP certification exam.\nVirtual Private Network (VPN)\nA virtual private network (VPN) is simply a communication tunnel that provides point-to-point \ntransmission of both authentication and data traffic over an intermediary network. Most VPNs \nuse encryption to protect the encapsulated traffic, but encryption is not necessary for the con-\nnection to be considered a VPN. VPNs are most commonly associated with establishing secure \ncommunication paths through the Internet between two distant networks. However, VPNs can \nexist anywhere, including within private networks or between end-user systems connected to an \nISP. VPNs provide confidentiality and integrity over insecure or untrusted intermediary net-\nworks. VPNs do not provide or guarantee availability.\n" }, { "page_number": 168, "text": "Virtual Private Network (VPN)\n123\nTunneling\nBefore you can truly understand VPNs, you must first understand tunneling. Tunneling is the net-\nwork communications process that protects the contents of protocol packets by encapsulating \nthem in packets of another protocol. The encapsulation is what creates the logical illusion of a \ncommunications tunnel over the untrusted intermediary network. This virtual path exists between \nthe encapsulation and the deencapsulation entities located at the ends of the communication.\nIn fact, sending a letter to your grandmother involves the use of a tunneling system. 
You cre-\nate the personal letter (the primary content protocol packet) and place it in an envelope (the tun-\nneling protocol). The envelope is delivered through the postal service (the untrusted \nintermediary network) to its intended recipient.\nThe Need for Tunneling\nTunneling can be used in many situations, such as when you’re bypassing firewalls, gateways, \nproxies, or other traffic control devices. The bypass is achieved by encapsulating the restricted \ncontent inside packets that are authorized for transmission. The tunneling process prevents the \ntraffic control devices from blocking or dropping the communication because such devices \ndon’t know what the packets actually contain.\nTunneling is often used to enable communications between otherwise disconnected systems. \nIf two systems are separated by a lack of network connectivity, a communication link can be \nestablished by a modem dial-up link or other remote access or wide area network (WAN) net-\nworking service. The actual LAN traffic is encapsulated in whatever communication protocol \nis used by the temporary connection, such as Point-to-Point Protocol (PPP) in the case of modem \ndial-up. If two networks are connected by a network employing a different protocol, the pro-\ntocol of the separated networks can often be encapsulated within the intermediary network’s \nprotocol to provide a communication pathway.\nRegardless of the actual situation, tunneling protects the contents of the inner protocol and \ntraffic packets by encasing, or wrapping, it in an authorized protocol used by the intermediary \nnetwork or connection. 
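The same wrap-and-unwrap cycle can be sketched in a few lines of Python. The header format and gateway names here are invented purely for illustration; the point is that the inner packet rides untouched as the payload of the outer protocol.

```python
# Minimal sketch of tunneling as encapsulation: an inner protocol packet
# is wrapped whole inside an outer packet, carried across the untrusted
# intermediary network, then unwrapped at the far end. The header layout
# and names are illustrative, not any real tunneling protocol.

def encapsulate(inner_packet: bytes, outer_src: str, outer_dst: str) -> bytes:
    header = f"TUNNEL {outer_src}->{outer_dst} len={len(inner_packet)}\n".encode()
    return header + inner_packet          # inner packet rides as opaque payload

def deencapsulate(outer_packet: bytes) -> bytes:
    _header, _sep, payload = outer_packet.partition(b"\n")
    return payload                        # original packet, untouched

lan_frame = b"LAN payload the intermediary network could not route directly"
wrapped = encapsulate(lan_frame, "gw-a.example", "gw-b.example")
assert deencapsulate(wrapped) == lan_frame   # round trip preserves contents
```

Intermediary devices see only the outer header, which is exactly why tunneling can slip restricted content past traffic controls and why encrypting the payload yields a VPN.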
Tunneling can be used if the primary protocol is not routable and to \nkeep the total number of protocols supported on the network to a minimum.\nIf the act of encapsulating a protocol involves encryption, tunneling can provide a means to \ntransport sensitive data across untrusted intermediary networks without fear of losing confi-\ndentiality and integrity.\nTunneling Drawbacks\nTunneling is not without its problems. It is generally an inefficient means of communicating \nbecause all protocols include their own error detection, error handling, acknowledgment, and ses-\nsion management features, so using more than one protocol at a time compounds the overhead \nrequired to communicate a single message. Furthermore, tunneling creates either larger packets or \nmore numerous packets that in turn consume additional network bandwidth. Tunneling can \nquickly saturate a network if sufficient bandwidth is not available. In addition, tunneling is a \npoint-to-point communication mechanism and is not designed to handle broadcast traffic.\n" }, { "page_number": 169, "text": "124\nChapter 4\n\u0002 Communications Security and Countermeasures\nHow VPNs Work\nNow that you understand the basics of tunneling, let’s discuss the details of VPNs. A VPN link \ncan be established over any other network communication connection. This could be a typical \nLAN cable connection, a wireless LAN connection, a remote access dial-up connection, a WAN \nlink, or even a client using an Internet connection for access to an office LAN. A VPN link acts \njust like a typical direct LAN cable connection; the only possible difference would be speed \nbased on the intermediary network and on the connection types between the client system and \nthe server system. Over a VPN link, a client can perform the exact same activities and access the \nsame resources they could if they were directly connected via a LAN cable.\nVPNs can be used to connect two individual systems or two entire networks. 
The only dif-\nference is that the transmitted data is protected only while it is within the VPN tunnel. Remote \naccess servers or firewalls on the network’s border act as the start points and endpoints for \nVPNs. Thus, traffic is unprotected within the source LAN, protected between the border VPN \nservers, and then unprotected again once it reaches the destination LAN.\nVPN links through the Internet for connecting to distant networks are often inexpensive alter-\nnatives to direct links or leased lines. The cost of two high-speed Internet links to local ISPs to sup-\nport a VPN is often significantly less than the cost of any other connection means available.\nImplementing VPNs\nVPNs can be implemented using software or hardware solutions. In either case, there are four \ncommon VPN protocols: PPTP, L2F, L2TP, and IPSec. PPTP, L2F, and L2TP operate at the \nData Link layer (layer 2) of the OSI model. PPTP and IPSec are limited for use on IP networks, \nwhereas L2F and L2TP can be used to encapsulate any LAN protocol.\nPoint-to-Point Tunneling Protocol (PPTP) is an encapsulation protocol developed from the \ndial-up protocol Point-to-Point Protocol (PPP). PPTP creates a point-to-point tunnel between \ntwo systems and encapsulates PPP packets. PPTP offers protection for authentication traffic \nthrough the same authentication protocols supported by PPP; namely, Microsoft Challenge \nHandshake Authentication Protocol (MS-CHAP), Challenge Handshake Authentication Proto-\ncol (CHAP), Password Authentication Protocol (PAP), Extensible Authentication Protocol \n(EAP), and Shiva Password Authentication Protocol (SPAP). The initial tunnel negotiation pro-\ncess used by PPTP is not encrypted. 
Thus, the session establishment packets that include the IP \naddress of the sender and receiver—and can include usernames and hashed passwords—could \nbe intercepted by a third party.\nCisco developed its own VPN protocol called Layer 2 Forwarding (L2F), which is a mutual \nauthentication tunneling mechanism. However, L2F does not offer encryption. L2F was not \nwidely deployed and was soon replaced by L2TP.\nLayer 2 Tunneling Protocol (L2TP) was derived by combining elements from both PPTP and \nL2F. L2TP creates a point-to-point tunnel between communication endpoints. It lacks a built-\nin encryption scheme, but it typically relies upon IPSec as its security mechanism. L2TP also \nsupports TACACS+ and RADIUS, whereas PPTP does not.\nThe most commonly used VPN protocol is now IPSec. IP Security (IPSec) is both a stand-\nalone VPN protocol and the security mechanism for L2TP, and it can only be used for IP traffic. \n" }, { "page_number": 170, "text": "Network Address Translation\n125\nIPSec provides for secured authentication as well as encrypted data transmission. It operates at \nthe Network layer (layer 3) and can be used in transport mode or tunnel mode. In transport \nmode, the IP packet data is encrypted but the header of the packet is not. In tunnel mode, the \nentire IP packet is encrypted and a new header is added to the packet to govern transmission \nthrough the tunnel.\nNetwork Address Translation\nHiding the identity of internal clients, masking the design of your private network, and keeping \npublic IP address leasing costs to a minimum is made simple through the use of NAT. Network \nAddress Translation (NAT) is a mechanism for converting the internal IP addresses found in \npacket headers into public IP addresses for transmission over the Internet. NAT offers numer-\nous benefits, such as being able to connect an entire network to the Internet using only a single \n(or just a few) leased public IP addresses. 
NAT allows you to use the private IP addresses defined in RFC 1918 in a private network while still being able to communicate with the Internet. NAT protects a network by hiding the IP addressing scheme and network topography from the Internet. It also provides protection by restricting connections so that only connections originating from the internal protected network are allowed back into the network from the Internet. Thus, most intrusion attacks are automatically repelled.
NAT can be found in a number of hardware devices and software products, including firewalls, routers, gateways, and proxies. It can only be used on IP networks and operates at the Network layer (layer 3).

Private IP Addresses
The use of NAT has proliferated recently due to the increased scarcity of public IP addresses and security concerns. With only roughly four billion addresses (2^32) available in IPv4, the world has simply deployed more devices using IP than there are unique IP addresses available. Fortunately, the early designers of the Internet and the TCP/IP protocol had good foresight and put aside a few blocks of addresses for private unrestricted use. These IP addresses, commonly called the private IP addresses, are defined in RFC 1918. They are as follows:
• 10.0.0.0–10.255.255.255 (a full Class A range)
• 172.16.0.0–172.31.255.255 (16 Class B ranges)
• 192.168.0.0–192.168.255.255 (256 Class C ranges)
All routers and traffic-directing devices are configured by default not to forward traffic to or from these IP addresses. In other words, the private IP addresses are not routed by default. Thus, they cannot be directly used to communicate over the Internet. However, they can be easily used on private networks where routers are not employed or where slight modifications to router configurations are made.
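These ranges are easy to test against with Python's standard ipaddress module. Explicit networks are used here rather than the module's is_private flag, because that flag also counts non-RFC 1918 ranges such as the 169.254.0.0/16 link-local (APIPA) block.

```python
import ipaddress

# The RFC 1918 private address blocks in CIDR form.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(address: str) -> bool:
    """Return True only if the address falls in an RFC 1918 block."""
    ip = ipaddress.ip_address(address)
    return any(ip in net for net in PRIVATE_NETS)

print(is_rfc1918("10.0.0.18"))      # True
print(is_rfc1918("172.31.8.204"))   # True
print(is_rfc1918("169.254.1.119"))  # False: APIPA, not RFC 1918
```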
The use of the private IP addresses in conjunction with NAT greatly reduces the cost of connecting to the Internet by allowing fewer public IP addresses to be leased from an ISP.

Stateful NAT

NAT operates by maintaining a mapping between requests made by internal clients, a client's internal IP address, and the IP address of the Internet service contacted. When a request packet is received by NAT from a client, it changes the source address in the packet from the client's to the NAT server's. This change is recorded in the NAT mapping database along with the destination address. Once a reply is received from the Internet server, NAT matches the reply's source address to an address stored in its mapping database and then uses the linked client address to redirect the response packet to its intended destination. This process is known as stateful NAT because it maintains information about the communication sessions between clients and external systems.

NAT can operate on a one-to-one basis with only a single internal client able to communicate over one of its leased public IP addresses at a time. This type of configuration can result in a bottleneck if more clients attempt Internet access than there are public IP addresses. For example, if there are only five leased public IP addresses, the sixth client must wait until an address is released before its communications can be transmitted out over the Internet. Other forms of NAT employ multiplexing techniques in which port numbers are used to allow the traffic from multiple internal clients to be managed on a single leased public IP address.

Switching Technologies

When two systems (individual computers or LANs) are connected over multiple intermediary networks, the task of transmitting data packets from one to the other is a complex process.
To simplify this task, switching technologies were developed. The first switching technology was circuit switching.

Circuit Switching

Circuit switching was originally developed to manage telephone calls over the public switched telephone network. In circuit switching, a dedicated physical pathway is created between the two communicating parties. Once a call is established, the links between the two parties remain the same throughout the conversation. This provides for fixed or known transmission times, a uniform level of quality, and little or no loss of signal or communication interruptions. Circuit-switching systems employ permanent, physical connections. However, the term permanent applies only to each communication session. The path is permanent throughout a single conversation. Once the path is disconnected, if the two parties communicate again, a different path may be assembled. During a single conversation, the same physical or electronic path is used throughout the communication and is used only for that one communication. Circuit switching grants exclusive use of a communication path to the current communication partners. Only after a session has been closed can a pathway be reused by another communication.

Packet Switching

Eventually, as computer communications increased relative to voice communications, a new form of switching was developed. Packet switching occurs when the message or communication is broken up into small segments (usually fixed-length packets, depending on the protocols and technologies employed) and sent across the intermediary networks to the destination. Each segment of data has its own header that contains source and destination information. The header is read by each intermediary system and is used to route each packet to its intended destination.
\nEach channel or communication path is reserved for use only while a packet is actually being \ntransmitted over it. As soon as the packet is sent, the channel is made available for other com-\nmunications. Packet switching does not enforce exclusivity of communication pathways. Packet \nswitching can be seen as a logical transmission technology because addressing logic dictates \nhow communications traverse intermediary networks between communication partners. Table \n4.1 shows a comparison between circuit switching and packet switching.\nVirtual Circuits\nWithin packet-switching systems are two types of communication paths, or virtual circuits. A \nvirtual circuit is a logical pathway or circuit created over a packet-switched network between \ntwo specific endpoints. There are two types of virtual circuits: permanent virtual circuits (PVCs) \nand switched virtual circuits (SVCs). A PVC is like a dedicated leased line; the logical circuit \nalways exists and is waiting for the customer to send data. An SVC is more like a dial-up con-\nnection because a virtual circuit has to be created before it can be used and then disassembled \nafter the transmission is complete. In either type of virtual circuit, when a data packet enters \npoint A of a virtual circuit connection, that packet is sent directly to point B or the other end \nof the virtual circuit. However, the actual path of one packet may be different than the path of \nanother packet from the same transmission. In other words, multiple paths may exist between \npoint A and point B as the ends of the virtual circuit, but any packet entering at point A will end \nup at point B.\nT A B L E\n4 . 1\nCircuit Switching vs. 
Packet Switching\nCircuit Switching\nPacket Switching\nConstant traffic\nBursty traffic\nFixed known delays\nVariable delays\nConnection oriented\nConnectionless\nSensitive to connection loss\nSensitive to data loss\nUsed primarily for voice\nUsed for any type of traffic\n" }, { "page_number": 173, "text": "128\nChapter 4\n\u0002 Communications Security and Countermeasures\nWAN Technologies\nWAN links and long-distance connection technologies can be divided into two primary catego-\nries: dedicated and nondedicated lines. A dedicated line is one that is indefinably and continu-\nally reserved for use by a specific customer. A dedicated line is always on and waiting for traffic \nto be transmitted over it. The link between the customer’s LAN and the dedicated WAN link \nis always open and established. A dedicated line connects two specific endpoints and only those \ntwo endpoints together. A nondedicated line is one that requires a connection to be established \nbefore data transmission can occur. A nondedicated line can be used to connect with any remote \nsystem that uses the same type of nondedicated line.\nThe following list includes some examples of dedicated lines (also called leased lines or point-\nto-point links):\nTo obtain fault tolerance with leased lines or with connections to carrier net-\nworks (i.e., Frame Relay, ATM, SONET, SMDS, X.25, etc.), you must deploy two \nredundant connections. For even greater redundancy, purchase the connec-\ntions from two different telcos or service providers. However, when you’re \nusing two different service providers, be sure they don’t connect to the same \nregional backbone or share any major pipeline. If you cannot afford to deploy \nan exact duplicate of your primary leased line, consider a nondedicated DSL, \nISDN, or cable modem connection. 
These less-expensive options may still pro-\nvide partial availability in the event of a primary leased line failure.\nStandard modems, DSL, and ISDN are examples of nondedicated lines. Digital subscriber \nline (DSL) is a technology that exploits the upgraded telephone network to grant consumers \nspeeds from 144Kbps to 1.5Mbps. There are numerous formats of DSL, such as ADSL, xDSL, \nCDSL, HDSL, SDSL, RASDSL, IDSL, and VDSL. Each format varies as to the specific down-\nstream and upstream bandwidth provided. The maximum distance a DSL line can be from a \ncentral office (i.e., a specific type of distribution node of the telephone network) is approxi-\nmately 1,000 meters.\nTechnology\nConnection Type\nSpeed\nDigital Signal Level 0 (DS-0)\npartial T1\n64Kbps up to 1.544Mbps\nDigital Signal Level 1 (DS-1)\nT1\n1.544Mbps\nDigital Signal Level 3 (DS-3)\nT3\n44.736Mbps\nEuropean digital transmission format 1\nEl\n2.108Mbps\nEuropean digital transmission format 3\nE3\n34.368Mbps\nCable modem or cable routers\n \nup to 1.544Mbps\n" }, { "page_number": 174, "text": "WAN Technologies\n129\nHDSL is the version of DSL that provides 1.544Mbps of full-duplex throughput \n(i.e., both upstream and downstream) over standard telephone wires (i.e., two \npairs of twisted pair cabling).\nDon’t forget about satellite connections. Satellite connections may offer high-\nspeed solutions even in locales that are inaccessible by cable-based, radio-wave-\nbased, and line-of-sight-based communications. However, satellites are consid-\nered insecure because of their large surface footprint. Communications over a \nsatellite can be intercepted by anyone. Just think of satellite radio. As long as you \nhave a receiver, you can get the signal anywhere.\nIntegrated Services Digital Network (ISDN) is a fully digital telephone network that supports \nboth voice and high-speed data communications. There are two standard classes or formats of ISDN \nservice: BRI and PRI. 
Basic Rate Interface (BRI) offers customers a connection with two B channels and one D channel. The B channels each support a throughput of 64Kbps and are used for data transmission. The D channel is used for call establishment, management, and teardown and has a bandwidth of 16Kbps. Even though the D channel was not designed to support data transmissions, BRI ISDN is said to offer consumers 144Kbps of total throughput. Primary Rate Interface (PRI) offers consumers a connection with 2 to 23 64Kbps B channels and a single 64Kbps D channel. Thus, a PRI can be deployed with as little as 192Kbps and up to 1.544Mbps. However, remember that those numbers are bandwidth, not throughput, because they include the D channel, which cannot be used for actual data transmission (at least not in most normal commercial implementations).

WAN Connection Technologies

There are numerous WAN connection technologies available to companies that need communication services between multiple locations and even external partners. These WAN technologies vary greatly in cost and throughput. However, most share the common feature of being transparent to the connected LANs or systems. A WAN switch, specialized router, or border connection device provides all of the interfacing needed between the network carrier service and a company's LAN. The border connection devices are called channel service unit/data service unit (CSU/DSU) devices. They convert LAN signals into the format used by the WAN carrier network and vice versa. The CSU/DSU contains data terminal equipment/data circuit-terminating equipment (DTE/DCE), which provides the actual connection point for the LAN's router (the DTE) and the WAN carrier network's switch (the DCE). The CSU/DSU acts as a translator, a store-and-forward device, and a link conditioner. A WAN switch is simply a specialized version of a LAN switch that is constructed with a built-in CSU/DSU for a specific type of carrier network.
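As a quick sanity check on the ISDN figures above (an illustrative sketch, not part of the original text), the BRI and PRI totals can be reproduced with simple arithmetic. The 8Kbps of framing overhead used to reach the full 1.544Mbps T1 rate is an assumption of this sketch:

```python
# Sketch: reproducing the ISDN bandwidth figures quoted in the text.
# BRI: 2 B channels (64 Kbps each) + 1 D channel (16 Kbps) = 144 Kbps.
# PRI: up to 23 B channels (64 Kbps each) + 1 D channel (64 Kbps);
# adding T1 framing overhead (8 Kbps, assumed here) yields 1.544 Mbps.

def bri_kbps(b_channels: int = 2) -> int:
    # B channels plus the 16 Kbps D channel
    return b_channels * 64 + 16

def pri_kbps(b_channels: int, framing: int = 0) -> int:
    # B channels plus a 64 Kbps D channel, plus optional framing overhead
    return b_channels * 64 + 64 + framing

print(bri_kbps())       # 144  -> the 144 Kbps BRI total
print(pri_kbps(2))      # 192  -> minimal PRI deployment
print(pri_kbps(23, 8))  # 1544 -> full T1 rate, i.e., 1.544 Mbps
```

This also makes the text's bandwidth-versus-throughput caveat concrete: the D channel counts toward the totals but carries no user data.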
There are many types of carrier networks, or WAN connection technologies, such as X.25, Frame Relay, ATM, and SMDS:

X.25 WAN connections
X.25 is a packet-switching technology that is widely used in Europe. It uses permanent virtual circuits to establish specific point-to-point connections between two systems or networks.

Frame Relay connections
Like X.25, Frame Relay is a packet-switching technology that also uses PVCs. However, unlike X.25, Frame Relay supports multiple PVCs over a single WAN carrier service connection. A key concept related to Frame Relay is the Committed Information Rate (CIR). The CIR is the guaranteed minimum bandwidth a service provider grants to its customers. It is usually significantly less than the actual maximum capability of the provider network. Each customer may have a different CIR. The service network provider may allow customers to exceed their CIR over short intervals when additional bandwidth is available. Frame Relay operates at layer 2 (the Data Link layer) of the OSI model. It is a connection-oriented packet-switching technology.

ATM
Asynchronous transfer mode (ATM) is a cell-switching WAN communication technology. It fragments communications into fixed-length 53-byte cells. The use of fixed-length cells allows ATM to be very efficient and to offer high throughputs. ATM can use either PVCs or SVCs. ATM providers can guarantee a minimum bandwidth and a specific level of quality to their leased services. Customers can often consume additional bandwidth as needed when available on the service network for an additional pay-as-you-go fee; this is known as bandwidth on demand. ATM is a connection-oriented packet-switching technology.

SMDS
Switched Multimegabit Data Service (SMDS) is a packet-switching technology.
Often, SMDS is used to connect multiple LANs to form a metropolitan area network (MAN) or a WAN. SMDS supports high-speed bursty traffic, is connectionless, and supports bandwidth on demand. SMDS has been mostly replaced by Frame Relay.

Some WAN connection technologies require additional specialized protocols to support various types of specialized systems or devices. Three of these protocols are SDLC, HDLC, and HSSI:

SDLC
Synchronous Data Link Control (SDLC) is used on permanent physical connections of dedicated leased lines to provide connectivity for mainframes, such as IBM Systems Network Architecture (SNA) systems. SDLC uses polling and operates at OSI layer 2 (the Data Link layer).

HDLC
High-Level Data Link Control (HDLC) is a refined version of SDLC designed specifically for serial synchronous connections. HDLC supports full-duplex communications and supports both point-to-point and multipoint connections. HDLC, like SDLC, uses polling and operates at OSI layer 2 (the Data Link layer).

HSSI
High Speed Serial Interface (HSSI) is a DTE/DCE interface standard that defines how multiplexors and routers connect to high-speed network carrier services such as ATM or Frame Relay. A multiplexor is a device that transmits multiple communications or signals over a single cable or virtual circuit. HSSI defines the electrical and physical characteristics of the interfaces or connection points and thus operates at OSI layer 1 (the Physical layer).

Encapsulation Protocols

The Point-to-Point Protocol (PPP) is an encapsulation protocol designed to support the transmission of IP traffic over dial-up or point-to-point links. PPP allows for multivendor interoperability of WAN devices supporting serial links. All dial-up and most point-to-point connections are serial in nature (as opposed to parallel).
PPP includes a wide range of communication services, including assignment and management of IP addresses, management of synchronous communications, standardized encapsulation, multiplexing, link configuration, link quality testing, error detection, and feature or option negotiation (such as compression). PPP was originally designed to support CHAP and PAP for authentication. However, recent versions of PPP also support MS-CHAP, EAP, and SPAP. PPP can also be used to support Internetwork Packet Exchange (IPX) and DECnet protocols. PPP is an Internet standard documented in RFC 1661. It replaced the Serial Line Internet Protocol (SLIP). SLIP offered no authentication, supported only half-duplex communications, had no error detection capabilities, and required manual link establishment and teardown.

Miscellaneous Security Control Characteristics

When you're selecting or deploying security controls for network communications, there are numerous characteristics that should be evaluated in light of your circumstances, capabilities, and security policy. These issues are discussed in the following sections.

Transparency

Just as the name implies, transparency is the characteristic of a service, security control, or access mechanism that ensures that it is unseen by users. Transparency is often a desirable feature for security controls. The more transparent a security mechanism is, the less likely a user will be able to circumvent it or even be aware that it exists.
With transparency, there is a lack of direct evidence that a feature, service, or restriction exists, and its impact on performance is minimal.

In some cases, transparency may need to function more as a configurable feature than as a permanent aspect of operation, such as when an administrator is troubleshooting, evaluating, or tuning a system's configuration.

Verifying Integrity

To verify the integrity of a transmission, you can use a checksum called a hash total. A hash function is performed on a message or a packet before it is sent over the communication pathway. The hash total obtained is added to the end of the message and is called the message digest. Once the message is received, the hash function is performed by the destination system and the result is compared to the original hash total. If the two hash totals match, then there is a high level of certainty that the message has not been altered or corrupted during transmission. Hash totals are similar to cyclic redundancy checks (CRCs) in that they both act as integrity tools. In most secure transaction systems, hash functions are used to guarantee communication integrity.

Record sequence checking is similar to a hash total check; however, instead of verifying content integrity, it verifies packet or message sequence integrity. Many communications services employ record sequence checking to verify that no portions of a message were lost and that all elements of the message are in their proper order.

Transmission Mechanisms

Transmission logging is a form of auditing focused on communications. Transmission logging records particulars about source, destination, time stamps, identification codes, transmission status, number of packets, size of message, and so on.
These pieces of information may be useful in troubleshooting problems and in tracking down unauthorized communications, or they may be used against a system as a means to extract data about how it functions.

Transmission error correction is a capability built into connection- or session-oriented protocols and services. If it is determined that a message, in whole or in part, was corrupted, altered, or lost, a request can be made for the source to resend all or part of the message. Retransmission controls determine whether all or part of a message is retransmitted in the event that a transmission error correction system discovers a problem with a communication. Retransmission controls can also determine whether multiple copies of a hash total or CRC value are sent and whether multiple data paths or communication channels are employed.

Managing E-Mail Security

E-mail is one of the most widely and commonly used Internet services. The e-mail infrastructure employed on the Internet is primarily made up of e-mail servers using the Simple Mail Transfer Protocol (SMTP) to accept messages from clients, transport those messages to other servers, and deposit messages into a user's server-based inbox. In addition to e-mail servers, the infrastructure includes e-mail clients. Clients retrieve e-mail from their server-based inboxes using the Post Office Protocol, version 3 (POP3) or the Internet Message Access Protocol (IMAP), and they communicate with e-mail servers using SMTP. Internet e-mail relies on the SMTP family of standards for addressing and message handling; the ITU-T X.400 standard is a competing message-handling system found mainly in legacy, non-Internet environments.

Sendmail is the most common SMTP server for Unix systems, Exchange is the most common SMTP server for Microsoft systems, and GroupWise is the most common SMTP server for Novell systems.
In addition to these three popular products, there are numerous alternatives, but they all share the same basic functionality and compliance with Internet e-mail standards.

If you deploy an SMTP server, it is imperative to properly configure authentication for both inbound and outbound mail. SMTP is designed to be a mail relay system; that is, it relays mail from sender to intended recipient. However, you want to avoid turning your SMTP server into an open relay (also known as an open relay agent, or relay agent), which is an SMTP server that does not authenticate senders before accepting and relaying mail. Open relays are prime targets for spammers because they allow spammers to send out floods of e-mail by piggybacking on an insecure e-mail infrastructure.

E-Mail Security Goals

The basic e-mail mechanism in use on the Internet offers efficient delivery of messages but lacks controls to provide for confidentiality, integrity, or even availability. In other words, basic e-mail is not secure. However, there are many ways to add security to e-mail. Adding security to e-mail may satisfy one or more of the following objectives:

•	Provide for nonrepudiation
•	Restrict access to messages to their intended recipients
•	Maintain the integrity of messages
•	Authenticate and verify the source of messages
•	Verify the delivery of messages
•	Classify sensitive content within or attached to messages

As with any aspect of IT security, e-mail security begins with a security policy approved by upper management. Within the security policy, several issues must be addressed:

•	Acceptable use policies for e-mail
•	Access control
•	Privacy
•	E-mail management
•	E-mail backup and retention policies

Acceptable use policies define what activities can and cannot be performed over an organization's e-mail infrastructure.
It is often stipulated that professional, business-oriented e-mail and a limited amount of personal e-mail can be sent and received. Specific restrictions are usually placed on performing personal business (i.e., work for another organization, including self-employment); on illegal, immoral, or offensive communications; and on any other activities that would have a detrimental effect on productivity, profitability, or public relations.

Access control over e-mail should be maintained so that users have access only to their specific inbox and e-mail archive databases. An extension of this rule implies that no other user, authorized or not, can gain access to an individual's e-mail. Access control should provide for both legitimate access and some level of privacy, at least from peer employees and unauthorized intruders.

The mechanisms and processes used to implement, maintain, and administer e-mail for an organization should be clarified. End users may not need to know the specifics of how e-mail is managed, but they do need to know whether e-mail is or is not considered private communication. E-mail has recently been the focus of numerous court cases in which archived messages were used as evidence, often to the chagrin of the author or recipient of those messages. If e-mail is to be retained (i.e., backed up and stored in archives for future use), users need to be made aware of this. If e-mail is to be reviewed for violations by an auditor, users need to be informed of this as well. Some companies have elected to retain only the last three months of e-mail archives before they are destroyed, whereas others have opted to retain e-mail for up to seven years.

Understanding E-Mail Security Issues

The first step in deploying e-mail security is to recognize the vulnerabilities specific to e-mail. The protocols used to support e-mail do not employ encryption.
Thus, all messages are transmitted in the form in which they are submitted to the e-mail server, which is often plain text. This makes interception and eavesdropping easy. However, the lack of native encryption is one of the least important security issues related to e-mail.

E-mail is the most common delivery mechanism for viruses, worms, Trojan horses, documents with destructive macros, and other malicious code. The proliferation of support for various scripting languages, auto-download capabilities, and auto-execute features has transformed hyperlinks within the content of e-mail and attachments into a serious threat to every system.

E-mail offers little in the way of source verification. Spoofing the source address of e-mail is a simple process for even the novice hacker. E-mail headers can be modified at their source or at any point during transit. Furthermore, it is also possible to deliver e-mail directly to a user's inbox on an e-mail server by connecting directly to the e-mail server's SMTP port. And speaking of in-transit modification, there are no native integrity checks to ensure that a message was not altered between its source and destination.

E-mail itself can be used as an attack mechanism. When a sufficient number of messages is directed to a single user's inbox or through a specific SMTP server, a denial of service (DoS) can result. This attack is often called mailbombing and is simply a DoS performed by inundating a system with messages. The DoS can be the result of storage capacity consumption or processing capability utilization. Either way, the result is the same: legitimate messages cannot be delivered.

Like e-mail flooding and malicious code attachments, unwanted e-mail can be considered an attack. Sending unwanted, inappropriate, or irrelevant messages is called spamming.
Spamming is often little more than a nuisance, but it does waste system resources both locally and over the Internet. It is often difficult to stop spam because the source of the messages is usually spoofed.

E-Mail Security Solutions

Imposing security on e-mail is possible, but the efforts should be in tune with the value and confidentiality of the messages being exchanged. Several protocols, services, and solutions are available to add security to e-mail without requiring a complete overhaul of the entire Internet-based SMTP infrastructure. These include S/MIME, MOSS, PEM, and PGP. We'll discuss S/MIME further in Chapter 10, "PKI and Cryptographic Applications."

S/MIME
Secure Multipurpose Internet Mail Extensions (S/MIME) offers authentication and privacy to e-mail through secured attachments. Authentication is provided through X.509 digital certificates. Privacy is provided through the use of Public Key Cryptography Standard (PKCS) encryption. Two types of messages can be formed using S/MIME: signed messages and enveloped messages. A signed message provides integrity and sender authentication. An enveloped message provides integrity, sender authentication, and confidentiality.

MOSS
MIME Object Security Services (MOSS) can provide authenticity, confidentiality, integrity, and nonrepudiation for e-mail messages. MOSS employs the Message Digest 2 (MD2) and MD5 algorithms; the Rivest, Shamir, and Adleman (RSA) public key cryptosystem; and the Data Encryption Standard (DES) to provide authentication and encryption services.

PEM
Privacy Enhanced Mail (PEM) is an e-mail encryption mechanism that provides authentication, integrity, confidentiality, and nonrepudiation. PEM uses RSA, DES, and X.509.

PGP
Pretty Good Privacy (PGP) is a public-private key system that uses the IDEA algorithm to encrypt files and e-mail messages.
PGP is not a standard but rather an independently developed product that has wide grassroots support on the Internet.

Through the use of these and other security mechanisms for e-mail and communication transmissions, many of the vulnerabilities can be reduced or eliminated. Digital signatures can help eliminate impersonation. Encryption of messages reduces eavesdropping. And the use of e-mail filters keeps spamming and mailbombing to a minimum.

Blocking attachments at the e-mail gateway system on your network can ease the threat from malicious attachments. You can have a 100-percent no-attachments policy or block only those attachments that are known or suspected to be malicious, such as attachments with extensions that are used for executable and scripting files. If attachments are an essential part of your e-mail communications, you'll need to rely upon the training of your users and your antivirus tools for protection. Training users to avoid contact with suspicious or unexpected attachments greatly reduces the risk of malicious code transference via e-mail. Antivirus software is generally effective against known viruses, but it offers little protection against new or unknown viruses.

Facsimile Security

Facsimile (fax) communications are waning in popularity due to the widespread use of e-mail. Electronic documents are easily exchanged as attachments to e-mail, and printed documents are just as easy to scan and e-mail as they are to fax. However, faxing must still be addressed in your overall security plan. Most modems give users the ability to connect to a remote computer system and send and receive faxes. Many operating systems include built-in fax capabilities, and there are numerous fax products for computer systems.
Faxes sent from a computer's fax/modem can be received by another computer or by a normal fax machine.

Even with declining use, faxes still represent a communications path that is vulnerable to attack. Like any other telephone communication, faxes can be intercepted and are susceptible to eavesdropping. If an entire fax transmission is recorded, it can be played back by another fax machine to extract the transmitted documents.

Some of the mechanisms that can be deployed to improve the security of faxes include fax encryptors, link encryption, activity logs, and exception reports. A fax encryptor gives a fax machine the capability to use an encryption protocol to scramble the outgoing fax signal. The use of an encryptor requires that the receiving fax machine support the same encryption protocol so it can decrypt the documents. Link encryption is the use of an encrypted communication path, such as a VPN link or a secured telephone link, over which to transmit the fax. Activity logs and exception reports can be used to detect anomalies in fax activity that could be symptoms of attack.

In addition to the security of a fax transmission, it is also important to consider the security of a received fax. Faxes that are automatically printed may sit in the out tray for a long period of time, making them subject to viewing by unintended recipients. Studies have shown that adding banners such as CONFIDENTIAL or PRIVATE has the opposite of the intended effect by spurring the curiosity of passersby. So, disable automatic printing. Also, avoid fax machines that use ribbons or duplication cartridges, which retain images of the printed faxes.
Consider integrating your fax system with your network so that you can e-mail faxes to intended recipients instead of printing them to paper.

Securing Voice Communications

The vulnerability of voice communication is tangentially related to IT system security. However, as voice communication solutions move onto the network through digital devices and Voice over IP (VoIP), securing voice communications becomes an increasingly important issue. When voice communications occur over the IT infrastructure, it is important to implement mechanisms to provide for authentication and integrity. Confidentiality should be maintained by employing an encryption service or protocol to protect the voice communications while in transit.

Normal private branch exchange (PBX) or plain old telephone service (POTS) voice communications are vulnerable to interception, eavesdropping, tapping, and other exploitations. Often, physical security is required to maintain control over voice communications within the confines of your organization's physical locations. Security of voice communications outside of your organization is typically the responsibility of the phone company from which you lease services. If voice communication vulnerabilities are an important issue for sustaining your security policy, you should deploy an encrypted communication mechanism and use it exclusively.

Social Engineering

Malicious individuals can exploit voice communications through a technique known as social engineering. Social engineering is a means by which an unknown person gains the trust of someone inside your organization. Adept individuals can convince employees that they are associated with upper management, technical support, the help desk, and so on.
Once convinced, \nthe victim is often encouraged to make a change to their user account on the system, such as \nreset their password. Other attacks include instructing the victim to open specific e-mail attach-\nments, launch an application, or connect to a specific URL. Whatever the actual activity is, it \nis usually directed toward opening a back door that the attacker can use to gain network access.\nThe people within an organization make it vulnerable to social engineering attacks. With just \na little information or a few facts, it is often possible to get a victim to disclose confidential infor-\nmation or engage in irresponsible activity. Social engineering attacks exploit human character-\nistics such as a basic trust in others and laziness. Overlooking discrepancies, being distracted, \nfollowing orders, assuming others know more than they actually do, wanting to help others, \nand fearing reprimands can also lead to attacks. Attackers are often able to bypass extensive \nphysical and logical security controls because the victim opens an access pathway from the \ninside, effectively punching a hole in the secured perimeter.\nThe only way to protect against social engineering attacks is to teach users how to respond \nand interact with voice-only communications. Here are some guidelines:\n\u0002\nAlways err on the side of caution whenever voice communications seem odd, out of place, \nor unexpected.\n\u0002\nAlways request proof of identity. This can be a driver’s license number or Social Security \nnumber, which can be easily verified. It could also take the form of having a person in the \noffice that would recognize the caller’s voice take the call. 
For example, if the caller claims \n" }, { "page_number": 182, "text": "Securing Voice Communications\n137\nto be a department manager, you could confirm his identity by asking his administrative \nassistant to take the call.\n\u0002\nRequire call-back authorizations on all voice-only requests for network alterations or activities.\n\u0002\nClassify information (usernames, passwords, IP addresses, manager names, dial-in num-\nbers, etc.) and clearly indicate which information can be discussed or even confirmed using \nvoice communications.\n\u0002\nIf privileged information is requested over the phone by an individual who should know \nthat giving out that particular information over the phone is against the company’s security \npolicy, ask why the information is needed and verify their identity again. This incident \nshould also be reported to the security administrator.\n\u0002\nNever give out or change passwords based on voice-only communications.\n\u0002\nAlways securely dispose of or destroy all office documentation, especially any paperwork \nor disposable media that contains information about the IT infrastructure or its security \nmechanisms.\nFraud and Abuse\nAnother voice communication threat is PBX fraud and abuse. Many PBX systems can be exploited \nby malicious individuals to avoid toll charges and hide their identity. Malicious attackers known \nas phreakers abuse phone systems in much the same way that crackers abuse computer networks. \nPhreakers may be able to gain unauthorized access to personal voice mailboxes, redirect messages, \nblock access, and redirect inbound and outbound calls. Countermeasures to PBX fraud and abuse \ninclude many of the same precautions you would employ to protect a typical computer network: \nlogical or technical controls, administrative controls, and physical controls. 
Here are several key \npoints to keep in mind when designing a PBX security solution:\n\u0002\nConsider replacing remote access or long-distance calling through the PBX with a credit \ncard or calling card system.\n\u0002\nRestrict dial-in and dial-out features to only authorized individuals who require such func-\ntionality for their work tasks.\n\u0002\nFor your dial-in modems, use unpublished phone numbers that are outside of the prefix \nblock range of your voice numbers.\n\u0002\nBlock or disable any unassigned access codes or accounts.\n\u0002\nDefine an acceptable use policy and train users on how to properly use the system.\n\u0002\nLog and audit all activities on the PBX and review the audit trails for security and use violations.\n\u0002\nDisable maintenance modems and accounts.\n\u0002\nChange all default configurations, especially passwords and capabilities related to admin-\nistrative or privileged features.\n" }, { "page_number": 183, "text": "138\nChapter 4\n\u0002 Communications Security and Countermeasures\n\u0002\nBlock remote calling (i.e., allowing a remote caller to dial in to your PBX and then dial-out \nagain, thus directing all toll charges to the PBX host).\n\u0002\nDeploy Direct Inward System Access (DISA) technologies to reduce PBX fraud by external \nparties.\n\u0002\nKeep the system current with vendor/service provider updates.\nAdditionally, maintaining physical access control to all PBX connection centers, phone por-\ntals, or wiring closets prevents direct intrusion from onsite attackers.\nPhreaking\nPhreaking is a specific type of hacking or cracking directed toward the telephone system. Phreak-\ners use various types of technology to circumvent the telephone system to make free long-distance \ncalls, to alter the function of telephone service, to steal specialized services, and even to cause ser-\nvice disruptions. Some phreaker tools are actual devices, whereas others are just particular ways \nof using a normal telephone. 
No matter what the tool or technology actually is, phreaker tools are \nreferred to as colored boxes (black box, red box, etc.). Over the years, there have been many box \ntechnologies that were developed and widely used by phreakers, but only a few of them still work \nagainst today’s telephone systems based on packet-switching. Here are a few of the phreaker tools \nyou need to recognize for the exam:\n\u0002\nBlack boxes are used to manipulate line voltages to steal long-distance services. They are \noften just custom-built circuit boards with a battery and wire clips.\n\u0002\nRed boxes are used to simulate tones of coins being deposited into a pay phone. They are \nusually just small tape recorders.\n\u0002\nBlue boxes are used to simulate 2600Hz tones to interact directly with telephone network \ntrunk systems (i.e., backbones). This could be a whistle, a tape recorder, or a digital tone \ngenerator.\n\u0002\nWhite boxes are used to control the phone system. A white box is a DTMF or dual-tone \nmultifrequency generator (i.e., a keypad). It can be a custom-built device or one of the \npieces of equipment that most telephone repair personnel use.\nCell phone security is a growing concern. Captured electronic serial numbers \n(ESNs) and mobile identification numbers (MINs) can be burned into blank \nphones to create clones. When a clone is used, the charges are billed to the \noriginal owner’s cell phone account. Furthermore, conversations and data \ntransmission can be intercepted using radio frequency scanners. Also, anyone \nin the immediate vicinity can overhear at least one side of the conversation. So, \ndon’t talk about confidential, private, or sensitive topics in public places.\n" }, { "page_number": 184, "text": "Network Attacks and Countermeasures\n139\nSecurity Boundaries\nA security boundary is the line of intersection between any two areas, subnets, or environments \nthat have different security requirements or needs. 
A security boundary exists between a high-\nsecurity area and a low-security one, such as between a LAN and the Internet. It is important \nto recognize the security boundaries both on your network and in the physical world. Once you \nidentify a security boundary, you need to deploy controls and mechanisms to control the flow \nof information across those boundaries.\nDivisions between security areas can take many forms. For example, objects may have dif-\nferent classifications. Each classification defines what functions can be performed by which sub-\njects on which objects. The distinction between classifications is a security boundary.\nSecurity boundaries also exist between the physical environment and the logical environ-\nment. To provide logical security, security mechanisms that are different than those used to pro-\nvide physical security must be employed. Both must be present to provide a complete security \nstructure and both must be addressed in a security policy. However, they are different and must \nbe assessed as separate elements of a security solution.\nSecurity boundaries, such as a perimeter between a protected area and an unprotected one, \nshould always be clearly defined. It’s important to state in a security policy the point at which \ncontrol ends or begins and to identify that point in both the physical and logical environments. \nLogical security boundaries are the points where electronic communications interface with \ndevices or services for which your organization is legally responsible. In most cases, that inter-\nface is clearly marked and unauthorized subjects are informed that they do not have access and \nthat attempts to gain access will result in prosecution.\nThe security perimeter in the physical environment is often a reflection of the security perim-\neter of the logical environment. In most cases, the area over which the organization is legally \nresponsible determines the reach of a security policy in the physical realm. 
This can be the walls \nof an office, the walls of a building, or the fence around a campus. In secured environments, \nwarning signs are posted indicating that unauthorized access is prohibited and attempts to gain \naccess will be thwarted and result in prosecution.\nWhen transforming a security policy into actual controls, you must consider each environ-\nment and security boundary separately. Simply deduce what available security mechanisms \nwould provide the most reasonable, cost-effective, and efficient solution for a specific environ-\nment and situation. However, all security mechanisms must be weighed against the value of the \nobjects they are to protect. Deploying countermeasures that cost more than the value of the pro-\ntected objects is unwarranted.\nNetwork Attacks and Countermeasures\nCommunication systems are vulnerable to attacks in much the same way any other aspect of the \nIT infrastructure is vulnerable. Understanding the threats and the possible countermeasures is \nan important part of securing an environment. Any activity or condition that can cause harm \n" }, { "page_number": 185, "text": "140\nChapter 4\n\u0002 Communications Security and Countermeasures\nto data, resources, or personnel must be addressed and mitigated if possible. Keep in mind that \nharm includes more than just destruction or damage; it also includes disclosure, access delay, \ndenial of access, fraud, resource waste, resource abuse, and loss. Common threats against com-\nmunication systems security include denial of service, eavesdropping, impersonation, replay, \nand modification.\nEavesdropping\nAs the name suggests, eavesdropping is simply listening to communication traffic for the pur-\npose of duplicating it. The duplication can take the form of recording the data to a storage \ndevice or to an extraction program that dynamically attempts to extract the original content \nfrom the traffic stream. 
Once a copy of traffic content is in the hands of a cracker, they can often \nextract many forms of confidential information, such as usernames, passwords, process proce-\ndures, data, and so on. Eavesdropping usually requires physical access to the IT infrastructure \nto connect a physical recording device to an open port or cable splice or to install a software \nrecording tool onto the system. Eavesdropping is often facilitated by the use of a network traffic \ncapture or monitoring program or a protocol analyzer system (often called a sniffer). Eaves-\ndropping devices and software are usually difficult to detect because they are used in passive \nattacks. When eavesdropping or wiretapping is transformed into altering or injecting commu-\nnications, the attack is considered an active attack.\nYou can combat eavesdropping by maintaining physical access security to prevent unautho-\nrized personnel from accessing your IT infrastructure. As for protecting communications that \noccur outside of your network or protecting against internal attackers, the use of encryption (such \nas IPSec or SSH) and one-time authentication methods (i.e., one-time pads or token devices) on \ncommunication traffic will greatly reduce the effectiveness and timeliness of eavesdropping.\nThe common threat of eavesdropping is one of the primary motivations to maintain reliable \ncommunications security. While data is in transit, it is often easier to intercept than when it is \nin storage. Furthermore, the lines of communication may lie outside of your organization’s con-\ntrol. Thus, reliable means to secure data while in transit outside of your internal infrastructure \nis of utmost importance. 
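As an aside, the one-time pad mentioned earlier as an anti-eavesdropping measure is simple enough to sketch in a few lines of Python (an illustration only; the function names are mine, and production systems should rely on vetted protocols such as IPSec or SSH rather than hand-rolled cryptography):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each plaintext byte with the corresponding key byte.
    # The key must be random, at least as long as the message,
    # and never reused -- hence "one-time" pad.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the identical XOR operation applied a second time.
otp_decrypt = otp_encrypt

message = b"transfer $500 to account 42"
key = secrets.token_bytes(len(message))   # fresh single-use random key

ciphertext = otp_encrypt(message, key)    # an eavesdropper sees only noise
recovered = otp_decrypt(ciphertext, key)  # legitimate receiver recovers the message
assert recovered == message
```

The catch is key management: every message needs a fresh, truly random key as long as the message itself, which is why token devices and standard encryption protocols are the practical substitutes.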
Some of the common network health and communication reliability \nevaluation and management tools, such as sniffers, can be used for nefarious purposes and thus \nrequire stringent controls and oversight to prevent abuse.\nSecond-Tier Attacks\nImpersonation, replay, and modification attacks are all called second-tier attacks. A second-tier \nattack is an assault that relies upon information or data gained from eavesdropping or other \nsimilar data-gathering techniques. In other words, it is an attack that is launched only after \nsome other attack is completed.\nImpersonation/Masquerading\nImpersonation, or masquerading, is the act of pretending to be someone or something you are \nnot to gain unauthorized access to a system. Impersonation is often possible through the capture \nof usernames and passwords or of session setup procedures for network services.\n" }, { "page_number": 186, "text": "Network Attacks and Countermeasures\n141\nSome solutions to prevent impersonation include the use of one-time pads and token authen-\ntication systems, the use of Kerberos, and the use of encryption to increase the difficulty of \nextracting authentication credentials from network traffic.\nReplay Attacks\nReplay attacks are an offshoot of impersonation attacks and are made possible through capturing \nnetwork traffic via eavesdropping. Replay attacks attempt to reestablish a communication session \nby replaying captured traffic against a system. They can be prevented by using one-time authen-\ntication mechanisms and sequenced session identification.\nModification Attacks\nModification is an attack in which captured packets are altered and then played against a sys-\ntem. Modified packets are designed to bypass the restrictions of improved authentication mech-\nanisms and session sequencing. 
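To illustrate how sequenced session identification and an integrity check work together against replay and modification, here is a hypothetical sketch (the names and message format are mine; it assumes the sender and receiver already share a secret key):

```python
import hmac
import hashlib

SECRET = b"shared session key"  # assumed to be established out of band

def make_packet(seq: int, payload: bytes) -> tuple:
    # The tag covers both the sequence number and the payload,
    # so neither can be altered or replayed without detection.
    tag = hmac.new(SECRET, seq.to_bytes(8, "big") + payload, hashlib.sha256).digest()
    return (seq, payload, tag)

class Receiver:
    def __init__(self):
        self.last_seq = -1

    def accept(self, seq: int, payload: bytes, tag: bytes) -> bool:
        expected = hmac.new(SECRET, seq.to_bytes(8, "big") + payload,
                            hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False            # modified packet: integrity tag mismatch
        if seq <= self.last_seq:
            return False            # replayed packet: stale sequence number
        self.last_seq = seq
        return True

rx = Receiver()
pkt = make_packet(1, b"login ok")
assert rx.accept(*pkt)                                # fresh, authentic: accepted
assert not rx.accept(*pkt)                            # exact replay: rejected
assert not rx.accept(2, b"login as admin", pkt[2])    # altered payload: rejected
```

Real protocols combine these ideas with timestamps, random nonces, or session tokens, but the core principle is the same: a message is accepted only once and only if its integrity check verifies.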
Countermeasures to modification attacks include the use \nof digital signature verifications and packet checksum verification.\nAddress Resolution Protocol (ARP)\nThe Address Resolution Protocol (ARP) is a subprotocol of the TCP/IP protocol suite that oper-\nates at the Network layer (layer 3). ARP is used to discover the MAC address of a system by \npolling using its IP address. ARP functions by broadcasting a request packet with the target IP \naddress. The system with that IP address (or some other system that already has an ARP map-\nping for it) will reply with the associated MAC address. The discovered IP-to-MAC mapping is \nstored in the ARP cache and is used to direct packets.\nARP mappings can be attacked through spoofing. Spoofing provides false MAC addresses \nfor requested IP-addressed systems to redirect traffic to alternate destinations. ARP attacks are \noften an element in man-in-the-middle attacks. Such attacks involve an intruder’s system spoof-\ning its MAC address against the destination’s IP address into the source’s ARP cache. All pack-\nets received from the source system are inspected and then forwarded on to the actual intended \ndestination system. You can take measures to fight ARP attacks, such as defining static ARP \nmappings for critical systems, monitoring ARP caches for MAC-to-IP address mappings, or \nusing an IDS to detect anomalies in system traffic and changes in ARP traffic.\nDNS Spoofing\nAn attack related to ARP is known as DNS spoofing. DNS spoofing occurs when an attacker \nalters the domain-name-to-IP-address mappings in a DNS system to redirect traffic to a rogue \nsystem or to simply perform a denial of service against a system. Protections against DNS spoof-\ning include allowing only authorized changes to DNS, restricting zone transfers, and logging all \nprivileged DNS activity.\nHyperlink Spoofing\nYet another related attack is hyperlink spoofing. 
Hyperlink spoofing is similar to DNS spoofing \nin that it is used to redirect traffic to a rogue or imposter system or to simply divert traffic away \n" }, { "page_number": 187, "text": "142\nChapter 4\n\u0002 Communications Security and Countermeasures\nfrom its intended destination. Hyperlink spoofing can take the form of DNS spoofing or can \nsimply be an alteration of the hyperlink URLs in the HTML code of documents sent to clients. \nHyperlink spoofing attacks are usually successful because most users do not verify the domain \nname in a URL via DNS; rather, they assume the hyperlink is valid and just click it.\nProtections against hyperlink spoofing include the same precautions used against DNS \nspoofing as well as keeping your system patched and using the Internet with caution.\nSummary\nMaintaining control over communication pathways is essential to supporting confidentiality, \nintegrity, and availability for network, voice, and other forms of communication. Numerous \nattacks are focused on intercepting, blocking, or otherwise interfering with the transfer of data \nfrom one location to another. Fortunately, there are also reasonable countermeasures to reduce \nor even eliminate many of these threats.\nTunneling is a means by which messages in one protocol can be transported over another net-\nwork or communications system using a second protocol. Tunneling, otherwise known as \nencapsulation, can be combined with encryption to provide security for the transmitted mes-\nsage. VPNs are based on encrypted tunneling.\nNAT is used to hide the internal structure of a private network as well as enable multiple \ninternal clients to gain Internet access through a few public IP addresses. NAT is often a native \nfeature of border security devices, such as firewalls, routers, gateways, and proxies.\nIn circuit switching, a dedicated physical pathway is created between the two communicating \nparties. 
Packet switching occurs when the message or communication is broken up into small seg-\nments (usually fixed-length packets depending on the protocols and technologies employed) and \nsent across the intermediary networks to the destination. Within packet-switching systems are two \ntypes of communication paths or virtual circuits. A virtual circuit is a logical pathway or circuit \ncreated over a packet-switched network between two specific endpoints. There are two types of \nvirtual circuits: permanent virtual circuits (PVCs) and switched virtual circuits (SVCs).\nWAN links or long-distance connection technologies can be divided into two primary cate-\ngories: dedicated and nondedicated lines. A dedicated line connects two specific endpoints and \nonly those two endpoints together. A nondedicated line is one that requires a connection to be \nestablished before data transmission can occur. A nondedicated line can be used to connect with \nany remote system that uses the same type of nondedicated line. WAN connection technologies \ninclude X.25, Frame Relay, ATM, SMDS, SDLC, HDLC, and HSSI.\nWhen selecting or deploying security controls for network communications, there are \nnumerous characteristics that you should evaluate in light of your circumstances, capabilities, \nand security policy. Security controls should be transparent to users. Hash totals and CRC \nchecks can be used to verify message integrity. Record sequences are used to ensure sequence \nintegrity of a transmission. Transmission logging helps detect communication abuses.\nBasic Internet-based e-mail is insecure, but there are steps you can take to secure it. To secure \ne-mail, you should provide for nonrepudiation, restrict access to authorized users, make sure \nintegrity is maintained, authenticate the message source, verify delivery, and even classify sensitive \n" }, { "page_number": 188, "text": "Exam Essentials\n143\ncontent. 
These issues must be addressed in a security policy before they can be implemented in a \nsolution. They often take the form of acceptable use policies, access controls, privacy declarations, \ne-mail management procedures, and backup and retention policies.\nE-mail is a common delivery mechanism for malicious code. Filtering attachments, using anti-\nvirus software, and educating users are effective countermeasures against that kind of attack. \nE-mail spamming or flooding is a form of denial of service, which can be deterred through filters \nand IDSs. E-mail security can be improved using S/MIME, MOSS, PEM, and PGP.\nUsing encryption to protect the transmission of documents and prevent eavesdropping \nimproves fax and voice security. Training users effectively is a useful countermeasure against \nsocial engineering attacks.\nA security boundary can be the division between one secured area and another secured area, \nor it can be the division between a secured area and an unsecured area. Both must be addressed \nin a security policy.\nCommunication systems are vulnerable to many attacks, including denial of service, eaves-\ndropping, impersonation, replay, modification, and ARP attacks. Fortunately, effective coun-\ntermeasures exist for each of these. PBX fraud and abuse and phone phreaking are problems \nthat must also be addressed.\nExam Essentials\nKnow what tunneling is.\nTunneling is the encapsulation of a protocol-deliverable message \nwithin a second protocol. The second protocol often performs encryption to protect the mes-\nsage contents.\nUnderstand VPNs.\nVPNs are based on encrypted tunneling. They can offer authentication \nand data protection as a point-to-point solution. 
Common VPN protocols are PPTP, L2F, \nL2TP, and IPSec.\nBe able to explain NAT.\nNAT protects the addressing scheme of a private network, allows \nthe use of the private IP addresses, and enables multiple internal clients to obtain Internet access \nthrough a few public IP addresses. NAT is supported by many security border devices, such as \nfirewalls, routers, gateways, and proxies.\nUnderstand the difference between packet switching and circuit switching.\nIn circuit switch-\ning, a dedicated physical pathway is created between the two communicating parties. Packet \nswitching occurs when the message or communication is broken up into small segments and \nsent across the intermediary networks to the destination. Within packet-switching systems are \ntwo types of communication paths or virtual circuits: permanent virtual circuits (PVCs) and \nswitched virtual circuits (SVCs).\nUnderstand the difference between dedicated and nondedicated links.\nA dedicated line is one \nthat is indefinitely and continually reserved for use by a specific customer. A dedicated line is \nalways on and waiting for traffic to be transmitted over it. The link between the customer’s \nLAN and the dedicated WAN link is always open and established. A dedicated line connects \n" }, { "page_number": 189, "text": "144\nChapter 4\n\u0002 Communications Security and Countermeasures\ntwo specific endpoints and only those two endpoints. Examples of dedicated lines include T1, \nT3, E1, E3, and cable modems. A nondedicated line is one that requires a connection to be \nestablished before data transmission can occur. A nondedicated line can be used to connect with \nany remote system that uses the same type of nondedicated line. Examples of nondedicated lines \ninclude standard modems, DSL, and ISDN.\nKnow the various types of WAN technologies.\nKnow that most WAN technologies require a \nchannel service unit/data service unit (CSU/DSU). These can be referred to as WAN switches. 
\nThere are many types of carrier networks and WAN connection technologies, such as X.25, \nFrame Relay, ATM, and SMDS. Some WAN connection technologies require additional spe-\ncialized protocols to support various types of specialized systems or devices. Three of these pro-\ntocols are SDLC, HDLC, and HSSI.\nUnderstand the differences between PPP and SLIP.\nThe Point-to-Point Protocol (PPP) is an \nencapsulation protocol designed to support the transmission of IP traffic over dial-up or point-\nto-point links. PPP includes a wide range of communication services, including assignment and \nmanagement of IP addresses, management of synchronous communications, standardized \nencapsulation, multiplexing, link configuration, link quality testing, error detection, and feature \nor option negotiation (such as compression). PPP was originally designed to support CHAP and \nPAP for authentication. However, recent versions of PPP also support MS-CHAP, EAP, and \nSPAP. PPP replaced the Serial Line Internet Protocol (SLIP). SLIP offered no authentication, \nsupported only half-duplex communications, had no error detection capabilities, and required \nmanual link establishment and teardown.\nUnderstand common characteristics of security controls.\nSecurity controls should be trans-\nparent to users. Hash totals and CRC checks can be used to verify message integrity. Record \nsequences are used to ensure sequence integrity of a transmission. Transmission logging helps \ndetect communication abuses.\nUnderstand how e-mail security works.\nInternet e-mail is based on SMTP, POP3, and IMAP. \nIt is inherently insecure. It can be secured, but the methods used must be addressed in a security \npolicy. E-mail security solutions include using S/MIME, MOSS, PEM, or PGP.\nKnow how fax security works.\nFax security is primarily based on using encrypted transmis-\nsions or encrypted communication lines to protect the faxed materials. The primary goal is to \nprevent interception. 
Activity logs and exception reports can be used to detect anomalies in fax \nactivity that could be symptoms of attack.\nKnow the threats associated with PBX systems and the countermeasures to PBX fraud.\nCountermeasures to PBX fraud and abuse include many of the same precautions you would \nemploy to protect a typical computer network: logical or technical controls, administrative con-\ntrols, and physical controls.\nRecognize what a phreaker is.\nPhreaking is a specific type of hacking or cracking in which \nvarious types of technology are used to circumvent the telephone system to make free long dis-\ntance calls, to alter the function of telephone service, to steal specialized services, or even to \ncause service disruptions. Common tools of phreakers include black, red, blue, and white boxes.\n" }, { "page_number": 190, "text": "Exam Essentials\n145\nUnderstand voice communications security.\nVoice communications are vulnerable to many \nattacks, especially as voice communications become an important part of network services. \nConfidentiality can be obtained through the use of encrypted communications. Countermea-\nsures must be deployed to protect against interception, eavesdropping, tapping, and other types \nof exploitation.\nBe able to explain what social engineering is.\nSocial engineering is a means by which an \nunknown person gains the trust of someone inside of your organization by convincing employ-\nees that they are, for example, associated with upper management, technical support, or the \nhelp desk. The victim is often encouraged to make a change to their user account on the system, \nsuch as reset their password. The primary countermeasure for this sort of attack is user training.\nExplain the concept of security boundaries.\nA security boundary can be the division between \none secured area and another secured area. It can also be the division between a secured area \nand an unsecured area. 
Both must be addressed in a security policy.\nUnderstand the various attacks and countermeasures associated with communications security.\nCommunication systems are vulnerable to many attacks, including eavesdropping, imperson-\nation, replay, modification, and ARP attacks. Be able to list effective countermeasures for each.\n" }, { "page_number": 191, "text": "146\nChapter 4\n\u0002 Communications Security and Countermeasures\nReview Questions\n1.\nWhich of the following is not true?\nA. Tunneling employs encapsulation.\nB. All tunneling uses encryption.\nC. Tunneling is used to transmit data over an intermediary network.\nD. Tunneling can be used to bypass firewalls, gateways, proxies, or other traffic control \ndevices.\n2.\nTunnel connections can be established over all except for which of the following?\nA. WAN links\nB. LAN pathways\nC. Dial-up connections\nD. Stand-alone systems\n3.\nWhat do most VPNs use to protect transmitted data?\nA. Obscurity\nB. Encryption\nC. Encapsulation\nD. Transmission logging\n4.\nWhich of the following is not an essential element of a VPN link?\nA. Tunneling\nB. Encapsulation\nC. Protocols\nD. Encryption\n5.\nWhich of the following cannot be linked over a VPN?\nA. Two distant LANs\nB. Two systems on the same LAN\nC. A system connected to the Internet and a LAN connected to the Internet\nD. Two systems without an intermediary network connection\n6.\nWhich of the following is not a VPN protocol?\nA. PPTP\nB. L2F\nC. SLIP\nD. IPSec\n" }, { "page_number": 192, "text": "Review Questions\n147\n7.\nWhich of the following VPN protocols do not offer encryption? (Choose all that apply.)\nA. L2F\nB. L2TP\nC. IPSec\nD. PPTP\n8.\nAt which OSI model layer does the IPSec protocol function?\nA. Data Link\nB. Transport\nC. Session\nD. Network\n9.\nWhich of the following is not defined in RFC 1918 as one of the private IP address ranges that \nare not routed on the Internet?\nA. 169.172.0.0–169.191.255.255\nB. 192.168.0.0–192.168.255.255\nC. 
10.0.0.0–10.255.255.255\nD. 172.16.0.0–172.31.255.255\n10. Which of the following is not a benefit of NAT?\nA. Hiding the internal IP addressing scheme\nB. Sharing a few public Internet addresses with a large number of internal clients\nC. Using the private IP addresses from RFC 1918 on an internal network\nD. Filtering network traffic to prevent brute force attacks\n11. A significant benefit of a security control is when it goes unnoticed by users. What is this called?\nA. Invisibility\nB. Transparency\nC. Diversion\nD. Hiding in plain sight\n12. When you’re designing a security system for Internet-delivered e-mail, which of the following is \nleast important?\nA. Nonrepudiation\nB. Availability\nC. Message integrity\nD. Access restriction\n" }, { "page_number": 193, "text": "148\nChapter 4\n\u0002 Communications Security and Countermeasures\n13. Which of the following is typically not an element that must be discussed with end users in \nregard to e-mail retention policies?\nA. Privacy\nB. Auditor review\nC. Length of retainer\nD. Backup method\n14. What is it called when e-mail itself is used as an attack mechanism?\nA. Masquerading\nB. Mailbombing\nC. Spoofing\nD. Smurf attack\n15. Why is spam so difficult to stop?\nA. Filters are ineffective at blocking inbound messages.\nB. The source address is usually spoofed.\nC. It is an attack requiring little expertise.\nD. Spam can cause denial of service attacks.\n16. Which of the following security mechanisms for e-mail can provide two types of messages: \nsigned and enveloped?\nA. PEM\nB. PGP\nC. S/MIME\nD. MOSS\n17.\nIn addition to maintaining an updated system and controlling physical access, which of the fol-\nlowing is the most effective countermeasure against PBX fraud and abuse?\nA. Encrypting communications\nB. Changing default passwords\nC. Using transmission logs\nD. Taping and archiving all conversations\n18. 
Which of the following can be used to bypass even the best physical and logical security mechanisms to gain access to a system?
    A. Brute force attacks
    B. Denial of service
    C. Social engineering
    D. Port scanning

19. Which of the following is not a denial of service attack?
    A. Exploiting a flaw in a program to consume 100 percent of the CPU
    B. Sending malformed packets to a system, causing it to freeze
    C. Performing a brute force attack against a known user account
    D. Sending thousands of e-mails to a single address

20. Which of the following is a digital end-to-end communications mechanism developed by telephone companies to support high-speed digital communications over the same equipment and infrastructure that is used to carry voice communications?
    A. ISDN
    B. Frame Relay
    C. SMDS
    D. ATM

Answers to Review Questions

1. B. Tunneling does not always use encryption. It does, however, employ encapsulation, is used to transmit data over an intermediary network, and is able to bypass firewalls, gateways, proxies, or other traffic control devices.

2. D. A stand-alone system has no need for tunneling because no communications between systems are occurring and no intermediary network is present.

3. B. Most VPNs use encryption to protect transmitted data. In and of themselves, obscurity, encapsulation, and transmission logging do not protect data as it is transmitted.

4. D. Encryption is not necessary for the connection to be considered a VPN, but it is recommended for the protection of that data.

5. D. An intermediary network connection is required for a VPN link to be established.

6. C. SLIP is a dial-up connection protocol, a forerunner of PPP. It is not a VPN protocol.

7. A, B.
Layer 2 Forwarding (L2F) was developed by Cisco as a mutual authentication tunneling mechanism. However, L2F does not offer encryption. L2TP also lacks built-in encryption.

8. D. IPSec operates at the Network layer (layer 3).

9. A. The address range 169.172.0.0–169.191.255.255 is not listed in RFC 1918 as a private IP address range.

10. D. NAT does not protect against or prevent brute force attacks.

11. B. When transparency is a characteristic of a service, security control, or access mechanism, it is unseen by users.

12. B. Although availability is a key aspect of security in general, it is the least important aspect of security systems for Internet-delivered e-mail.

13. D. The backup method is not an important factor to discuss with end users regarding e-mail retention.

14. B. Mailbombing is the use of e-mail as an attack mechanism. Flooding a system with messages causes a denial of service.

15. B. It is often difficult to stop spam because the source of the messages is usually spoofed.

16. C. Two types of messages can be formed using S/MIME: signed messages and enveloped messages. A signed message provides integrity and sender authentication. An enveloped message provides integrity, sender authentication, and confidentiality.

17. B. Changing default passwords on PBX systems provides the most effective increase in security.

18. C. Social engineering can often be used to bypass even the most effective physical and logical controls. Whatever the actual activity is that the attacker convinces the victim to perform, it is usually directed toward opening a back door that the attacker can use to gain access to the network.

19. C. A brute force attack is not considered a DoS.

20. A. ISDN, or Integrated Services Digital Network, is a digital end-to-end communications mechanism.
ISDN was developed by telephone companies to support high-speed digital communications over the same equipment and infrastructure that is used to carry voice communications.

Chapter 5: Security Management Concepts and Principles

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
- Security Management Concepts and Principles
- Protection Mechanisms
- Change Control/Management
- Data Classification

The Security Management Practices domain of the Common Body of Knowledge (CBK) for the CISSP certification exam deals with the common elements of security solutions. These include elements essential to the design, implementation, and administration of security mechanisms.

This domain is discussed in this chapter and in Chapter 6, "Asset Value, Policies, and Roles." Be sure to read and study the materials from both chapters to ensure complete coverage of the essential material for the CISSP certification exam.

Security Management Concepts and Principles

Security management concepts and principles are inherent elements in a security policy and solution deployment. They define the basic parameters needed for a secure environment. They also define the goals and objectives that both policy designers and system implementers must achieve to create a secure solution. It is important for real-world security professionals, as well as CISSP exam students, to understand these items thoroughly.

The primary goals and objectives of security are contained within the CIA Triad. The CIA Triad is the name given to the three primary security principles: confidentiality, integrity, and availability. Security controls must address one or more of these three principles. Security controls are typically evaluated on whether or not they address all three of these core information security tenets.
Vulnerabilities and risks are also evaluated based on the threat they pose against one or more of the CIA Triad principles. Thus, it is a good idea to be familiar with these principles and use them as guidelines and measuring sticks against which to judge all things related to security.

These three principles are considered the most important within the realm of security. However, how important each is to a specific organization depends upon the organization's security goals and requirements and on the extent to which its security might be threatened.

Confidentiality

The first principle from the CIA Triad is confidentiality. If a security mechanism offers confidentiality, it offers a high level of assurance that data, objects, or resources are not exposed to unauthorized subjects. If a threat exists against confidentiality, there is the possibility that unauthorized disclosure could take place.

In general, for confidentiality to be maintained on a network, data must be protected from unauthorized access, use, or disclosure while in storage, in process, and in transit. Unique and specific security controls are required for each of these states of data, resources, and objects to maintain confidentiality.

There are numerous attacks that focus on the violation of confidentiality. These include capturing network traffic and stealing password files as well as social engineering, port scanning, shoulder surfing, eavesdropping, sniffing, and so on.

Violations of confidentiality are not limited to directed intentional attacks. Many instances of unauthorized disclosure of sensitive or confidential information are due to human error, oversight, or ineptitude.
Events that lead to confidentiality breaches include failing to properly encrypt a transmission, failing to fully authenticate a remote system before transferring data, leaving open otherwise secured access points, accessing malicious code that opens a back door, or even walking away from an access terminal while data is displayed on the monitor. Confidentiality violations can occur because of the actions of an end user or a system administrator. They can also occur due to an oversight in a security policy or a misconfigured security control.

There are numerous countermeasures to ensure confidentiality against possible threats. These include the use of encryption, network traffic padding, strict access control, rigorous authentication procedures, data classification, and extensive personnel training.

Confidentiality and integrity are dependent upon each other. Without object integrity, confidentiality cannot be maintained. Other concepts, conditions, and aspects of confidentiality include sensitivity, discretion, criticality, concealment, secrecy, privacy, seclusion, and isolation.

Integrity

The second principle from the CIA Triad is integrity. For integrity to be maintained, objects must retain their veracity and be intentionally modified by only authorized subjects. If a security mechanism offers integrity, it offers a high level of assurance that the data, objects, and resources are unaltered from their original protected state. This includes alterations occurring while the object is in storage, in transit, or in process.
Thus, maintaining integrity means the object itself is not altered and the operating system and programming entities that manage and manipulate the object are not compromised.

Integrity can be examined from three perspectives:
- Unauthorized subjects should be prevented from making modifications.
- Authorized subjects should be prevented from making unauthorized modifications.
- Objects should be internally and externally consistent so that their data is a correct and true reflection of the real world and any relationship with any child, peer, or parent object is valid, consistent, and verifiable.

For integrity to be maintained on a system, controls must be in place to restrict access to data, objects, and resources. Additionally, activity logging should be employed to ensure that only authorized users are able to access their respective resources. Maintaining and validating object integrity across storage, transport, and processing requires numerous variations of controls and oversight.

There are numerous attacks that focus on the violation of integrity. These include viruses, logic bombs, unauthorized access, errors in coding and applications, malicious modification, intentional replacement, and system back doors.

As with confidentiality, integrity violations are not limited to intentional attacks. Many instances of unauthorized alteration of sensitive information are due to human error, oversight, or ineptitude. Events that lead to integrity breaches include accidentally deleting files; entering invalid data; altering configurations; including errors in commands, codes, and scripts; introducing a virus; and executing malicious code (such as a Trojan horse). Integrity violations can occur because of the actions of any user, including administrators.
They can also occur due to an oversight in a security policy or a misconfigured security control.

There are numerous countermeasures to ensure integrity against possible threats. These include strict access control, rigorous authentication procedures, intrusion detection systems, object/data encryption, hash total verifications, interface restrictions, input/function checks, and extensive personnel training.

Integrity is dependent upon confidentiality. Without confidentiality, integrity cannot be maintained. Other concepts, conditions, and aspects of integrity include accuracy, truthfulness, authenticity, validity, nonrepudiation, accountability, responsibility, completeness, and comprehensiveness.

Availability

The third principle from the CIA Triad is availability, which means that authorized subjects are granted timely and uninterrupted access to objects. If a security mechanism offers availability, it offers a high level of assurance that the data, objects, and resources are accessible to authorized subjects. Availability includes efficient uninterrupted access to objects and prevention of denial of service (DoS) attacks. Availability also implies that the supporting infrastructure—including network services, communications, and access control mechanisms—is functional and allows authorized users to gain authorized access.

For availability to be maintained on a system, controls must be in place to ensure authorized access and an acceptable level of performance, to quickly handle interruptions, to provide for redundancy, to maintain reliable backups, and to prevent data loss or destruction.

There are numerous threats to availability. These include device failure, software errors, and environmental issues (heat, static, etc.).
There are also some forms of attacks that focus on the violation of availability, including denial of service attacks, object destruction, and communications interruptions.

As with confidentiality and integrity, violations of availability are not limited to intentional attacks. Many availability violations are due to human error, oversight, or ineptitude. Some events that lead to availability breaches include accidentally deleting files, overutilizing a hardware or software component, under-allocating resources, and mislabeling or incorrectly classifying objects. Availability violations can occur because of the actions of any user, including administrators. They can also occur due to an oversight in a security policy or a misconfigured security control.

There are numerous countermeasures to ensure availability against possible threats. These include designing intermediary delivery systems properly, using access controls effectively, monitoring performance and network traffic, using firewalls and routers to prevent DoS attacks, implementing redundancy for critical systems, and maintaining and testing backup systems.

Availability is dependent upon both integrity and confidentiality. Without integrity and confidentiality, availability cannot be maintained. Other concepts, conditions, and aspects of availability include usability, accessibility, and timeliness.

Other Security Concepts

In addition to the CIA Triad, there is a plethora of other security-related concepts, principles, and tenets that should be considered and addressed when designing a security policy and deploying a security solution. This section discusses privacy, identification, authentication, authorization, accountability, nonrepudiation, and auditing.

Privacy

Privacy can be a difficult entity to define.
The term is used frequently in numerous contexts without much quantification or qualification. Here are some possible partial definitions of privacy:
- Prevention of unauthorized access
- Freedom from unauthorized access to information deemed personal or confidential
- Freedom from being observed, monitored, or examined without consent or knowledge

When addressing privacy in the realm of IT, it usually becomes a balancing act between individual rights and the rights or activities of an organization. Some claim that individuals have the right to control whether or not information can be collected about them and what can be done with it. Others claim that any activity performed in public view, such as most activities performed over the Internet, can be monitored without the knowledge of or permission from the individuals being watched and that the information gathered from such monitoring can be used for whatever purposes an organization deems appropriate or desirable.

On the one hand, protecting individuals from unwanted observation, direct marketing, and disclosure of private, personal, or confidential details is considered a worthy effort. On the other hand, organizations profess that demographic studies, information gleaning, and focused marketing improve business models, reduce advertising waste, and save money for all parties.

Whatever your personal or organizational stance is on the issue of online privacy, it must be addressed in an organizational security policy. Privacy is an issue not just for external visitors to your online offerings, but also for your customers, employees, suppliers, and contractors. If you gather any type of information about any person or company, you must address privacy.

In most cases, especially when privacy is being violated or restricted, the individuals and companies must be informed; otherwise, you may face legal ramifications.
Privacy issues must also be addressed when allowing or restricting personal use of e-mail, retaining e-mail, recording phone conversations, gathering information about surfing or spending habits, and so on.

Identification

Identification is the process by which a subject professes an identity and accountability is initiated. A subject must provide an identity to a system to start the process of authentication, authorization, and accountability. Providing an identity can be typing in a username; swiping a smart card; waving a token device; speaking a phrase; or positioning your face, hand, or finger for a camera or scanning device. Providing a process ID number also represents the identification process. Without an identity, a system has no way to correlate an authentication factor with the subject.

Once a subject has been identified (i.e., once the subject's identity has been recognized and verified), the identity is accountable for any further actions by that subject. IT systems track activity by identities, not by the subjects themselves. A computer doesn't know one human from another, but it does know that your user account is different from all other user accounts. A subject's identity is typically labeled as or considered to be public information.

Authentication

The process of verifying or testing that the claimed identity is valid is authentication. Authentication requires from the subject additional information that must exactly correspond to the identity indicated. The most common form of authentication is using a password. Authentication verifies the identity of the subject by comparing one or more factors against the database of valid identities (i.e., user accounts). The authentication factor used to verify identity is typically labeled as or considered to be private information.
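One common way a system keeps an authentication factor private is to store only a salted hash of it and recompute the hash at each logon. The sketch below is illustrative only and is not from this book; the function names are invented, and a production system should rely on a vetted password-hashing library rather than hand-rolled code:

```python
import hashlib
import hmac
import os

def hash_factor(password, salt=None):
    """Store only a salted hash of the authentication factor, never the factor itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def authenticate(password, salt, stored):
    """Recompute the hash of the offered factor and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_factor("correct horse")
print(authenticate("correct horse", salt, stored))  # True
print(authenticate("wrong guess", salt, stored))    # False
```

Even if the stored hash is disclosed, the factor itself is not directly revealed, which is one reason the secrecy of stored authentication data so strongly affects the security of the system.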
The capability of the subject and system to maintain the secrecy of the authentication factors for identities directly reflects the level of security of that system.

Identification and authentication are always used together as a single two-step process. Providing an identity is step one and providing the authentication factor(s) is step two. Without both, a subject cannot gain access to a system—neither element alone is useful.

There are several types of authentication information a subject can provide (e.g., something you know, something you have). Each authentication technique or factor has its unique benefits and drawbacks. Thus, it is important to evaluate each mechanism in light of the environment in which it will be deployed to determine viability. Authentication was discussed at length in Chapter 1, "Accountability and Access Control."

Authorization

Once a subject is authenticated, access must be authorized. The process of authorization ensures that the requested activity or access to an object is possible given the rights and privileges assigned to the authenticated identity. In most cases, the system evaluates an access control matrix that compares the subject, the object, and the intended activity. If the specific action is allowed, the subject is authorized. If the specific action is not allowed, the subject is not authorized.

Keep in mind that just because a subject has been identified and authenticated does not automatically mean they have been authorized. It is possible for a subject to be logged onto a network (i.e., identified and authenticated) but be blocked from accessing a file or printing to a printer (i.e., by not being authorized to perform that activity). Most network users are authorized to perform only a limited number of activities on a specific collection of resources. Identification and authentication are all-or-nothing aspects of access control.
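The access control matrix evaluation described in this section can be sketched as a simple lookup from (subject, object) pairs to permitted actions. This is a toy model of the concept, not any particular product's mechanism; the subjects, objects, and actions are invented for illustration:

```python
# Hypothetical access control matrix: (subject, object) -> set of allowed actions.
ACL_MATRIX = {
    ("alice", "report.doc"): {"read", "print"},
    ("alice", "print_queue"): {"submit"},          # can print, cannot alter the queue
    ("bob", "report.doc"): {"read", "write", "delete"},
}

def authorize(subject, obj, action):
    """Authorization check for an already identified and authenticated subject.

    Authorization is decided per action: the same subject may be allowed
    some operations on an object and denied others.
    """
    return action in ACL_MATRIX.get((subject, obj), set())

print(authorize("alice", "report.doc", "read"))    # True
print(authorize("alice", "report.doc", "delete"))  # False: authenticated, not authorized
```

Note how an unknown subject/object pair defaults to an empty permission set, so anything not explicitly granted is denied.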
Authorization has a wide range of variations between all or nothing for each individual object within the environment. A user may be able to read a file but not delete it, print a document but not alter the print queue, or log on to a system but not access any resources.

Auditing

Auditing, or monitoring, is the programmatic means by which subjects are held accountable for their actions while authenticated on a system. Auditing is also the process by which unauthorized or abnormal activities are detected on a system. Auditing is the recording of the activities of subjects and objects, as well as of the core system functions that maintain the operating environment and the security mechanisms. The audit trails created by recording system events to logs can be used to evaluate the health and performance of a system. System crashes may indicate faulty programs, corrupt drivers, or intrusion attempts. The event logs leading up to a crash can often be used to discover the reason a system failed. Log files provide an audit trail for re-creating the history of an event, intrusion, or system failure. Auditing is needed to detect malicious actions by subjects, attempted intrusions, and system failures, and to reconstruct events, provide evidence for prosecution, and produce problem reports and analysis. Auditing is usually a native feature of an operating system and most applications and services. Thus, configuring the system to record information about specific types of events is fairly straightforward.

For more information on configuring and administering auditing and logging, see Chapter 14, "Auditing and Monitoring."

Accountability

An organization's security policy can be properly enforced only if accountability is maintained. In other words, security can be maintained only if subjects are held accountable for their actions.
Effective accountability relies upon the capability to prove a subject's identity and track their activities. Accountability is established by linking a human to the activities of an online identity through the security services and mechanisms of auditing, authorization, authentication, and identification.

Nonrepudiation

Nonrepudiation ensures that the subject of an activity or event cannot deny that the event occurred. Nonrepudiation prevents a subject from claiming not to have sent a message, not to have performed an action, or not to have been the cause of an event. It is made possible through identity, authentication, authorization, accountability, and auditing. Nonrepudiation can be established using digital certificates, session identifiers, transaction logs, and numerous other transactional and access control mechanisms.

Protection Mechanisms

Another aspect of security solution concepts and principles is the element of protection mechanisms. These are common characteristics of security controls. Not all security controls must have them, but many controls offer their protection for confidentiality, integrity, and availability through the use of these mechanisms.

Layering

Layering, also known as defense in depth, is simply the use of multiple controls in a series. No one specific control can protect against all possible threats. The use of a multilayered solution allows for numerous different and specific controls to be brought to bear against whatever threats come to pass. When security solutions are designed in layers, most threats are eliminated, mitigated, or thwarted.

Using layers in a series rather than in parallel is an important concept. Performing security restrictions in a series means performing them one after the other in a linear fashion.
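A series arrangement of controls can be sketched as a chain in which every control must pass each request before it is admitted. The sketch below is our own illustration of the idea, not an example from this book, and the control functions are invented stand-ins for real devices such as a firewall, an IDS, and an authentication gateway:

```python
# Each layered control inspects the request and returns True (pass) or False (block).
def firewall(request):
    return request.get("port") in {80, 443}

def ids(request):
    return "attack-signature" not in request.get("payload", "")

def auth_gateway(request):
    return request.get("token") == "valid"

CONTROLS_IN_SERIES = [firewall, ids, auth_gateway]

def admit(request):
    """Series (defense-in-depth) evaluation: every control must pass.

    A threat missed by one layer can still be caught by a later layer,
    because all controls inspect every request.
    """
    return all(control(request) for control in CONTROLS_IN_SERIES)

print(admit({"port": 443, "payload": "GET /", "token": "valid"}))             # True
print(admit({"port": 443, "payload": "attack-signature", "token": "valid"}))  # False
```

In a parallel arrangement, by contrast, a request would need to pass only whichever single checkpoint it happened to reach, which is exactly the weakness described here.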
Only through a series configuration will each attack be scanned, evaluated, or mitigated by every security control. A single failure of a security control does not render the entire solution ineffective. If security controls were implemented in parallel, a threat could pass through a single checkpoint that did not address its particular malicious activity. Serial configurations are very narrow but very deep, whereas parallel configurations are very wide but very shallow. Parallel systems are useful in distributed computing applications, but parallelism is not a useful concept in the realm of security.

Think of physical entrances to buildings. A parallel configuration is used for shopping malls. There are many doors in many locations around the entire perimeter of the mall. A series configuration would most likely be used in a bank or an airport. A single entrance is provided and that entrance is actually several gateways or checkpoints that must be passed in sequential order to gain entry into active areas of the building.

Layering also includes the concept that networks comprise numerous separate entities, each with its own unique security controls and vulnerabilities. In an effective security solution, there is a synergy between all networked systems that creates a single security front. The use of separate security systems creates a layered security solution.

Abstraction

Abstraction is used for efficiency. Similar elements are put into groups, classes, or roles that are assigned security controls, restrictions, or permissions as a collective. Thus, the concept of abstraction is used when classifying objects or assigning roles to subjects. The concept of abstraction also includes the definition of object and subject types or of objects themselves (i.e., a data structure used to define a template for a class of entities).
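The idea of a template whose security attributes apply collectively to a whole class of entities can be sketched in a few lines. This is a hypothetical illustration of the concept only; the class names and permissions below are invented:

```python
# Security attributes assigned at the class level (the abstraction) are
# inherited by every instance, rather than configured object by object.
class SensitiveRecord:
    allowed_actions = {"read"}  # collective control for the whole class

    def __init__(self, name):
        self.name = name

class AuditLogRecord(SensitiveRecord):
    allowed_actions = {"read", "append"}  # a subtype refines the collective policy

rec = AuditLogRecord("login-events")
print("append" in rec.allowed_actions)  # True
print("delete" in rec.allowed_actions)  # False
```

Assigning the control once at the class level, instead of per object, is the efficiency gain abstraction provides.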
Abstraction is used to define what types of data an object can contain, what types of functions can be performed on or by that object, and what capabilities that object has. Abstraction simplifies security by enabling you to assign security controls to a group of objects collected by type or function.

Data Hiding

Data hiding is exactly what it sounds like: preventing data from being discovered or accessed by a subject. Keeping a database from being accessed by unauthorized visitors is a form of data hiding, as is restricting a subject at a lower classification level from accessing data at a higher classification level. Preventing an application from accessing hardware directly is also a form of data hiding. Data hiding is often a key element in security controls as well as in programming.

Encryption

Encryption is the art and science of hiding the meaning or intent of a communication from unintended recipients. Encryption can take many forms and be applied to every type of electronic communication, including text, audio, and video files, as well as applications themselves. Encryption is a very important element in security controls, especially in regard to the transmission of data between systems. There are various strengths of encryption, each of which is designed and/or appropriate for a specific use or purpose. Encryption is discussed at length in Chapters 9, "Cryptography and Private Key Algorithms," and 10, "PKI and Cryptographic Applications."

Change Control/Management

Another important aspect of security management is the control or management of change. Change in a secure environment can introduce loopholes, overlaps, missing objects, and oversights that can lead to new vulnerabilities. The only way to maintain security in the face of change is to systematically manage change.
This usually involves extensive planning, testing, logging, auditing, and monitoring of activities related to security controls and mechanisms. The records of changes to an environment are then used to identify agents of change, whether those agents are objects, subjects, programs, communication pathways, or even the network itself.

The goal of change management is to ensure that any change does not lead to reduced or compromised security. Change management is also responsible for making it possible to roll back any change to a previous secured state. Change management is a formal requirement only for systems complying with the Trusted Computer System Evaluation Criteria (TCSEC) classifications of B2, B3, and A1. However, change management can be implemented on any system, regardless of its required level of security. Ultimately, change management improves the security of an environment by protecting implemented security from unintentional or tangential diminishment. While an important goal of change management is to prevent unwanted reductions in security, its primary purpose is to make all changes subject to detailed documentation and auditing and thus able to be reviewed and scrutinized by management.

Change management should be used to oversee alterations to every aspect of a system, including hardware configuration and OS and application software. Change management should be included in design, development, testing, evaluation, implementation, distribution, evolution, growth, ongoing operation, and modification. It requires a detailed inventory of every component and configuration.
It also requires the collection and maintenance of complete documentation for every system component, from hardware to software and from configuration settings to security features.

The change control process of configuration or change management has several goals or requirements:
- Implement changes in a monitored and orderly manner. Changes are always controlled.
- A formalized testing process is included to verify that a change produces expected results.
- All changes can be reversed.
- Users are informed of changes before they occur to prevent loss of productivity.
- The effects of changes are systematically analyzed.
- Negative impact of changes on capabilities, functionality, and performance is minimized.

One example of a change management process is a parallel run, which is a type of new system deployment testing where the new system and the old system are run in parallel. Each major or significant user process is performed on each system simultaneously to ensure that the new system supports all required business functionality that the old system supported or provided.

Data Classification

Data classification is the primary means by which data is protected based on its need for secrecy, sensitivity, or confidentiality. It is inefficient to treat all data the same when designing and implementing a security system. Some data items need more security than others. Securing everything at a low security level means sensitive data is easily accessible. Securing everything at a high security level is too expensive and restricts access to unclassified, noncritical data.
Data classification is used to determine how much effort, money, and resources are allocated to protect the data and control access to it.

The primary objective of data classification schemes is to formalize and stratify the process of securing data based on assigned labels of importance and sensitivity. Data classification is used to provide security mechanisms for the storage, processing, and transfer of data. It also addresses how data is removed from a system and destroyed.

The following are benefits of using a data classification scheme:
- It demonstrates an organization's commitment to protecting valuable resources and assets.
- It assists in identifying those assets that are most critical or valuable to the organization.
- It lends credence to the selection of protection mechanisms.
- It is often required for regulatory compliance or legal restrictions.
- It helps to define access levels, types of authorized uses, and parameters for declassification and/or destruction of resources that are no longer valuable.

The criteria by which data is classified vary based on the organization performing the classification.
However, there are numerous generalities that can be gleaned from common or standardized classification systems:

- Usefulness of the data
- Timeliness of the data
- Value or cost of the data
- Maturity or age of the data
- Lifetime of the data (or when it expires)
- Association with personnel
- Data disclosure damage assessment (i.e., how disclosure of the data would affect the organization)
- Data modification damage assessment (i.e., how modification of the data would affect the organization)
- National security implications of the data
- Authorized access to the data (i.e., who has access to the data)
- Restriction from the data (i.e., who is restricted from the data)
- Maintenance and monitoring of the data (i.e., who should maintain and monitor the data)
- Storage of the data

Using whatever criteria are appropriate for the organization, data is evaluated and an appropriate data classification label is assigned to it. In some cases, the label is added to the data object. In other cases, labeling is simply assigned by the placement of the data into a storage mechanism or behind a security protection mechanism.

To implement a classification scheme, there are seven major steps or phases that you must take:

1. Identify the custodian and define their responsibilities.
2. Specify the evaluation criteria for how the information will be classified and labeled.
3. Classify and label each resource.
The owner conducts this step, but it should be reviewed by a supervisor.
4. Document any exceptions to the classification policy that are discovered and integrate them into the evaluation criteria.
5. Select the security controls that will be applied to each classification level to provide the necessary level of protection.
6. Specify the procedures for declassifying resources and the procedures for transferring custody of a resource to an external entity.
7. Create an enterprise-wide awareness program to instruct all personnel about the classification system.

Declassification is often overlooked when designing a classification system and documenting the usage procedures. Declassification is required once an asset no longer warrants or needs the protection of its currently assigned classification or sensitivity level. In other words, if the asset were new, it would be assigned a lower sensitivity label than it currently carries. When you fail to declassify assets as needed, you waste security resources and degrade the value and protection of the higher sensitivity levels.

The two common classification schemes are government/military classification and commercial business/private sector classification. There are five levels of government/military classification (listed highest to lowest):

Top secret
The highest level of classification. Unauthorized disclosure of top secret data will have drastic effects and cause grave damage to national security.

Secret
Used for data of a restricted nature. Unauthorized disclosure of data classified as secret will have significant effects and cause critical damage to national security.

Confidential
Used for data of a confidential nature. Unauthorized disclosure of data classified as confidential will have noticeable effects and cause serious damage to national security.
This classification is used for all data between the secret and sensitive but unclassified classifications.

Sensitive but unclassified
Used for data of a sensitive or private nature whose disclosure would not cause significant damage.

Unclassified
The lowest level of classification. Used for data that is neither sensitive nor classified. Disclosure of unclassified data neither compromises confidentiality nor causes any noticeable damage.

An easy way to remember the names of the five levels of the government or military classification scheme in their correct order is with a memorization acronym: US Can Stop Terrorism. The five uppercase letters represent the five named classification levels, and they appear in this phrase in the correct order from least secure on the left to most secure on the right (or bottom to top in the preceding list).

The classifications of confidential, secret, and top secret are collectively known or labeled as classified. Often, revealing the actual classification of data to unauthorized individuals is a violation of that data in and of itself. Thus, the term classified is generally used to refer to any data that is ranked above sensitive but unclassified. All classified data is exempt from the Freedom of Information Act as well as other laws and regulations. The U.S. military classification scheme is most concerned with the sensitivity of data and focuses on the protection of confidentiality (i.e., prevention of disclosure). You can roughly define each level or label of classification by the level of damage that would be caused in the event of a confidentiality violation.
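Because the five government/military levels form a strict ordering, a sensitivity comparison reduces to a simple dominance check. The following Python sketch is illustrative only (the enum and function names are not from the exam material); it models the level ordering, the collective "classified" label, and a confidentiality-focused read check.

```python
from enum import IntEnum

class GovClassification(IntEnum):
    """Hypothetical model of the five government/military levels,
    ordered from least to most sensitive ("US Can Stop Terrorism")."""
    UNCLASSIFIED = 0
    SENSITIVE_BUT_UNCLASSIFIED = 1
    CONFIDENTIAL = 2
    SECRET = 3
    TOP_SECRET = 4

def is_classified(level):
    # Confidential, secret, and top secret are collectively "classified."
    return level >= GovClassification.CONFIDENTIAL

def may_read(clearance, data_level):
    # Confidentiality-focused dominance check: a subject may read data
    # only at or below their own clearance level.
    return clearance >= data_level
```

With this model, a subject cleared to secret may read confidential data, but a subject cleared to confidential may not read top secret data.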
Data from the Top Secret level would cause grave damage to national security, while data from the Unclassified level would not cause any serious damage to national or localized security.

There are four levels of commercial business/private sector classification (listed highest to lowest):

Confidential
The highest level of classification. Used for data that is extremely sensitive and for internal use only. A significant negative impact could occur for the company if confidential data is disclosed.

Private
Used for data that is of a private or personal nature and intended for internal use only. A significant negative impact could occur for the company or individuals if private data is disclosed.

Confidential and private data in a commercial business/private sector classification scheme both require roughly the same level of security protection. The real difference between the two labels is that confidential data is used for company data while private data is used only for data related to individuals, such as medical data.

Sensitive
Used for data that is more classified than public data. A negative impact could occur for the company if sensitive data is disclosed.

Public
The lowest level of classification. Used for all data that does not fit in one of the higher classifications. Its disclosure does not have a serious negative impact on the organization.

Another classification often used in the commercial business/private sector is proprietary. Proprietary data is a form of confidential information. If proprietary data is disclosed, it can have drastic effects on the competitive edge of an organization.

Summary

Security management concepts and principles are inherent elements in a security policy and in solution deployment. They define the basic parameters needed for a secure environment.
They also define the goals and objectives that both policy designers and system implementers must achieve in order to create a secure solution. It is important for real-world security professionals as well as CISSP exam students to understand these items thoroughly.

The primary goals and objectives of security are contained within the CIA Triad: confidentiality, integrity, and availability. These three principles are considered the most important within the realm of security. Their importance to an organization depends on the organization's security goals and requirements and on how much of a threat to security exists in its environment.

The first principle from the CIA Triad is confidentiality, the principle that objects are not disclosed to unauthorized subjects. Security mechanisms that offer confidentiality offer a high level of assurance that data, objects, or resources are not exposed to unauthorized subjects. If a threat exists against confidentiality, there is the possibility that unauthorized disclosure could take place.

The second principle from the CIA Triad is integrity, the principle that objects retain their veracity and are intentionally modified by only authorized subjects. Security mechanisms that offer integrity offer a high level of assurance that the data, objects, and resources are unaltered from their original protected state. This includes alterations occurring while the object is in storage, in transit, or in process. Maintaining integrity means the object itself is not altered, nor are the operating system and programming entities that manage and manipulate the object compromised.

The third principle from the CIA Triad is availability, the principle that authorized subjects are granted timely and uninterrupted access to objects. Security mechanisms that offer availability offer a high level of assurance that the data, objects, and resources are accessible by authorized subjects.
Availability includes efficient uninterrupted access to objects and prevention of denial-of-service attacks. It also implies that the supporting infrastructure is functional and allows authorized users to gain authorized access.

Other security-related concepts, principles, and tenets that should be considered and addressed when designing a security policy and deploying a security solution are privacy, identification, authentication, authorization, accountability, nonrepudiation, and auditing.

Yet another aspect of security solution concepts and principles is the elements of protection mechanisms: layering, abstraction, data hiding, and the use of encryption. These are common characteristics of security controls, and although not all security controls must have them, many controls use these mechanisms to protect confidentiality, integrity, and availability.

The control or management of change is an important aspect of security management practices. When a secure environment is changed, loopholes, overlaps, missing objects, and oversights can lead to new vulnerabilities. You can, however, maintain security by systematically managing change. This typically involves extensive logging, auditing, and monitoring of activities related to security controls and security mechanisms. The resulting data is then used to identify agents of change, whether objects, subjects, programs, communication pathways, or even the network itself.

Data classification is the primary means by which data is protected based on its secrecy, sensitivity, or confidentiality. Because some data items need more security than others, it is inefficient to treat all data the same when designing and implementing a security system.
If everything is secured at a low security level, sensitive data is easily accessible, but securing everything at a high security level is too expensive and restricts access to unclassified, noncritical data. Data classification is used to determine how much effort, money, and resources are allocated to protect the data and control access to it.

Exam Essentials

Understand the CIA Triad element confidentiality.
Confidentiality is the principle that objects are not disclosed to unauthorized subjects. Know why it is important, mechanisms that support it, attacks that focus on it, and effective countermeasures.

Understand the CIA Triad element integrity.
Integrity is the principle that objects retain their veracity and are intentionally modified by only authorized subjects. Know why it is important, mechanisms that support it, attacks that focus on it, and effective countermeasures.

Understand the CIA Triad element availability.
Availability is the principle that authorized subjects are granted timely and uninterrupted access to objects. Know why it is important, mechanisms that support it, attacks that focus on it, and effective countermeasures.

Know how privacy fits into the realm of IT security.
Know the multiple meanings/definitions of privacy, why it is important to protect, and the issues surrounding it, especially in a work environment.

Be able to explain how identification works.
Identification is the process by which a subject professes an identity and accountability is initiated. A subject must provide an identity to a system to start the process of authentication, authorization, and accountability.

Understand the process of authentication.
The process of verifying or testing that a claimed identity is valid is authentication.
Authentication requires information from the subject that must exactly correspond to the identity indicated.

Know how authorization fits into a security plan.
Once a subject is authenticated, its access must be authorized. The process of authorization ensures that the requested activity or object access is possible given the rights and privileges assigned to the authenticated identity.

Be able to explain the auditing process.
Auditing, or monitoring, is the programmatic means by which subjects are held accountable for their actions while authenticated on a system. Auditing is also the process by which unauthorized or abnormal activities are detected on a system. Auditing is needed to detect malicious actions by subjects, attempted intrusions, and system failures and to reconstruct events, provide evidence for prosecution, and produce problem reports and analysis.

Understand the importance of accountability.
An organization's security policy can be properly enforced only if accountability is maintained. In other words, security can be maintained only if subjects are held accountable for their actions. Effective accountability relies upon the capability to prove a subject's identity and track their activities.

Be able to explain nonrepudiation.
Nonrepudiation ensures that the subject of an activity or event cannot deny that the event occurred. It prevents a subject from claiming not to have sent a message, not to have performed an action, or not to have been the cause of an event.

Know how layering simplifies security.
Layering is simply the use of multiple controls in series.
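The "multiple controls in series" idea can be sketched as a chain of checks that must all pass before access is granted. This Python fragment is a hypothetical illustration (the specific controls shown are invented for the example, not taken from the exam material):

```python
def passes_all(controls, request):
    """Grant access only if every control in the series allows the
    request; the checks run one after another, in a linear fashion."""
    return all(control(request) for control in controls)

# Hypothetical controls applied in series: a network-layer filter,
# an authentication check, and a privilege (authorization) check.
controls = [
    lambda r: r.get("port") == 443,            # network-layer filter
    lambda r: r.get("authenticated", False),   # identity verified
    lambda r: "read" in r.get("rights", ()),   # privilege check
]
```

Note that a request must satisfy every control; conversely, if one control is bypassed or fails, the remaining controls in the series still stand between the attacker and the object, which is why a single control failure does not render the whole solution ineffective.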
Using a multilayered solution allows for numerous different and specific controls to be brought to bear against whatever threats come to pass.

Be able to explain the concept of abstraction.
Abstraction is used to collect similar elements into groups, classes, or roles that are assigned security controls, restrictions, or permissions as a collective. It adds efficiency to carrying out a security plan.

Understand data hiding.
Data hiding is exactly what it sounds like: preventing data from being discovered or accessed by a subject. It is often a key element in security controls as well as in programming.

Understand the need for encryption.
Encryption is the art and science of hiding the meaning or intent of a communication from unintended recipients. It can take many forms and be applied to every type of electronic communication, including text, audio, and video files, as well as programs themselves. Encryption is a very important element in security controls, especially in regard to the transmission of data between systems.

Be able to explain the concepts of change control and change management.
Change in a secure environment can introduce loopholes, overlaps, missing objects, and oversights that can lead to new vulnerabilities. The only way to maintain security in the face of change is to systematically manage change.

Know why and how data is classified.
Data is classified to simplify the process of assigning security controls to groups of objects rather than to individual objects. The two common classification schemes are government/military and commercial business/private sector.
Know the five levels of government/military classification and the four levels of commercial business/private sector classification.

Understand the importance of declassification.
Declassification is required once an asset no longer warrants the protection of its currently assigned classification or sensitivity level.

Review Questions

1. Which of the following contains the primary goals and objectives of security?
A. A network's border perimeter
B. The CIA Triad
C. A stand-alone system
D. The Internet

2. Vulnerabilities and risks are evaluated based on their threats against which of the following?
A. One or more of the CIA Triad principles
B. Data usefulness
C. Due care
D. Extent of liability

3. Which of the following is a principle of the CIA Triad that means authorized subjects are granted timely and uninterrupted access to objects?
A. Identification
B. Availability
C. Encryption
D. Layering

4. Which of the following is not considered a violation of confidentiality?
A. Stealing passwords
B. Eavesdropping
C. Hardware destruction
D. Social engineering

5. Which of the following is not true?
A. Violations of confidentiality include human error.
B. Violations of confidentiality include management oversight.
C. Violations of confidentiality are limited to direct intentional attacks.
D. Violations of confidentiality can occur when a transmission is not properly encrypted.

6. Confidentiality is dependent upon which of the following?
A. Accountability
B. Availability
C. Nonrepudiation
D. Integrity

7. If a security mechanism offers availability, then it offers a high level of assurance that the data, objects, and resources are _______________ by authorized subjects.
A. Controlled
B. Audited
C. Accessible
D. Repudiated

8. Which of the following describes the freedom from being observed, monitored, or examined without consent or knowledge?
A. Integrity
B. Privacy
C. Authentication
D. Accountability

9. All but which of the following items require awareness for all individuals affected?
A. The restriction of personal e-mail
B. Recording phone conversations
C. Gathering information about surfing habits
D. The backup mechanism used to retain e-mail messages

10. Which of the following is typically not used as an identification factor?
A. Username
B. Smart card swipe
C. Fingerprint scan
D. A challenge/response token device

11. What ensures that the subject of an activity or event cannot deny that the event occurred?
A. CIA Triad
B. Abstraction
C. Nonrepudiation
D. Hash totals

12. Which of the following is the most important and distinctive concept in relation to layered security?
A. Multiple
B. Series
C. Parallel
D. Filter

13. Which of the following is not considered an example of data hiding?
A. Preventing an authorized reader of an object from deleting that object
B. Keeping a database from being accessed by unauthorized visitors
C. Restricting a subject at a lower classification level from accessing data at a higher classification level
D. Preventing an application from accessing hardware directly

14. What is the primary goal of change management?
A. Maintaining documentation
B. Keeping users informed of changes
C. Allowing rollback of failed changes
D. Preventing security compromises

15. What is the primary objective of data classification schemes?
A. To control access to objects for authorized subjects
B. To formalize and stratify the process of securing data based on assigned labels of importance and sensitivity
C. To establish a transaction trail for auditing accountability
D. To manipulate access controls to provide for the most efficient means to grant or restrict functionality

16. Which of the following is typically not a characteristic considered when classifying data?
A. Value
B. Size of object
C. Useful lifetime
D. National security implications

17. What are the two common data classification schemes?
A. Military and private sector
B. Personal and government
C. Private sector and unrestricted sector
D. Classified and unclassified

18. Which of the following is the lowest military data classification for classified data?
A. Sensitive
B. Secret
C. Sensitive but unclassified
D. Private

19. Which commercial business/private sector data classification is used to control information about individuals within an organization?
A. Confidential
B. Private
C. Sensitive
D. Proprietary

20. Data classifications are used to focus security controls over all but which of the following?
A. Storage
B. Processing
C. Layering
D. Transfer

Answers to Review Questions

1. B. The primary goals and objectives of security are confidentiality, integrity, and availability, commonly referred to as the CIA Triad.

2. A. Vulnerabilities and risks are evaluated based on their threats against one or more of the CIA Triad principles.

3. B. Availability means that authorized subjects are granted timely and uninterrupted access to objects.

4. C. Hardware destruction is a violation of availability and possibly integrity. Violations of confidentiality include capturing network traffic, stealing password files, social engineering, port scanning, shoulder surfing, eavesdropping, and sniffing.

5. C. Violations of confidentiality are not limited to direct intentional attacks.
Many instances of unauthorized disclosure of sensitive or confidential information are due to human error, oversight, or ineptitude.

6. D. Without integrity, confidentiality cannot be maintained.

7. C. Accessibility of data, objects, and resources is the goal of availability. If a security mechanism offers availability, then it is highly likely that the data, objects, and resources are accessible by authorized subjects.

8. B. Privacy is freedom from being observed, monitored, or examined without consent or knowledge.

9. D. Users should be aware that e-mail messages are retained, but the backup mechanism used to perform this operation does not need to be disclosed to them.

10. D. A challenge/response token device is almost exclusively used as an authentication factor, not an identification factor.

11. C. Nonrepudiation ensures that the subject of an activity or event cannot deny that the event occurred.

12. B. Layering is the deployment of multiple security mechanisms in a series. When security restrictions are performed in a series, they are performed one after the other in a linear fashion. Therefore, a single failure of a security control does not render the entire solution ineffective.

13. A. Preventing an authorized reader of an object from deleting that object is just an access control, not data hiding. If you can read an object, it is not hidden from you.

14. D. The prevention of security compromises is the primary goal of change management.

15. B. The primary objective of data classification schemes is to formalize and stratify the process of securing data based on assigned labels of importance and sensitivity.

16. B. Size is not a criterion for establishing data classification. When classifying an object, you should take value, lifetime, and security implications into consideration.

17. A. Military (or government) and private sector (or commercial business) are the two common data classification schemes.

18. B. Of the options listed, secret is the lowest classified military data classification.

19. B. The commercial business/private sector data classification of private is used to protect information about individuals.

20. C. Layering is a core aspect of security mechanisms, but it is not a focus of data classifications.

Chapter 6: Asset Value, Policies, and Roles

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:

- Employment Policies and Practices
- Roles and Responsibilities
- Policies, Standards, Guidelines, and Procedures
- Risk Management
- Security Awareness Training
- Security Management Planning

The Security Management Practices domain of the Common Body of Knowledge (CBK) for the CISSP certification exam deals with hiring practices, security roles, formalizing security structure, risk management, awareness training, and management planning.

Because of the complexity and importance of hardware and software controls, security management for employees is often overlooked in overall security planning. This chapter explores the human side of security, from establishing secure hiring practices and job descriptions to developing an employee infrastructure. Additionally, employee training, management, and termination practices are considered an integral part of creating a secure environment. Finally, we examine how to assess and manage security risks.

Employment Policies and Practices

Humans are the weakest element in any security solution. No matter what physical or logical controls are deployed, humans can discover ways to avoid them, circumvent or subvert them, or disable them.
Thus, it is important to take into account the humanity of your users when designing and deploying security solutions for your environment.

Issues, problems, and compromises related to humans occur at all stages of a security solution's development. This is because humans are involved throughout the development, deployment, and ongoing administration of any solution. Therefore, you must evaluate the effect users, designers, programmers, developers, managers, and implementers have on the process.

Security Management for Employees

Hiring new staff typically involves several distinct steps: creating a job description, setting a classification for the job, screening candidates, and hiring and training the one best suited for the job. Without a job description, there is no consensus on what type of individual should be hired. Personnel should be added to an organization because there is a need for their specific skills and experience. Any job description for any position within an organization should address relevant security issues. You must consider items such as whether the position requires handling of sensitive material or access to classified information. In effect, the job description defines the roles to which an employee needs to be assigned to perform their work tasks. The job description should define the type and extent of access the position requires on the secured network. Once these issues have been resolved, assigning a security classification to the job description is fairly standard.

Important elements in constructing a job description include separation of duties, job responsibilities, and job rotation.

Separation of duties
Separation of duties is the security concept in which critical, significant, and sensitive work tasks are divided among several individuals.
This prevents any one person from having the ability to undermine or subvert vital security mechanisms; doing so would require collusion, the unwanted activity of multiple people working together.

Job responsibilities
Job responsibilities are the specific work tasks an employee is required to perform on a regular basis. Depending on their responsibilities, employees require access to various objects, resources, and services. On a secured network, users must be granted access privileges for those elements related to their work tasks. To maintain the greatest security, access should be assigned according to the principle of least privilege. The principle of least privilege states that in a secured environment, users should be granted the minimum amount of access necessary for them to complete their required work tasks or job responsibilities.

Job rotation
Job rotation, or rotating employees among numerous job positions, is simply a means by which an organization improves its overall security. Job rotation serves two functions. First, it provides a type of knowledge redundancy. When multiple employees are each capable of performing the work tasks required by several job positions, the organization is less likely to experience serious downtime or loss in productivity if an illness or other incident keeps one or more employees out of work for an extended period of time. Second, moving personnel around reduces the risk of fraud, data modification, theft, sabotage, and misuse of information. The longer a person works in a specific position, the more likely they are to be assigned additional work tasks and thus expand their privileges and access. As a person becomes increasingly familiar with their work tasks, they may abuse their privileges for personal gain or malice. If misuse or abuse is committed by one employee, it will be easier to detect by another employee who knows the job position and work responsibilities.
Therefore, job rotation also provides a form of peer auditing.

When multiple people work together to perpetrate a crime, it's called collusion. The likelihood that a coworker will be willing to collaborate on an illegal or abusive scheme is reduced due to the higher risk of detection created by the combination of separation of duties, restricted job responsibilities, and job rotation.

Job descriptions are not used exclusively for the hiring process; they should be maintained throughout the life of the organization. Only through detailed job descriptions can a comparison be made between what a person should be responsible for and what they actually are responsible for. It is a managerial task to ensure that job descriptions overlap as little as possible and that one worker's responsibilities do not drift or encroach on another's. Likewise, managers should audit privilege assignments to ensure that workers do not obtain access that is not strictly required for them to accomplish their work tasks.

Screening and Background Checks

Screening candidates for a specific position is based on the sensitivity and classification defined by the job description. The sensitivity and classification of a specific position depend upon the level of harm that could be caused by accidental or intentional violations of security by a person in the position. Thus, the thoroughness of the screening process should reflect the sensitivity of the position to be filled.

Background checks and security clearances are essential elements in proving that a candidate is adequate, qualified, and trustworthy for a secured position.
Background checks include obtaining a candidate's work and educational history; checking references; interviewing colleagues, neighbors, and friends; checking police and government records for arrests or illegal activities; verifying identity through fingerprints, driver's license, and birth certificate; and holding a personal interview. This process could also include a polygraph test, drug testing, and personality testing/evaluation.

Creating Employment Agreements

When a new employee is hired, they should sign an employment agreement. Such a document outlines the rules and restrictions of the organization, the security policy, the acceptable use and activities policies, details of the job description, violations and consequences, and the length of time the position is to be filled by the employee. Many of these items may be separate documents. In such a case, the employment agreement is used to verify that the employment candidate has read and understood the associated documentation for their prospective job position.

In addition to employment agreements, there may be other security-related documentation that must be addressed. One common document is a nondisclosure agreement (NDA). An NDA is used to protect the confidential information within an organization from being disclosed by a former employee. When a person signs an NDA, they agree not to disclose any information that is defined as confidential to anyone outside of the organization. Violations of an NDA are often met with strict penalties.

Throughout the employment lifetime of personnel, managers should regularly audit the job descriptions, work tasks, privileges, and so on for every staff member. It is common for work tasks and privileges to drift over time. This can cause some tasks to be overlooked and others to be performed multiple times. Drifting can also result in security violations.
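A periodic privilege audit like the one just described boils down to comparing what a job description defines against what is actually granted. The following Python sketch is a hypothetical helper (the function and privilege names are invented for illustration, not part of any standard):

```python
def privilege_drift(defined, granted):
    """Compare the privileges a job description defines against those
    actually granted (hypothetical helper for a periodic access review)."""
    return {
        "excess": granted - defined,   # drifted beyond the job description
        "missing": defined - granted,  # defined but never assigned
    }

# Example review: one privilege has drifted beyond the job description.
report = privilege_drift(
    defined={"read_ledger", "post_entries"},
    granted={"read_ledger", "post_entries", "approve_payments"},
)
print(report["excess"])  # {'approve_payments'}
```

In this example, flagging the excess `approve_payments` privilege is exactly the kind of finding a manager would act on, since it may also indicate a separation-of-duties problem.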
Regularly reviewing the boundaries defined by each job description in relation to what is actually occurring aids in keeping security violations to a minimum. A key part of this review process is mandatory vacations. In many secured environments, mandatory vacations of one to two weeks are used to audit and verify the work tasks and privileges of employees. This removes the employee from the work environment and places a different worker in their position. This often results in easy detection of abuse, fraud, or negligence.
Employee Termination
When an employee must be terminated, there are numerous issues that must be addressed. A termination procedure policy is essential to maintaining a secure environment even in the face of a disgruntled employee who must be removed from the organization. The reactions of terminated employees can range from understanding acceptance to violent, destructive rage. A sensible procedure for handling terminations must be designed and implemented to reduce incidents.
The termination of an employee should be handled in a private and respectful manner. However, this does not mean that precautions should not be taken. Terminations should take place with at least one witness, preferably a higher-level manager and/or a security guard. Once the employee has been informed of their release, they should be escorted off the premises immediately. Before the employee is released, all organization-specific identification, access, or security badges as well as cards, keys, and access tokens should be collected.
When possible, an exit interview should be performed. However, this typically depends upon the mental state of the employee upon release and numerous other factors. If an exit interview is unfeasible immediately upon termination, it should be conducted as soon as possible.
The pri-\nmary purpose of the exit interview is to review the liabilities and restrictions placed on the \nformer employee based on the employment agreement, nondisclosure agreement, and any other \nsecurity-related documentation.\nThe following list includes some other issues that should be handled as soon as possible:\n\u0002\nMake sure the employee returns any organizational equipment or supplies from their vehi-\ncle or home.\n\u0002\nRemove or disable the employee’s network user account.\n\u0002\nNotify human resources to issue a final paycheck, pay any unused vacation time, and ter-\nminate benefit coverage.\n\u0002\nArrange for a member of the security department to accompany the released employee \nwhile they gather their personal belongings from the work area.\nIn most cases, you should disable or remove an employee’s system access at the same time or \njust before they are notified of being terminated. This is especially true if that employee is capable \nof accessing confidential data or has the expertise or access to alter or damage data or services. \nFailing to restrict released employees’ activities can leave your organization open to a wide range \nof vulnerabilities, including theft and destruction of both physical property and logical data.\nSecurity Roles\nA security role is the part an individual plays in the overall scheme of security implementation \nand administration within an organization. Security roles are not necessarily prescribed in job \ndescriptions because they are not always distinct or static. Familiarity with security roles will \nhelp in establishing a communications and support structure within an organization. This struc-\nture will enable the deployment and enforcement of the security policy. 
(The following six roles are presented in the logical order in which they appear in a secured environment.)
Senior manager
The organizational owner (senior manager) role is assigned to the person who is ultimately responsible for the security maintained by an organization and who should be most concerned about the protection of its assets. The senior manager must sign off on all policy issues. In fact, all activities must be approved by and signed off on by the senior manager before they can be carried out. There is no effective security policy if the senior manager does not authorize and support it. The senior manager's endorsement of the security policy indicates the accepted ownership of the implemented security within the organization. The senior manager is the person who will be held liable for the overall success or failure of a security solution and is responsible for exercising due care and due diligence in establishing security for an organization. Even though senior managers are ultimately responsible for security, they rarely implement security solutions. In most cases, that responsibility is delegated to security professionals within the organization.
Security professional
The security professional, information security officer, InfoSec officer, or CIRT (Computer Incident Response Team) role is assigned to a trained and experienced network, systems, and security engineer who is responsible for following the directives mandated by senior management. The security professional has the functional responsibility for security, including writing the security policy and implementing it. The role of security professional can be labeled as an IS/IT function role. The security professional role is often filled by a team that is responsible for designing and implementing security solutions based on the approved security policy.
Security professionals are not decision makers; they are implementers. All decisions must \nbe left to the senior manager.\nData owner\nThe data owner role is assigned to the person who is responsible for classi-\nfying information for placement and protection within the security solution. The data \nowner is typically a high-level manager who is ultimately responsible for data protection. \nHowever, the data owner usually delegates the responsibility of the actual data-manage-\nment tasks to a data custodian.\nData custodian\nThe data custodian role is assigned to the user who is responsible for the tasks \nof implementing the prescribed protection defined by the security policy and upper manage-\nment. The data custodian performs all activities necessary to provide adequate protection for \nthe CIA of data and to fulfill the requirements and responsibilities delegated from upper man-\nagement. These activities can include performing and testing backups, validating data integrity, \ndeploying security solutions, and managing data storage based on classification.\nUser\nThe user (end user or operator) role is assigned to any person who has access to the \nsecured system. A user’s access is tied to their work tasks and is limited so they have only enough \naccess to perform the tasks necessary for their job position (principle of least privilege). Users \nare responsible for understanding and upholding the security policy of an organization by fol-\nlowing prescribed operational procedures and operating within defined security parameters.\nAuditor\nAnother role is that of an auditor. An auditor is responsible for testing and verifying \nthat the security policy is properly implemented and the derived security solutions are adequate. \nThe auditor role may be assigned to a security professional or a trained user. The auditor pro-\nduces compliance and effectiveness reports that are reviewed by the senior manager. 
Issues discovered through these reports are transformed into new directives assigned by the senior manager to security professionals or data custodians. However, the auditor is listed as the last or final role since the auditor needs users or operators to be working in an environment as the source of activity to audit and monitor.
All of these roles serve an important function within a secured environment. They are useful for identifying liability and responsibility as well as for identifying the hierarchical management and delegation scheme.
Security Management Planning
Security management planning ensures proper creation, implementation, and enforcement of a security policy. The most effective way to tackle security management planning is using a top-down approach. Upper, or senior, management is responsible for initiating and defining policies for the organization. Security policies provide direction for the lower levels of the organization's hierarchy. It is the responsibility of middle management to flesh out the security policy into standards, baselines, guidelines, and procedures. The operational managers or security professionals must then implement the configurations prescribed in the security management documentation. Finally, the end users must comply with all the security policies of the organization.
The opposite of the top-down approach is the bottom-up approach. In a bottom-up approach environment, the IT staff makes security decisions directly without input from senior management. The bottom-up approach is rarely utilized in organizations and is considered problematic in the IT industry.
Security management is a responsibility of upper management, not of the IT staff, and is considered a business operations issue rather than an IT administration issue.
The team or department responsible for security within an organization should be autonomous from all other departments. The InfoSec team should be led by a designated chief security officer (CSO) who must report directly to senior management. Placing the autonomy of the CSO and the CSO's team outside of the typical hierarchical structure of an organization can improve security management across the entire organization. It also helps to avoid cross-department and internal political issues.
Elements of security management planning include defining security roles; prescribing how security will be managed, who will be responsible for security, and how security will be tested for effectiveness; developing security policies; performing risk analysis; and requiring security education for employees. These responsibilities are guided through the development of management plans.
The best-laid security plan is useless without one key factor: approval by senior management. Without senior management's approval of and commitment to the security policy, the policy will not succeed. It is the responsibility of the policy development team to educate senior management sufficiently so it understands the risks, liabilities, and exposures that remain even after security measures prescribed in the policy are deployed. Developing and implementing a security policy is evidence of due care and due diligence on the part of senior management. If a company does not practice due care and due diligence, managers can be held liable for negligence and held accountable for both asset and financial losses.
A security management planning team should develop three types of plans:
Strategic plan
A strategic plan is a long-term plan that is fairly stable. It defines the organization's goals, mission, and objectives. It's useful for about five years if it is maintained and updated annually. The strategic plan also serves as the planning horizon.
Long-term goals and visions for the future are discussed in a strategic plan. A strategic plan should include a risk assessment.
Tactical plan
The tactical plan is a midterm plan developed to provide more details on accomplishing the goals set forth in the strategic plan. A tactical plan is typically useful for about a year and often prescribes and schedules the tasks necessary to accomplish organizational goals. Some examples of tactical plans include project plans, acquisition plans, hiring plans, budget plans, maintenance plans, support plans, and system development plans.
Operational plan
An operational plan is a short-term and highly detailed plan based on the strategic and tactical plans. It is valid or useful only for a short time. Operational plans must be updated often (such as monthly or quarterly) to retain compliance with tactical plans. Operational plans are detailed plans that spell out how to accomplish the various goals of the organization. They include resource allotments, budgetary requirements, staffing assignments, scheduling, and step-by-step implementation procedures. Operational plans include details on how the implementation processes are in compliance with the organization's security policy. Examples of operational plans include training plans, system deployment plans, and product design plans.
Security is a continuous process. Thus, the activity of security management planning may have a definitive initiation point, but its tasks and work are never fully accomplished or complete. Effective security plans focus attention on specific and achievable objectives, anticipate change and potential problems, and serve as a basis for decision making for the entire organization. Security documentation should be concrete, well defined, and clearly stated.
For a security plan to be effective, it must be developed, maintained, and actually used.
Policies, Standards, Baselines, Guidelines, and Procedures
For most organizations, maintaining security is an essential part of ongoing business. If their security were seriously compromised, many organizations would fail. To reduce the likelihood of a security failure, the process of implementing security has been somewhat formalized. This formalization has greatly reduced the chaos and complexity of designing and implementing security solutions for IT infrastructures. The formalization of security solutions takes the form of a hierarchical organization of documentation. Each level focuses on a specific type or category of information and issues.
Security Policies
The top tier of the formalization is known as a security policy. A security policy is a document that defines the scope of security needed by the organization and discusses the assets that need protection and the extent to which security solutions should go to provide the necessary protection. The security policy is an overview or generalization of an organization's security needs. It defines the main security objectives and outlines the security framework of an organization. The security policy also identifies the major functional areas of data processing and clarifies and defines all relevant terminology. It should clearly define why security is important and what assets are valuable. It is a strategic plan for implementing security. It should broadly outline the security goals and practices that should be employed to protect the organization's vital interests. The document discusses the importance of security to every aspect of daily business operation and the importance of the support of the senior staff for the implementation of security.
The \nsecurity policy is used to assign responsibilities, define roles, specify audit requirements, outline \nenforcement processes, indicate compliance requirements, and define acceptable risk levels. \nThis document is often used as the proof that senior management has exercised due care in pro-\ntecting itself against intrusion, attack, and disaster. Security policies are compulsory.\nMany organizations employ several types of security policies to define or outline their overall \nsecurity strategy. An organizational security policy focuses on issues relevant to every aspect of \nan organization. An issue-specific security policy focuses on a specific network service, depart-\nment, function, or other aspect that is distinct from the organization as a whole. A system-\nspecific security policy focuses on individual systems or types of systems and prescribes approved \nhardware and software, outlines methods for locking down a system, and even mandates firewall \nor other specific security controls.\nIn addition to these focused types of security policies, there are three overall categories of \nsecurity policies: regulatory, advisory, and informative. A regulatory policy is required when-\never industry or legal standards are applicable to your organization. This policy discusses the \nregulations that must be followed and outlines the procedures that should be used to elicit com-\npliance. An advisory policy discusses behaviors and activities that are acceptable and defines \nconsequences of violations. It explains the senior management’s desires for security and com-\npliance within an organization. Most policies are advisory. An informative policy is designed to \nprovide information or knowledge about a specific subject, such as company goals, mission \nstatements, or how the organization interacts with partners and customers. 
An informative policy provides support, research, or background information relevant to the specific elements of the overall policy. An informative policy is nonenforceable.
From the security policies flow many other documents or sub-elements necessary for a complete security solution. Policies are broad overviews, whereas standards, baselines, guidelines, and procedures include more specific, detailed information on the actual security solution. Standards are the next level below security policies.
Security Policies and Individuals
As a rule of thumb, security policies (as well as standards, guidelines, and procedures) should not address specific individuals. Instead of assigning tasks and responsibilities to a person, the policy should define tasks and responsibilities to fit a role. That role is a function of administrative control or personnel management. Thus, a security policy does not define who is to do what but rather defines what must be done by the various roles within the security infrastructure. Then these defined security roles are assigned to individuals as a job description or an assigned work task.
Security Standards, Baselines, and Guidelines
Standards define compulsory requirements for the homogenous use of hardware, software, technology, and security controls. They provide a course of action by which technology and procedures are uniformly implemented throughout an organization. Standards are tactical documents that define steps or methods to accomplish the goals and overall direction defined by security policies.
At the next level are baselines. A baseline defines a minimum level of security that every system throughout the organization must meet. All systems not complying with the baseline should be taken out of production until they can be brought up to the baseline.
The baseline establishes a common foundational secure state upon which all additional and more stringent security measures can be built. Baselines are usually system specific and often refer to an industry or government standard, like the Trusted Computer System Evaluation Criteria (TCSEC) or Information Technology Security Evaluation Criteria (ITSEC). For example, most military organizations require that all systems support the TCSEC C2 security level at a minimum.
Guidelines are the next element of the formalized security policy structure. A guideline offers recommendations on how standards and baselines are implemented and serves as an operational guide for both security professionals and users. Guidelines are flexible so they can be customized for each unique system or condition and can be used in the creation of new procedures. They state which security mechanisms should be deployed instead of prescribing a specific product or control and detailing configuration settings. They outline methodologies, include suggested actions, and are not compulsory.
Security Procedures
Procedures are the final element of the formalized security policy structure. A procedure is a detailed, step-by-step how-to document that describes the exact actions necessary to implement a specific security mechanism, control, or solution. A procedure could discuss the entire system deployment operation or focus on a single product or aspect, such as deploying a firewall or updating virus definitions. In most cases, procedures are system and software specific. They must be updated as the hardware and software of a system evolve. The purpose of a procedure is to ensure the integrity of business processes. If everything is accomplished by following a detailed procedure, then all activities should be in compliance with policies, standards, and guidelines. Procedures help ensure standardization of security across all systems.
Acceptable Use Policy
An acceptable use policy is a commonly produced document that exists as part of the overall security documentation infrastructure. The acceptable use policy is specifically designed to assign security roles within the organization as well as ensure the responsibilities tied to those roles. This policy defines a level of acceptable performance and expectation of behavior and activity. Failure to comply with the policy may result in job action warnings, penalties, or termination.
All too often, policies, standards, baselines, guidelines, and procedures are developed only as an afterthought at the urging of a consultant or auditor. If these documents are not used and updated, the administration of a secured environment will be unable to use them as guides. And without the planning, design, structure, and oversight provided by these documents, no environment will remain secure or represent proper diligent due care.
It is also common practice to develop a single document containing aspects of all of these elements. This should be avoided. Each of these structures must exist as a separate entity because each performs a different specialized function. At the top of the formalization structure (i.e., security policies), there are fewer documents because they contain general broad discussions of overview and goals.
There are more documents further down the formalization structure (i.e., \nguidelines and procedures) because they contain details specific to a limited number of systems, \nnetworks, divisions, and areas.\nKeeping these documents as separate entities provides several benefits:\n\u0002\nNot all users need to know the security standards, baselines, guidelines, and procedures for \nall security classification levels.\n\u0002\nWhen changes occur, it is easier to update and redistribute only the affected material rather \nthan updating a monolithic policy and redistributing it throughout the organization.\nRisk Management\nSecurity is aimed at preventing loss or disclosure of data while sustaining authorized access. The \npossibility that something could happen to damage, destroy, or disclose data is known as risk. \nManaging risk is therefore an element of sustaining a secure environment. Risk management is \na detailed process of identifying factors that could damage or disclose data, evaluating those fac-\ntors in light of data value and countermeasure cost, and implementing cost-effective solutions \nfor mitigating or reducing risk.\nThe primary goal of risk management is to reduce risk to an acceptable level. What that level \nactually is depends upon the organization, the value of its assets, and the size of its budget. It \nis impossible to design and deploy a totally risk-free environment; however, significant risk \nreduction is possible, often with little effort. Risks to an IT infrastructure are not all computer \nbased. In fact, many risks come from non-computer sources. It is important to consider all pos-\nsible risks when performing risk evaluation for an organization.\nThe process by which the primary goal of risk management is achieved is known as risk anal-\nysis. 
It includes analyzing an environment for risks, evaluating each risk as to its likelihood of occurring and the cost of the damage it would cause if it did occur, assessing the cost of various countermeasures for each risk, and creating a cost/benefit report for safeguards to present to upper management. In addition to these risk-focused activities, risk management also requires evaluation, assessment, and the assignment of value for all assets within the organization. Without proper asset valuations, it is not possible to prioritize and compare risks with possible losses.
Risk Terminology
Risk management employs a vast terminology that must be clearly understood, especially for the CISSP exam. This section defines and discusses all of the important risk-related terminology.
Asset
An asset is anything within an environment that should be protected. It can be a computer file, a network service, a system resource, a process, a program, a product, an IT infrastructure, a database, a hardware device, software, facilities, and so on. If an organization places any value on an item under its control and deems that item important enough to protect, it is labeled an asset for the purposes of risk management and analysis. The loss or disclosure of an asset could result in an overall security compromise, loss of productivity, reduction in profits, additional expenditures, discontinuation of the organization, and numerous intangible consequences.
Asset valuation
Asset valuation is a dollar value assigned to an asset based on actual cost and nonmonetary expenses. These can include costs to develop, maintain, administer, advertise, support, repair, and replace an asset; they can also include more elusive values, such as public confidence, industry support, productivity enhancement, knowledge equity, and ownership benefits.
Asset valuation is discussed in detail later in this chapter.\nThreats\nAny potential occurrence that may cause an undesirable or unwanted outcome for an \norganization or for a specific asset is a threat. Threats are any action or inaction that could cause \ndamage, destruction, alteration, loss, or disclosure of assets or that could block access to or pre-\nvent maintenance of assets. Threats can be large or small and result in large or small conse-\nquences. They may be intentional or accidental. They may originate from people, organizations, \nhardware, networks, structures, or nature. Threat agents intentionally exploit vulnerabilities. \nThreat agents are usually people, but they could also be programs, hardware, or systems. Threat \nevents are accidental exploitations of vulnerabilities. Threat events include fire, earthquake, \nflood, system failure, human error (due to lack of training or ignorance), and power outages.\nVulnerability\nThe absence or the weakness of a safeguard or countermeasure is called a vul-\nnerability. In other words, a vulnerability is a flaw, loophole, oversight, error, limitation, frailty, \nor susceptibility in the IT infrastructure or any other aspect of an organization. If a vulnerability \nis exploited, loss or damage to assets can occur.\nExposure\nExposure is being susceptible to asset loss due to a threat; there is the possibility that \na vulnerability can or will be exploited by a threat agent or event. Exposure doesn’t mean that \na realized threat (an event that results in loss) is actually occurring (the exposure to a realized \nthreat is called experienced exposure). It just means that if there is a vulnerability and a threat \nthat can exploit it, there is the possibility that a threat event, or potential exposure, can occur.\nRisk\nRisk is the possibility or likelihood that a threat will exploit a vulnerability to cause harm \nto an asset. It is an assessment of probability, possibility, or chance. 
The more likely it is that a threat event will occur, the greater the risk. Every instance of exposure is a risk. When written as a formula, risk can be defined as risk = threat * vulnerability. Thus, reducing either the threat agent or the vulnerability directly results in a reduction in risk.
When a risk is realized, a threat agent or a threat event has taken advantage of a vulnerability and caused harm to or disclosure of one or more assets. The whole purpose of security is to prevent risks from becoming realized by removing vulnerabilities and blocking threat agents and threat events from jeopardizing assets. As a risk management tool, security is the implementation of safeguards.
Safeguards
A safeguard, or countermeasure, is anything that removes a vulnerability or protects against one or more specific threats. A safeguard can be installing a software patch, making a configuration change, hiring security guards, altering the infrastructure, modifying processes, improving the security policy, training personnel more effectively, electrifying a perimeter fence, installing lights, and so on. It is any action or product that reduces risk through the elimination or lessening of a threat or a vulnerability anywhere within an organization. Safeguards are the only means by which risk is mitigated or removed.
Attack
An attack is the exploitation of a vulnerability by a threat agent. In other words, an attack is any intentional attempt to exploit a vulnerability of an organization's security infrastructure to cause damage, loss, or disclosure of assets. An attack can also be viewed as any violation of or failure to adhere to an organization's security policy.
Breach
A breach is the occurrence of a security mechanism being bypassed or thwarted by a threat agent. When a breach is combined with an attack, a penetration, or intrusion, can result.
\nA penetration is the condition in which a threat agent has gained access to an organization’s infra-\nstructure through the circumvention of security controls and is able to directly imperil assets.\nThe elements asset, threat, vulnerability, exposure, risk, and safeguard are related, as shown \nin Figure 6.1. Threats exploit vulnerabilities, which results in exposure. Exposure is risk, and \nrisk is mitigated by safeguards. Safeguards protect assets that are endangered by threats.\nF I G U R E\n6 . 1\nThe elements of risk\nThreats\nexploit\nVulnerabilities\nwhich results in\nExposure\nwhich is\nRisk\nwhich is mitigated by\nSafeguards\nwhich protect\nAssets\nwhich are\nendangered by\n" }, { "page_number": 233, "text": "188\nChapter 6\n\u0002 Asset Value, Policies, and Roles\nRisk Assessment Methodologies\nRisk management and analysis is primarily an exercise for upper management. It is their responsi-\nbility to initiate and support risk analysis and assessment by defining the scope and purpose of the \nendeavor. The actual processes of performing risk analysis are often delegated to security profes-\nsionals or an evaluation team. However, all risk assessments, results, decisions, and outcomes must \nbe understood and approved by upper management as an element in providing prudent due care.\nAll IT systems have risk. There is no way to eliminate 100 percent of all risks. Instead, upper \nmanagement must decide which risks are acceptable and which are not. Determining which \nrisks are acceptable requires detailed and complex asset and risk assessments.\nRisk Analysis\nRisk analysis is performed to provide upper management with the details necessary to decide \nwhich risks should be mitigated, which should be transferred, and which should be accepted. \nThe result is a cost/benefit comparison between the expected cost of asset loss and the cost of \ndeploying safeguards against threats and vulnerabilities. 
Risk analysis identifies risks, quantifies \nthe impact of threats, and aids in budgeting for security. Risk analysis helps to integrate the \nneeds and objectives of the security policy with the organization’s business goals and intentions.\nThe first step in risk analysis is to appraise the value of an organization’s assets. If an asset has \nno value, then there is no need to provide protection for it. A primary goal of risk analysis is to \nensure that only cost-effective safeguards are deployed. It makes no sense to spend $100,000 pro-\ntecting an asset that is worth only $1,000. The value of an asset directly affects and guides the level \nof safeguards and security deployed to protect it. As a rule, the annual costs of safeguards should \nnot exceed the expected annual cost of asset loss.\nAsset Valuation\nWhen evaluating the cost of an asset, there are many aspects to consider. The goal of asset eval-\nuation is to assign a specific dollar value to it. Determining an exact value is often difficult if not \nimpossible, but nevertheless, a specific value must be established. (Note that the discussion of \nqualitative versus quantitative risk analysis in the next section may clarify this issue.) Improp-\nerly assigning value to assets can result in failing to properly protect an asset or implementing \nfinancially infeasible safeguards. 
The following list includes some of the issues that contribute to the valuation of assets:

- Purchase cost
- Development cost
- Administrative or management cost
- Maintenance or upkeep cost
- Cost in acquiring asset
- Cost to protect or sustain asset
- Value to owners and users
- Value to competitors
- Intellectual property or equity value
- Market valuation (sustainable price)
- Replacement cost
- Productivity enhancement or degradation
- Operational costs of asset presence and loss
- Liability of asset loss
- Usefulness

Assigning or determining the value of assets to an organization can fulfill numerous requirements. It serves as the foundation for performing a cost/benefit analysis of asset protection through safeguard deployment. It serves as a means for selecting or evaluating safeguards and countermeasures. It provides values for insurance purposes and establishes an overall net worth or net value for the organization. It helps senior management understand exactly what is at risk within the organization. Understanding the value of assets also helps to prevent negligence of due care and encourages compliance with legal requirements, industry regulations, and internal security policies.

After asset valuation, threats must be identified and examined. This involves creating an exhaustive list of all possible threats for the organization and its IT infrastructure. The list should include threat agents as well as threat events. It is important to keep in mind that threats can come from anywhere. Threats to IT are not limited to IT sources.
When compiling a list of threats, be sure to consider the following:

- Viruses
- Hackers
- Processing errors, buffer overflows
- Coding/programming errors
- Cascade errors and dependency faults
- User errors
- Personnel privilege abuse
- Intruders (physical and logical)
- Criminal activities by authorized users
- Natural disasters (earthquakes, floods, fire, volcanoes, hurricanes, tornadoes, tsunamis, etc.)
- Temperature extremes
- Environmental factors (presence of gases, liquids, organisms, etc.)
- Movement (vibrations, jarring, etc.)
- Physical damage (crushing, projectiles, cable severing, etc.)
- Energy anomalies (static, EM pulses, radio frequencies [RFs], power loss, power surges, etc.)
- Equipment failure
- Intentional attacks
- Misuse of data, resources, or services
- Loss of data
- Physical theft
- Reorganization
- Changes or compromises to data classification or security policies
- Information warfare
- Social engineering
- Authorized user illness or epidemics
- Government, political, or military intrusions or restrictions
- Bankruptcy or alteration/interruption of business activity

In most cases, a team rather than a single individual should perform risk assessment and analysis. Also, the team members should be from various departments within the organization. It is not usually a requirement that all team members be security professionals or even network/system administrators. The diversity of the team, based on the demographics of the organization, will help to exhaustively identify and address all possible threats and risks.

Once a list of threats is developed, each threat and its related risk must be individually evaluated. There are two risk assessment methodologies: quantitative and qualitative. Quantitative risk analysis assigns real dollar figures to the loss of an asset.
Qualitative risk analysis assigns \nsubjective and intangible values to the loss of an asset. Both methods are necessary for a com-\nplete risk analysis.\nQuantitative Risk Analysis\nThe quantitative method results in concrete probability percentages. However, a purely quan-\ntitative analysis is not possible; not all elements and aspects of the analysis can be quantified \nbecause some are qualitative, subjective, or intangible. The process of quantitative risk analysis \nstarts with asset valuation and threat identification. Next, you estimate the potential and fre-\nquency of each risk. This information is then used to calculate various cost functions that are \nused to evaluate safeguards.\nThe six major steps or phases in quantitative risk analysis are as follows:\n1.\nInventory assets and assign a value (AV).\n2.\nResearch each asset and produce a list of all possible threats of each individual asset. For \neach listed threat, calculate the exposure factor (EF) and single loss expectancy (SLE).\n3.\nPerform a threat analysis to calculate the likelihood of each threat taking place within a sin-\ngle year, that is, the annualized rate of occurrence (ARO).\n4.\nDerive the overall loss potential per threat by calculating the annualized loss expect-\nancy (ALE).\n5.\nResearch countermeasures for each threat, and then calculate the changes to ARO and ALE \nbased on an applied countermeasure.\n6.\nPerform a cost/benefit analysis of each countermeasure for each threat for each asset. Select \nthe most appropriate response to each threat.\nCost Functions\nSome of the cost functions associated with quantitative risk analysis include exposure factor, \nsingle loss expectancy, annualized rate of occurrence, and annualized loss expectancy:\nExposure factor\nThe exposure factor (EF) represents the percentage of loss that an organiza-\ntion would experience if a specific asset were violated by a realized risk. The EF can also be \ncalled the loss potential. 
In most cases, a realized risk does not result in the total loss of an asset. The EF simply indicates the expected overall asset value loss due to a single realized risk. The EF is usually small for assets that are easily replaceable, such as hardware. It can be very large for assets that are irreplaceable or proprietary, such as product designs or a database of customers. The EF is expressed as a percentage.

Single loss expectancy  The EF is needed to calculate the single loss expectancy (SLE). The SLE is the cost associated with a single realized risk against a specific asset. It indicates the exact amount of loss an organization would experience if an asset were harmed by a specific threat. The SLE is calculated using the formula SLE = asset value (AV) * exposure factor (EF) (or SLE = AV * EF). The SLE is expressed in a dollar value. For example, if an asset is valued at $200,000 and it has an EF of 45% for a specific threat, then the SLE of the threat for that asset is $90,000.

Annualized rate of occurrence  The annualized rate of occurrence (ARO) is the expected frequency with which a specific threat or risk will occur (i.e., become realized) within a single year. The ARO can range from a value of 0.0 (zero), indicating that the threat or risk will never be realized, to a very large number, indicating that the threat or risk occurs often. Calculating the ARO can be complicated. It can be derived from historical records, statistical analysis, or guesswork. ARO calculation is also known as probability determination. The ARO for some threats or risks is calculated by multiplying the likelihood of a single occurrence by the number of users who could initiate the threat.
For example, the ARO of an earthquake in Tulsa may be .00001, whereas the ARO of an e-mail virus in an office in Tulsa may be 10,000,000.

Annualized loss expectancy  The annualized loss expectancy (ALE) is the possible yearly cost of all instances of a specific realized threat against a specific asset. The ALE is calculated using the formula ALE = single loss expectancy (SLE) * annualized rate of occurrence (ARO) (or ALE = SLE * ARO). For example, if the SLE of an asset is $90,000 and the ARO for a specific threat (such as total power loss) is .5, then the ALE is $45,000. On the other hand, if the ARO for a specific threat (such as a compromised user account) were 15, then the ALE would be $1,350,000.

Table 6.1 illustrates the various formulas associated with quantitative risk analysis.

Table 6.1  Quantitative Risk Analysis Formulas

Concept                                  Formula
Exposure factor (EF)                     %
Single loss expectancy (SLE)             SLE = AV * EF
Annualized rate of occurrence (ARO)      # / year
Annualized loss expectancy (ALE)         ALE = SLE * ARO, or ALE = AV * EF * ARO
Annual cost of the safeguard (ACS)       $ / year
Value or benefit of a safeguard          [ALE1 – ALE2] – ACS

Threat/Risk Calculations

The task of calculating EF, SLE, ARO, and ALE for every asset and every threat/risk is a daunting one. Fortunately, there are quantitative risk assessment tools that simplify and automate much of this process. These tools are used to produce an asset inventory with valuations and then, using predefined AROs along with some customizing options (i.e., industry, geography, IT components, etc.), to produce risk analysis reports.

Calculating Annualized Loss Expectancy (ALE)

In addition to determining the annual cost of the safeguard, you must calculate the ALE for the asset if the safeguard is implemented. This requires a new EF and ARO specific to the safeguard.
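The formulas in Table 6.1 can be worked through directly. As a minimal sketch, the following uses the figures from the examples in this section (a $200,000 asset with a 45% EF, and AROs of .5 and 15); the function names are illustrative, not part of any standard tool:

```python
def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE = AV * EF, where EF is expressed as a fraction (45% -> 0.45)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = SLE * ARO."""
    return sle * aro

# Asset from the text: valued at $200,000 with an EF of 45% for a given threat.
sle = single_loss_expectancy(200_000, 0.45)   # $90,000

# Total power loss: ARO of .5 (expected once every two years).
ale_power_loss = annualized_loss_expectancy(sle, 0.5)   # $45,000

# Compromised user account: ARO of 15 (expected fifteen times per year).
ale_account = annualized_loss_expectancy(sle, 15)       # $1,350,000

print(sle, ale_power_loss, ale_account)
```

The same chain can be collapsed into ALE = AV * EF * ARO, per the last line of the ALE row in Table 6.1.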
\nAs mentioned earlier, the annual costs of safeguards should not exceed the expected annual cost \nof asset loss. To make the determination of whether the safeguard is financially equitable, use \nthe following formula:\nALE before safeguard – ALE after implementing the safeguard – annual cost of safeguard = \nvalue of the safeguard to the company.\nIf the result is negative, the safeguard is not a financially responsible choice. If the result is pos-\nitive, then that value is the annual savings your organization can reap by deploying the safeguard.\nThe annual savings or loss from a safeguard should not be the only element considered when \nevaluating safeguards. The issues of legal responsibility and prudent due care should also be \nconsidered. In some cases, it makes more sense to lose money in the deployment of a safeguard \nthan to risk legal liability in the event of an asset disclosure or loss.\nCalculating Safeguard Costs\nFor each specific risk, one or more safeguards or countermeasures must be evaluated on a cost/\nbenefit basis. To perform this evaluation, you must first compile a list of safeguards for each \nthreat. Then each safeguard must be assigned a deployment value. In fact, the deployment value \nor the cost of the safeguard must be measured against the value of the protected asset. The value \nof the protected asset therefore determines the maximum expenditures for protection mecha-\nnisms. Security should be cost effective, and thus it is not prudent to spend more (in terms of \ncash or resources) protecting an asset than its value to the organization. 
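The safeguard-value formula above is a straightforward subtraction; this sketch applies it with hypothetical dollar figures (the $45,000 / $5,000 / $12,000 values and the function name are illustrative assumptions, not taken from the text):

```python
def safeguard_value(ale_before: float, ale_after: float, annual_cost: float) -> float:
    """Value of a safeguard to the company:
    ALE before safeguard - ALE after implementing the safeguard - annual cost of safeguard.
    Positive -> annual savings from deploying it; negative -> not a financially
    responsible choice."""
    return ale_before - ale_after - annual_cost

# Hypothetical: a threat with an ALE of $45,000 is reduced to $5,000 by a
# safeguard that costs $12,000 per year to own and operate.
print(safeguard_value(45_000, 5_000, 12_000))   # 28000 -> deploy

# If the same safeguard instead cost $50,000 per year, the result is negative.
print(safeguard_value(45_000, 5_000, 50_000))   # -10000 -> not cost-effective
```

As the text notes, a negative result does not automatically rule a safeguard out; legal responsibility and prudent due care may still justify the expense.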
If the cost of the countermeasure is greater than the value of the asset (i.e., the cost of the risk), then the risk should be accepted.

There are numerous factors involved in calculating the value of a countermeasure:

- Cost of purchase, development, and licensing
- Cost of implementation and customization
- Cost of annual operation, maintenance, administration, and so on
- Cost of annual repairs and upgrades
- Productivity improvement or loss
- Changes to environment
- Cost of testing and evaluation

To perform the cost/benefit analysis of a safeguard, you must first calculate the following:

- The pre-countermeasure ALE for an asset-and-threat pairing
- The post-countermeasure ALE for an asset-and-threat pairing
- The annual cost of the safeguard (ACS)

Here is the cost/benefit formula:

[pre-countermeasure ALE – post-countermeasure ALE] – ACS

The countermeasure with the greatest result from this cost/benefit formula makes the most economic sense to deploy against the specific asset-and-threat pairing.

Qualitative Risk Analysis

Qualitative risk analysis is more scenario based than it is calculator based. Rather than assigning exact dollar figures to possible losses, you rank threats on a scale to evaluate their risks, costs, and effects. The process of performing qualitative risk analysis involves judgment, intuition, and experience. There are many actual techniques and methods used to perform qualitative risk analysis:

- Brainstorming
- Delphi technique
- Storyboarding
- Focus groups
- Surveys
- Questionnaires
- Checklists
- One-on-one meetings
- Interviews

Determining which mechanism to employ is based on the culture of the organization and the types of risks and assets involved.
It is common for several methods to be employed simultaneously and their results compared and contrasted in the final risk analysis report to upper management.

Scenarios

The basic process for all of these mechanisms involves the creation of scenarios. A scenario is a written description of a single major threat. The description focuses on how a threat would be instigated and what effects it could have on the organization, the IT infrastructure, and specific assets. Generally, the scenarios are limited to one page of text to keep them manageable. For each scenario, one or more safeguards that would completely or partially protect against the major threat discussed in the scenario are described. The analysis participants then assign a threat level to the scenario, a loss potential, and the advantages of each safeguard. These assignments can be grossly simple, such as using high, medium, and low or a basic number scale of 1 to 10, or they can be detailed essay responses. The responses from all participants are then compiled into a single report that is presented to upper management.

The usefulness and validity of a qualitative risk analysis improve as the number and diversity of the participants in the evaluation increase. Whenever possible, include one or more persons from each level of the organizational hierarchy, from upper management to end user. It is also important to include a cross section from each major department, division, office, or branch.

Delphi Technique

The Delphi technique is probably the only mechanism on this list that is not immediately recognizable and understood. The Delphi technique is simply an anonymous feedback-and-response process. Its primary purpose is to elicit honest and uninfluenced responses from all participants. The participants are usually gathered into a single meeting room.
To each request for feedback, each participant writes down their response on paper anonymously. The results are compiled and presented to the group for evaluation. The process is repeated until a consensus is reached.

Both the quantitative and qualitative risk analysis mechanisms offer useful results. However, each technique involves a unique method of evaluating the same set of assets and risks. Prudent due care requires that both methods be employed. The benefits and disadvantages of these two systems are displayed in Table 6.2.

Table 6.2  Comparison of Quantitative and Qualitative Risk Analysis

Characteristic                            Qualitative    Quantitative
Employs complex functions                 No             Yes
Uses cost/benefit analysis                No             Yes
Results in specific values                No             Yes
Requires guesswork                        Yes            No
Supports automation                       No             Yes
Involves a high volume of information     No             Yes
Is objective                              No             Yes
Uses opinions                             Yes            No
Requires significant time and effort      No             Yes
Offers useful and meaningful results      Yes            Yes

Handling Risk

The results of risk analysis are many:

- Complete and detailed valuation of all assets
- An exhaustive list of all threats and risks, rate of occurrence, and extent of loss if realized
- A list of threat-specific safeguards and countermeasures that identifies their effectiveness and ALE
- A cost/benefit analysis of each safeguard

This information is essential for management to make informed, educated, and intelligent decisions about safeguard implementation and security policy alterations.

Once the risk analysis is complete, management must address each specific risk. There are four possible responses to risk:

- Reduce
- Assign
- Accept
- Reject

Reducing risk, or risk mitigation, is the implementation of safeguards and countermeasures to eliminate vulnerabilities or block threats.
Picking the most cost-effective or beneficial coun-\ntermeasure is part of risk management, but it is not an element of risk assessment. In fact, \ncountermeasure selection is a post-risk assessment or risk analysis activity.\nAssigning risk, or transferring risk, is the placement of the cost of loss a risk represents onto \nanother entity or organization. Purchasing insurance and outsourcing are common forms of \nassigning or transferring risk.\nAccepting risk is the valuation by management of the cost/benefit analysis of possible safe-\nguards and the determination that the cost of the countermeasure greatly outweighs the possible \ncost of loss due to a risk. It also means that management has agreed to accept the consequences \nand the loss if the risk is realized. In most cases, accepting risk requires a clearly written state-\nment that indicates why a safeguard was not implemented, who is responsible for the decision, \nand who will be responsible for the loss if the risk is realized, usually in the form of a “sign-off \nletter.” An organization’s decision to accept risk is based on its risk tolerance. Risk tolerance is \nthe ability of an organization to absorb the losses associated with realized risks.\nA final but unacceptable possible response to risk is to reject risk or ignore risk. Denying that \na risk exists and hoping that by ignoring a risk it will never be realized are not valid prudent due \ncare responses to risk.\nOnce countermeasures are implemented, the risk that remains is known as residual risk. \nResidual risk comprises any threats to specific assets against which upper management chooses \nnot to implement a safeguard. In other words, residual risk is the risk that management has cho-\nsen to accept rather than mitigate. 
In most cases, the presence of residual risk indicates that the cost/benefit analysis showed that the available safeguards were not cost-effective deterrents.

Total risk is the amount of risk an organization would face if no safeguards were implemented. A formula for total risk is threats * vulnerabilities * asset value = total risk. The difference between total risk and residual risk is known as the controls gap. The controls gap is the amount of risk that is reduced by implementing safeguards. A formula for residual risk is total risk – controls gap = residual risk.

Countermeasure Selection

Selecting a countermeasure within the realm of risk management does rely heavily on the cost/benefit analysis results. However, there are several other factors that you should consider:

- The cost of the countermeasure should be less than the value of the asset.
- The cost of the countermeasure should be less than the benefit of the countermeasure.
- The result of the applied countermeasure should make the cost of an attack greater for the perpetrator than the derived benefit from an attack.
- The countermeasure should provide a solution to a real and identified problem. (Don't install countermeasures just because they are available, are advertised, or sound cool.)
- The benefit of the countermeasure should not be dependent upon its secrecy.
This means \nthat “security through obscurity” is not a viable countermeasure and that any viable coun-\ntermeasure can withstand public disclosure and scrutiny.\n\u0002\nThe benefit of the countermeasure should be testable and verifiable.\n\u0002\nThe countermeasure should provide consistent and uniform protection across all users, sys-\ntems, protocols, and so on.\n\u0002\nThe countermeasure should have few or no dependencies to reduce cascade failures.\n\u0002\nThe countermeasure should require minimal human intervention after initial deployment \nand configuration.\n\u0002\nThe countermeasure should be tamperproof.\n\u0002\nThe countermeasure should have overrides accessible to privileged operators only.\n\u0002\nThe countermeasure should provide fail-safe and/or fail-secure options.\nSecurity Awareness Training\nThe successful implementation of a security solution requires changes in user behavior. These \nchanges primarily consist of alterations in normal work activities to comply with the standards, \nguidelines, and procedures mandated by the security policy. Behavior modification involves \nsome level of learning on the part of the user. There are three commonly recognized learning lev-\nels: awareness, training, and education.\nA prerequisite to actual security training is awareness. The goal of creating awareness is to \nbring security into the forefront and make it a recognized entity for users. Awareness estab-\nlishes a common baseline or foundation of security understanding across the entire organiza-\ntion. Awareness is not exclusively created through a classroom type of exercise but also \nthrough the work environment. 
There are many tools that can be used to create awareness, such as posters, notices, newsletter articles, screen savers, T-shirts, rally speeches by managers, announcements, presentations, mouse pads, office supplies, and memos, as well as the traditional instructor-led training courses. Awareness focuses on key or basic topics and issues related to security that all employees, no matter which position or classification they have, must understand and comprehend.

Awareness is a tool to establish a minimum standard common denominator or foundation of security understanding. All personnel should be fully aware of their security responsibilities and liabilities. They should be trained to know what to do and what not to do.

The issues that users need to be aware of include avoiding waste, fraud, and unauthorized activities. All members of an organization, from senior management to temporary intern, need the same level of awareness. The awareness program in an organization should be tied in with its security policy, incident handling plan, and disaster recovery procedures. For an awareness-building program to be effective, it must be fresh, creative, and updated often. The awareness program should also be tied to an understanding of how the corporate culture will affect and impact security for individuals as well as the organization as a whole. If employees do not see enforcement of security policies and standards, especially at the awareness level, then they may not feel obligated to abide by them.

Training is teaching employees to perform their work tasks and to comply with the security policy. All new employees require some level of training so they will be able to comply with all standards, guidelines, and procedures mandated by the security policy. New users need to know how to use the IT infrastructure, where data is stored, and how and why resources are classified.
\nMany organizations choose to train new employees before they are granted access to the net-\nwork, whereas others will grant new users limited access until their training in their specific job \nposition is complete. Training is an ongoing activity that must be sustained throughout the life-\ntime of the organization for every employee. It is considered an administrative security control.\nAwareness and training are often provided in-house. That means these teaching tools are cre-\nated and deployed by and within the organization itself. However, the next level of knowledge \ndistribution is usually obtained from an external third-party source.\nEducation is a more detailed endeavor in which students/users learn much more than they actu-\nally need to know to perform their work tasks. Education is most often associated with users pur-\nsuing certification or seeking job promotion. It is typically a requirement for personnel seeking \nsecurity professional positions. A security professional requires extensive knowledge of security \nand the local environment for the entire organization and not just their specific work tasks.\nSummary\nWhen planning a security solution, it’s important to consider how humans are the weakest ele-\nment. Regardless of the physical or logical controls deployed, humans can discover ways to \navoid them, circumvent or subvert them, or disable them. Thus, it is important to take users into \naccount when designing and deploying security solutions for your environment. The aspects of \nsecure hiring practices, roles, policies, standards, guidelines, procedures, risk management, \nawareness training, and management planning all contribute to protecting assets. The use of \nthese security structures provides some protection from the threat of humans.\n" }, { "page_number": 243, "text": "198\nChapter 6\n\u0002 Asset Value, Policies, and Roles\nSecure hiring practices require detailed job descriptions. 
Job descriptions are used as a guide for \nselecting candidates and properly evaluating them for a position. Maintaining security through \njob descriptions includes the use of separation of duties, job responsibilities, and job rotation.\nA termination policy is needed to protect an organization and its existing employees. The ter-\nmination procedure should include witnesses, return of company property, disabling network \naccess, an exit interview, and an escort from the property.\nSecurity roles determine who is responsible for the security of an organization’s assets. Those \nassigned the senior management role are ultimately responsible and liable for any asset loss, and \nthey are the ones who define security policy. Security professionals are responsible for imple-\nmenting security policy, and users are responsible for complying with the security policy. The \nperson assigned the data owner role is responsible for classifying information, and a data cus-\ntodian is responsible for maintaining the secure environment and backing up data. An auditor \nis responsible for making sure a secure environment is properly protecting assets.\nA formalized security policy structure consists of policies, standards, baselines, guidelines, \nand procedures. These individual documents are essential elements to the design and implemen-\ntation of security in any environment.\nThe process of identifying, evaluating, and preventing or reducing risks is known as risk \nmanagement. The primary goal of risk management is to reduce risk to an acceptable level. \nDetermining this level depends upon the organization, the value of its assets, and the size of its \nbudget. Although it is impossible to design and deploy a completely risk-free environment, it is \npossible to significantly reduce risk with little effort. 
Risk analysis is the process by which risk \nmanagement is achieved and includes analyzing an environment for risks, evaluating each risk \nas to its likelihood of occurring and the cost of the resulting damage, assessing the cost of var-\nious countermeasures for each risk, and creating a cost/benefit report for safeguards to present \nto upper management.\nTo successfully implement a security solution, user behavior must change. Such changes pri-\nmarily consist of alterations in normal work activities to comply with the standards, guidelines, \nand procedures mandated by the security policy. Behavior modification involves some level of \nlearning on the part of the user. There are three commonly recognized learning levels: aware-\nness, training, and education.\nAn important aspect of security management planning is the proper implementation of a \nsecurity policy. To be effective, the approach to security management must be a top-down \napproach. The responsibility of initiating and defining a security policy lies with upper or senior \nmanagement. Security policies provide direction for the lower levels of the organization’s hier-\narchy. Middle management is responsible for fleshing out the security policy into standards, \nbaselines, guidelines, and procedures. It is the responsibility of the operational managers or \nsecurity professionals to implement the configurations prescribed in the security management \ndocumentation. Finally, the end users’ responsibility is to comply with all security policies of the \norganization.\nSecurity management planning includes defining security roles, developing security policies, \nperforming risk analysis, and requiring security education for employees. These responsibilities \nare guided by the developments of management plans. 
Strategic, tactical, and operational plans should be developed by a security management team.

Exam Essentials

Understand the security implications of hiring new employees.  To properly plan for security, you must have standards in place for job descriptions, job classification, work tasks, job responsibilities, preventing collusion, candidate screening, background checks, security clearances, employment agreements, and nondisclosure agreements. By deploying such mechanisms, you ensure that new hires are aware of the required security standards, thus protecting your organization's assets.

Be able to explain separation of duties.  Separation of duties is the security concept of dividing critical, significant, sensitive work tasks among several individuals. By separating duties in this manner, you ensure that no one person can compromise system security.

Understand the principle of least privilege.  The principle of least privilege states that, in a secured environment, users should be granted the minimum amount of access necessary for them to complete their required work tasks or job responsibilities. By limiting user access only to those items that they need to complete their work tasks, you limit the vulnerability of sensitive information.

Know why job rotation and mandatory vacations are necessary.  Job rotation serves two functions: It provides a type of knowledge redundancy, and moving personnel around reduces the risk of fraud, data modification, theft, sabotage, and misuse of information. Mandatory vacations of one to two weeks are used to audit and verify the work tasks and privileges of employees. This often results in easy detection of abuse, fraud, or negligence.

Be able to explain proper termination policies.  A termination policy defines the procedure for terminating employees.
It should include items such as always having a witness, disabling \nthe employee’s network access, and performing an exit interview. A termination policy should \nalso include escorting the terminated employee off of the premises and requiring the return of \nsecurity tokens and badges and company property.\nUnderstand key security roles.\nThe primary security roles are senior manager, organizational \nowner, upper management, security professional, user, data owner, data custodian, and audi-\ntor. By creating a security role hierarchy, you limit risk overall.\nKnow the elements of a formalized security policy structure.\nTo create a comprehensive \nsecurity plan, you need the following items in place: security policy, standards, baselines, guide-\nlines, and procedures. Such documentation clearly states security requirements and creates due \ndiligence on the part of the responsible parties.\nBe able to define overall risk management.\nThe process of identifying factors that could dam-\nage or disclose data, evaluating those factors in light of data value and countermeasure cost, and \nimplementing cost-effective solutions for mitigating or reducing risk is known as risk manage-\nment. By performing risk management, you lay the foundation for reducing risk overall.\nUnderstand risk analysis and the key elements involved.\nRisk analysis is the process by which \nupper management is provided with details to make decisions about which risks are to be mitigated, \n" }, { "page_number": 245, "text": "200\nChapter 6\n\u0002 Asset Value, Policies, and Roles\nwhich should be transferred, and which should be accepted. To fully evaluate risks and subsequently \ntake the proper precautions, you must analyze the following: assets, asset valuation, threats, vulner-\nability, exposure, risk, realized risk, safeguards, countermeasures, attacks, and breaches.\nKnow how to evaluate threats.\nThreats can originate from numerous sources, including IT, \nhumans, and nature. 
Threat assessment should be performed as a team effort to provide the \nwidest range of perspective. By fully evaluating risks from all angles, you reduce your system’s \nvulnerability.\nUnderstand quantitative risk analysis.\nQuantitative risk analysis focuses on hard values and \npercentages. A complete quantitative analysis is not possible due to intangible aspects of risk. \nThe process involves asset valuation and threat identification and then determining a threat’s \npotential frequency and the resulting damage; the result is a cost/benefit analysis of safeguards.\nBe able to explain the concept of an exposure factor (EF).\nAn exposure factor is an element \nof quantitative risk analysis that represents the percentage of loss that an organization would \nexperience if a specific asset were violated by a realized risk. By calculating exposure factors, \nyou are able to implement a sound risk management policy.\nKnow what single loss expectancy (SLE) is and how to calculate it.\nSLE is an element of \nquantitative risk analysis that represents the cost associated with a single realized risk against \na specific asset. The formula is SLE = asset value (AV) * exposure factor (EF).\nUnderstand annualized rate of occurrence (ARO).\nARO is an element of quantitative risk \nanalysis that represents the expected frequency with which a specific threat or risk will occur \n(i.e., become realized) within a single year. Understanding AROs further enables you to calcu-\nlate the risk and take proper precautions.\nKnow what annualized loss expectancy (ALE) is and how to calculate it.\nALE is an element \nof quantitative risk analysis that represents the possible yearly cost of all instances of a specific \nrealized threat against a specific asset. 
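The EF, SLE, ARO, and ALE concepts above, together with the safeguard evaluation formula covered in this list, can be tied together in a short worked example. The Python sketch below uses purely hypothetical dollar figures and rates; only the formulas themselves (SLE = AV * EF, ALE = SLE * ARO, and ALE before – ALE after – annual safeguard cost = safeguard value) come from the text:

```python
# Hypothetical figures for a single asset/threat pair; the formulas are
# the standard CISSP ones, but every number here is invented.
asset_value = 400_000.0    # AV: value of the asset in dollars
exposure_factor = 0.25     # EF: 25% of the asset's value lost per incident
aro_before = 0.5           # ARO: one incident expected every two years

# Single loss expectancy: SLE = AV * EF
sle = asset_value * exposure_factor        # $100,000 per incident

# Annualized loss expectancy: ALE = SLE * ARO
ale_before = sle * aro_before              # $50,000 per year

# Suppose a safeguard costing $15,000 per year cuts the expected
# frequency to once every four years (ARO = 0.25).
annual_safeguard_cost = 15_000.0
aro_after = 0.25
ale_after = sle * aro_after                # $25,000 per year

# Value of the safeguard to the company:
# ALE before safeguard - ALE after safeguard - annual cost of safeguard
safeguard_value = ale_before - ale_after - annual_safeguard_cost

print(sle, ale_before, ale_after, safeguard_value)
```

A positive result ($10,000 per year in this sketch) suggests the safeguard is cost-justified. Note that the countermeasure works by reducing the ARO, which is the factor a countermeasure most directly affects.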
The formula is ALE = single loss expectancy (SLE) * annualized rate of occurrence (ARO).

Know the formula for safeguard evaluation. In addition to determining the annual cost of a safeguard, you must calculate the ALE for the asset if the safeguard is implemented. To do so, use the formula ALE before safeguard – ALE after implementing the safeguard – annual cost of safeguard = value of the safeguard to the company.

Understand qualitative risk analysis. Qualitative risk analysis is based more on scenarios than calculations. Exact dollar figures are not assigned to possible losses; instead, threats are ranked on a scale to evaluate their risks, costs, and effects. Such an analysis assists those responsible in creating proper risk management policies.

Understand the Delphi technique. The Delphi technique is simply an anonymous feedback-and-response process used to arrive at a consensus. Such a consensus gives the responsible parties the opportunity to properly evaluate risks and implement solutions.

Know the options for handling risk. Reducing risk, or risk mitigation, is the implementation of safeguards and countermeasures. Assigning or transferring a risk places the cost of the loss a risk represents onto another entity or organization. Purchasing insurance is one form of assigning or transferring risk. Accepting risk means management has evaluated the cost/benefit analysis of possible safeguards and has determined that the cost of the countermeasure greatly outweighs the possible cost of loss due to a risk. It also means that management has agreed to accept the consequences and the loss if the risk is realized.

Be able to explain total risk, residual risk, and controls gap. Total risk is the amount of risk an organization would face if no safeguards were implemented.
To calculate total risk, use the formula threats * vulnerabilities * asset value = total risk. Residual risk is the risk that management has chosen to accept rather than mitigate. The difference between total risk and residual risk is known as the controls gap. The controls gap is the amount of risk that is reduced by implementing safeguards. To calculate residual risk, use the formula total risk – controls gap = residual risk.

Know how to implement security awareness training. Before actual training can take place, awareness of security as a recognized entity must be created for users. Once this is accomplished, training, or teaching employees to perform their work tasks and to comply with the security policy, can begin. All new employees require some level of training so they will be able to comply with all standards, guidelines, and procedures mandated by the security policy. Education is a more detailed endeavor in which students/users learn much more than they actually need to know to perform their work tasks. Education is most often associated with users pursuing certification or seeking job promotion.

Understand security management planning. Security management is based on three types of plans: strategic, tactical, and operational. A strategic plan is a long-term plan that is fairly stable. It defines the organization's goals, mission, and objectives. The tactical plan is a midterm plan developed to provide more details on accomplishing the goals set forth in the strategic plan. Operational plans are short-term and highly detailed plans based on the strategic and tactical plans.

Review Questions

1. Which of the following is the weakest element in any security solution?
A. Software products
B. Internet connections
C. Security policies
D. Humans

2. When seeking to hire new employees, what is the first step?
A. Create a job description.
B. Set position classification.
C. Screen candidates.
D. Request resumes.

3. What is the primary purpose of an exit interview?
A. To return the exiting employee's personal belongings
B. To review the nondisclosure agreement
C. To evaluate the exiting employee's performance
D. To cancel the exiting employee's network access accounts

4. When an employee is to be terminated, which of the following should be done?
A. Inform the employee a few hours before they are officially terminated.
B. Disable the employee's network access just before they are informed of the termination.
C. Send out a broadcast e-mail informing everyone that a specific employee is to be terminated.
D. Wait until you and the employee are the only people remaining in the building before announcing the termination.

5. Who is liable for failing to perform prudent due care?
A. Security professionals
B. Data custodian
C. Auditor
D. Senior management

6. Which of the following is a document that defines the scope of security needed by an organization, lists the assets that need protection, and discusses the extent to which security solutions should go to provide the necessary protection?
A. Security policy
B. Standard
C. Guideline
D. Procedure

7. Which of the following policies is required when industry or legal standards are applicable to your organization?
A. Advisory
B. Regulatory
C. Baseline
D. Informative

8. Which of the following is not an element of the risk analysis process?
A. Analyzing an environment for risks
B. Creating a cost/benefit report for safeguards to present to upper management
C. Selecting appropriate safeguards and implementing them
D. Evaluating each risk as to its likelihood of occurring and cost of the resulting damage

9. Which of the following would not be considered an asset in a risk analysis?
A. A development process
B. An IT infrastructure
C. A proprietary system resource
D. Users' personal files

10. Which of the following represents accidental exploitations of vulnerabilities?
A. Threat events
B. Risks
C. Threat agents
D. Breaches

11. When a safeguard or a countermeasure is not present or is not sufficient, what is created?
A. Vulnerability
B. Exposure
C. Risk
D. Penetration

12. Which of the following is not a valid definition for risk?
A. An assessment of probability, possibility, or chance
B. Anything that removes a vulnerability or protects against one or more specific threats
C. Risk = threat + vulnerability
D. Every instance of exposure

13. When evaluating safeguards, what is the rule that should be followed in most cases?
A. Expected annual cost of asset loss should not exceed the annual costs of safeguards.
B. Annual costs of safeguards should equal the value of the asset.
C. Annual costs of safeguards should not exceed the expected annual cost of asset loss.
D. Annual costs of safeguards should not exceed 10 percent of the security budget.

14. How is single loss expectancy (SLE) calculated?
A. Threat + vulnerability
B. Asset value ($) * exposure factor
C. Annualized rate of occurrence * vulnerability
D. Annualized rate of occurrence * asset value * exposure factor

15. How is the value of a safeguard to a company calculated?
A. ALE before safeguard – ALE after implementing the safeguard – annual cost of safeguard
B. ALE before safeguard * ARO of safeguard
C. ALE after implementing safeguard + annual cost of safeguard – controls gap
D. Total risk – controls gap

16. What security control is directly focused on preventing collusion?
A. Principle of least privilege
B. Job descriptions
C. Separation of duties
D. Qualitative risk analysis

17. Which security role is responsible for assigning the sensitivity label to objects?
A. Users
B. Data owner
C. Senior management
D. Data custodian

18. When you are attempting to install a new security mechanism for which there is not a detailed step-by-step guide on how to implement that specific product, which element of the security policy should you turn to?
A. Policies
B. Procedures
C. Standards
D. Guidelines

19. While performing a risk analysis, you identify a threat of fire and a vulnerability because there are no fire extinguishers. Based on this information, which of the following is a possible risk?
A. Virus infection
B. Damage to equipment
C. System malfunction
D. Unauthorized access to confidential information

20. You've performed a basic quantitative risk analysis on a specific threat/vulnerability/risk relation. You select a possible countermeasure. When re-performing the calculations, which of the following factors will change?
A. Exposure factor
B. Single loss expectancy
C. Asset value
D. Annualized rate of occurrence

Answers to Review Questions

1. D. Regardless of the specifics of a security solution, humans are the weakest element.

2. A. The first step in hiring new employees is to create a job description. Without a job description, there is no consensus on what type of individual needs to be found and hired.

3. B. The primary purpose of an exit interview is to review the nondisclosure agreement (NDA).

4. B. You should remove or disable the employee's network user account immediately before or at the same time they are informed of their termination.

5. D. Senior management is liable for failing to perform prudent due care.

6. A. The document that defines the scope of an organization's security requirements is called a security policy.
The policy lists the assets to be protected and discusses the extent to which security solutions should go to provide the necessary protection.

7. B. A regulatory policy is required when industry or legal standards are applicable to your organization. This policy discusses the rules that must be followed and outlines the procedures that should be used to elicit compliance.

8. C. Risk analysis includes analyzing an environment for risks, evaluating each risk as to its likelihood of occurring and the cost of the damage it would cause, assessing the cost of various countermeasures for each risk, and creating a cost/benefit report for safeguards to present to upper management. Selecting safeguards is a task of upper management based on the results of risk analysis. It is a task that falls under risk management, but it is not part of the risk analysis process.

9. D. The personal files of users are not assets of the organization and thus are not considered in a risk analysis.

10. A. Threat events are accidental exploitations of vulnerabilities.

11. A. A vulnerability is the absence or weakness of a safeguard or countermeasure.

12. B. Anything that removes a vulnerability or protects against one or more specific threats is considered a safeguard or a countermeasure, not a risk.

13. C. The annual costs of safeguards should not exceed the expected annual cost of asset loss.

14. B. SLE is calculated using the formula SLE = asset value ($) * exposure factor.

15. A. The value of a safeguard to an organization is calculated by ALE before safeguard – ALE after implementing the safeguard – annual cost of safeguard.

16. C. The likelihood that a coworker will be willing to collaborate on an illegal or abusive scheme is reduced due to the higher risk of detection created by the combination of separation of duties, restricted job responsibilities, and job rotation.

17. B. The data owner is responsible for assigning the sensitivity label to new objects and resources.

18. D. If no detailed step-by-step instructions or procedures exist, then turn to the guidelines for general principles to follow for the installation.

19. B. The threat of a fire and the vulnerability of a lack of fire extinguishers lead to the risk of damage to equipment.

20. D. A countermeasure directly affects the annualized rate of occurrence, primarily because the countermeasure is designed to prevent the occurrence of the risk, thus reducing its frequency per year.

Chapter 7: Data and Application Security Issues

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
- Application Issues
- Databases and Data Warehousing
- Data/Information Storage
- Knowledge-Based Systems
- Systems Development Controls

All too often, security administrators are unaware of system vulnerabilities caused by applications with security flaws (either intentional or unintentional). Security professionals often have a background in system administration and don't have an in-depth understanding of the application development process, and therefore of application security. This can be a critical error. As you will learn in Chapter 14, "Auditing and Monitoring," organization insiders (i.e., employees, contractors, and trusted visitors) are the most likely candidates to commit computer crimes. Security administrators must be aware of all threats to ensure that adequate checks and balances exist to protect against a malicious insider or application vulnerability.

This chapter examines some of the common threats applications pose to both traditional and distributed computing environments. Next, we explore how to protect data.
Finally, we take a look at some of the systems development controls that can help ensure the accuracy, reliability, and integrity of internal application development processes.

Application Issues

As technology marches on, application environments are becoming much more complex than they were in the days of simple stand-alone DOS systems running precompiled code. Organizations are now faced with challenges that arise from connecting their systems to networks of all shapes and sizes (from the office LAN to the global Internet) as well as from distributed computing environments. These challenges come in the form of malicious code threats such as mobile code objects, viruses, worms, and denial of service attacks. In this section, we'll take a brief look at a few of these issues.

Local/Nondistributed Environment

In a traditional, nondistributed computing environment, individual computer systems store and execute programs to perform functions for the local user. Such tasks generally involve networked applications that provide access to remote resources, such as web servers and remote file servers, as well as other interactive networked activities, such as the transmission and reception of electronic mail. The key characteristic of a nondistributed system is that all user-executed code is stored on the local machine (or on a file system accessible to that machine, such as a file server on the machine's LAN) and executed using processors on that machine.

The threats that face local/nondistributed computing environments are some of the more common malicious code objects that you are most likely already familiar with, at least in passing. This section contains a brief description of those objects to introduce them from an application security standpoint. They are covered in greater detail in Chapter 8, "Malicious Code and Application Attacks."

Viruses

Viruses are the oldest form of malicious code object that plagues cyberspace. Once they are in a system, they attach themselves to legitimate operating system and user files and applications and normally perform some sort of undesirable action, ranging from the somewhat innocuous display of an annoying message on the screen to the more malicious destruction of the entire local file system.

Before the advent of networked computing, viruses spread from system to system through infected media. For example, suppose a user's hard drive is infected with a virus. That user might then format a floppy disk and inadvertently transfer the virus to it along with some data files. When the user inserts the disk into another system and reads the data, that system would also become infected with the virus. The virus might then be spread to several other users, who go on to share it with even more users in an exponential fashion.

Macro viruses are among the most insidious viruses out there. They're extremely easy to write and take advantage of some of the advanced features of modern productivity applications to significantly broaden their reach.

In this day and age, more and more computers are connected to some type of network and have at least an indirect connection to the Internet. This greatly increases the number of mechanisms that can transport viruses from system to system and expands the potential magnitude of these infections to epidemic proportions. After all, an e-mail macro virus that can automatically propagate itself to every contact in your address book can inflict far more widespread damage than a boot sector virus that requires the sharing of physical storage media to transmit infection. The various types of viruses and their propagation techniques are discussed in Chapter 8.

Trojan Horses

During the Trojan War, the Greek military used a false horse filled with soldiers to gain access to the fortified city of Troy. The Trojans fell prey to this deception because they believed the horse to be a generous gift and were unaware of its insidious payload. Modern computer users face a similar threat from today's electronic version of the Trojan horse. A Trojan horse is a malicious code object that appears to be a benevolent program—such as a game or simple utility. When a user executes the application, it performs the "cover" functions, as advertised; however, electronic Trojan horses also carry an unknown payload. While the computer user is using the new program, the Trojan horse performs some sort of malicious action—such as opening a security hole in the system for hackers to exploit, tampering with data, or installing keystroke monitoring software.

Logic Bombs

Logic bombs are malicious code objects that lie dormant until events occur that satisfy one or more logical conditions. At that time, they spring into action, delivering their malicious payload to unsuspecting computer users. They are often planted by disgruntled employees or other individuals who want to harm an organization but for one reason or another might want to delay the malicious activity for a period of time. Many simple logic bombs operate based solely upon the system date or time. For example, an employee who was terminated might set a logic bomb to destroy critical business data on the first anniversary of their termination. Other logic bombs operate using more complex criteria. For example, a programmer who fears termination might plant a logic bomb that alters payroll information after the programmer's account is locked out of the system.

Worms

Worms are an interesting type of malicious code that greatly resemble viruses, with one major distinction. Like viruses, worms spread from system to system bearing some type of malicious payload. However, whereas viruses must be shared to propagate, worms are self-replicating. They remain resident in memory and exploit one or more networking vulnerabilities to spread from system to system under their own power. Obviously, this allows for much greater propagation and can result in a denial of service attack against entire networks. Indeed, the famous Internet Worm launched by Robert Morris in November 1988 (technical details of this worm are presented in Chapter 8) actually crippled the entire Internet for several days.

Distributed Environment

The previous section discussed how the advent of networked computing facilitated the rapid spread of malicious code objects between computing systems. This section examines how distributed computing (an offshoot of networked computing) introduces a variety of new malicious code threats that information system security practitioners must understand and protect their systems against.

Essentially, distributed computing allows a single user to harness the computing power of one or more remote systems to achieve a single goal. A very common example of this is the client/server interaction that takes place when a computer user browses the World Wide Web. The client uses a web browser, such as Microsoft Internet Explorer or Netscape Navigator, to request information from a remote server. The remote server's web hosting software then receives and processes the request. In many cases, the web server fulfills the request by retrieving an HTML file from the local file system and transmitting it to the remote client.
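This static request/response exchange can be sketched in a few lines of Python's standard library, with a minimal handler standing in for the web hosting software and a `urllib` call standing in for the browser (the page content and localhost address are invented for the example):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical HTML document; a real server would read it from disk.
PAGE = b"<html><body>Hello from the server</body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "web server" side: process the request and transmit HTML.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the example quiet

# Bind to an ephemeral localhost port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: request the page and read the response body.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as response:
    body = response.read()

server.shutdown()
print(body.decode())  # <html><body>Hello from the server</body></html>
```

The handler plays the role of the remote server's web hosting software, returning an HTML document (from memory here rather than from the file system); the `urllib` call plays the role of the client's browser.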
In the case of \ndynamically generated web pages, that request might involve generating custom content tai-\nlored to the needs of the individual user (real-time account information is a good example of \nthis). In effect, the web user is causing remote server(s) to perform actions on their behalf.\nAgents\nAgents (also known as bots) are intelligent code objects that perform actions on behalf of a user. \nAgents typically take initial instructions from the user and then carry on their activity in an \nunattended manner for a predetermined period of time, until certain conditions are met, or for \nan indefinite period.\n" }, { "page_number": 258, "text": "Application Issues\n213\nThe most common type of intelligent agent in use today is the web bot. These agents contin-\nuously crawl a variety of websites retrieving and processing data on behalf of the user. For \nexample, a user interested in finding a low airfare between two cities might program an intel-\nligent agent to scour a variety of airline and travel websites and continuously check fare prices. \nWhenever the agent detects a fare lower than previous fares, it might send the user an e-mail \nmessage, pager alert, or other notification of the cheaper travel opportunity. More adventurous \nbot programmers might even provide the agent with credit card information and instruct it to \nactually order a ticket when the fare reaches a certain level.\nAlthough agents can be very useful computing objects, they also introduce a variety of new \nsecurity concerns that must be addressed. For example, what if a hacker programs an agent to \ncontinuously probe a network for security holes and report vulnerable systems in real time? \nHow about a malicious individual who uses a number of agents to flood a website with bogus \nrequests, thereby mounting a denial of service attack against that site? 
Or perhaps a commer-\ncially available agent accepts credit card information from a user and then transmits it to a \nhacker at the same time that it places a legitimate purchase.\nApplets\nRecall that agents are code objects sent from a user’s system to query and process data stored \non remote systems. Applets perform the opposite function; these code objects are sent from a \nserver to a client to perform some action. In fact, applets are actually self-contained miniature \nprograms that execute independently of the server that sent them.\nThis process is best explained through the use of an example. Imagine a web server that offers \na variety of financial tools to Web users. One of these tools might be a mortgage calculator that \nprocesses a user’s financial information and provides a monthly mortgage payment based upon \nthe loan’s principal and term and the borrower’s credit information. Instead of processing this \ndata and returning the results to the client system, the remote web server might send to the local \nsystem an applet that enables it to perform those calculations itself. This provides a number of \nbenefits to both the remote server and the end user:\n\u0002\nThe processing burden is shifted to the client, freeing up resources on the web server to pro-\ncess requests from more users.\n\u0002\nThe client is able to produce data using local resources rather than waiting for a response \nfrom the remote server. In many cases, this results in a quicker response to changes in the \ninput data.\n\u0002\nIn a properly programmed applet, the web server does not receive any data provided to the \napplet as input, therefore maintaining the security and privacy of the user’s financial data.\nHowever, just as with agents, applets introduce a number of security concerns. They allow a \nremote system to send code to the local system for execution. 
Security administrators must take \nsteps to ensure that this code is safe and properly screened for malicious activity. Also, unless the \ncode is analyzed line by line, the end user can never be certain that the applet doesn’t contain a \nTrojan horse component. For example, the mortgage calculator might indeed transmit sensitive \nfinancial information back to the web server without the end user’s knowledge or consent.\nThe following sections explore two common applet types: Java applets and ActiveX controls.\n" }, { "page_number": 259, "text": "214\nChapter 7\n\u0002 Data and Application Security Issues\nJava Applets\nJava is a platform-independent programming language developed by Sun Microsystems. Most \nprogramming languages use compilers that produce applications custom-tailored to run under \na specific operating system. This requires the use of multiple compilers to produce different ver-\nsions of a single application for each platform it must support. Java overcomes this limitation \nby inserting the Java Virtual Machine (JVM) into the picture. Each system that runs Java code \ndownloads the version of the JVM supported by its operating system. The JVM then takes the \nJava code and translates it into a format executable by that specific system. The great benefit of \nthis arrangement is that code can be shared between operating systems without modification. \nJava applets are simply short Java programs transmitted over the Internet to perform operations \non a remote system.\nSecurity was of paramount concern during the design of the Java platform and Sun’s devel-\nopment team created the “sandbox” concept to place privilege restrictions on Java code. The \nsandbox isolates Java code objects from the rest of the operating system and enforces strict rules \nabout the resources those objects can access. 
For example, the sandbox would prohibit a Java \napplet from retrieving information from areas of memory not specifically allocated to it, pre-\nventing the applet from stealing that information.\nActiveX Controls\nActiveX controls are Microsoft’s answer to Sun’s Java applets. They operate in a very similar \nfashion, but they are implemented using any one of a variety of languages, including Visual \nBasic, C, C++, and Java. There are two key distinctions between Java applets and ActiveX con-\ntrols. First, ActiveX controls use proprietary Microsoft technology and, therefore, can execute \nonly on systems running Microsoft operating systems. Second, ActiveX controls are not subject \nto the sandbox restrictions placed on Java applets. They have full access to the Windows oper-\nating environment and can perform a number of privileged actions. Therefore, special precau-\ntions must be taken when deciding which ActiveX controls to download and execute. Many \nsecurity administrators have taken the somewhat harsh position of prohibiting the download of \nany ActiveX content from all but a select handful of trusted sites.\nObject Request Brokers\nTo facilitate the growing trend toward distributed computing, the Object Management Group \n(OMG) set out to develop a common standard for developers around the world. The results of \ntheir work, known as the Common Object Request Broker Architecture (CORBA), defines an \ninternational standard (sanctioned by the International Organization for Standardization) for \ndistributed computing. It defines the sequence of interactions between client and server shown \nin Figure 7.1.\nIn this model, clients do not need specific knowledge of a server’s location or technical details \nto interact with it. They simply pass their request for a particular object to a local Object \nRequest Broker (ORB) using a well-defined interface. These interfaces are created using the \nOMG’s Interface Definition Language (IDL). 
The ORB, in turn, invokes the appropriate object, keeping the implementation details transparent to the original client.

Figure 7.1: Common Object Request Broker Architecture (CORBA)

Object Request Brokers (ORBs) are an offshoot of object-oriented programming, a topic discussed later in this chapter.

The discussion of CORBA and ORBs presented here is, by necessity, an oversimplification designed to provide security professionals with an overview of the process. CORBA extends well beyond the model presented in Figure 7.1 to facilitate ORB-to-ORB interaction, load balancing, fault tolerance, and a number of other features. If you're interested in learning more about CORBA, the OMG has an excellent tutorial on their website at www.omg.org/gettingstarted/index.htm.

Microsoft Component Models

The driving force behind OMG's efforts to implement CORBA was the desire to create a common standard that enabled non-vendor-specific interaction. However, as such things often go, Microsoft decided to develop its own proprietary standards for object management: COM and DCOM.

The Component Object Model (COM) is Microsoft's standard architecture for the use of components within a process or between processes running on the same system. It works across the range of Microsoft products, from development environments to the Office productivity suite. In fact, Office's object linking and embedding (OLE) model, which allows users to create documents that utilize components from different applications, uses the COM architecture.

Although COM is restricted to local system interactions, the Distributed Component Object Model (DCOM) extends the concept to cover distributed computing environments.
It replaces COM's interprocess communications capability with an ability to interact with the network stack and invoke objects located on remote systems.

Although DCOM and CORBA are competing component architectures, Microsoft and OMG agreed to allow some interoperability between ORBs utilizing different models.

Databases and Data Warehousing

Almost every modern organization maintains some sort of database that contains information critical to operations, be it customer contact information, order tracking data, human resource and benefits information, or sensitive trade secrets. It's likely that many of these databases contain personal information that users hold secret, such as credit card usage activity, travel habits, grocery store purchases, and telephone records. Because of the growing reliance on database systems, information security professionals must ensure that adequate security controls exist to protect them against unauthorized access, tampering, or destruction of data.

In the following sections, we'll discuss database management system (DBMS) architecture, the various types of DBMSs, and their features. Then we'll discuss database security features, polyinstantiation, ODBC, aggregation, inference, and data mining. They're loaded sections, so pay attention.

Database Management System (DBMS) Architecture

Although there is a variety of database management system (DBMS) architectures available today, the vast majority of contemporary systems implement a technology known as relational database management systems (RDBMSs). For this reason, the following sections focus primarily on relational databases.
However, first we'll discuss two other important DBMS architectures: hierarchical and distributed.

Hierarchical and Distributed Databases

A hierarchical data model combines records and fields that are related in a logical tree structure. Each field can have one child, many children, or no children, but each field can have only a single parent, resulting in a consistent one-to-many data mapping relationship. The hierarchical database model is not considered to be as flexible as the relational model (which uses a one-to-one data mapping relationship). This is due to the tree structure created by the hierarchical database's linkages of data elements. Changing a single leaf or field is easy, but altering an entire branch (called pruning) is difficult. Good examples of the hierarchical data model are the DNS system and the tournament brackets used in sports competitions.

The distributed data model has data stored in more than one database, but those databases are logically connected. The user perceives the database as a single entity, even though it comprises numerous parts interconnected over a network. Each field can have numerous children as well as numerous parents. Thus, the data mapping relationship for distributed databases is many-to-many.

Relational Databases

A relational database is a flat two-dimensional table made up of rows and columns. The row and column structure provides for one-to-one data mapping relationships. The main building block of the relational database is the table (also known as a relation). Each table contains a set of related records.
For example, a sales database might contain the following tables:

- A Customers table that contains contact information for all of the organization's clients
- A Sales Reps table that contains identity information on the organization's sales force
- An Orders table that contains records of orders placed by each customer

Each of these tables contains a number of attributes, or fields. They are typically represented as the columns of a table. For example, the Customers table might contain columns for the company name, address, city, state, zip code, and telephone number. Each customer would have its own record, or tuple, represented by a row in the table. The number of rows in the relation is referred to as cardinality, and the number of columns is the degree. The domain of a relation is the set of allowable values that an attribute can take.

To remember cardinality, think of a deck of cards on a desk: each card (note the first four letters of the term) is a row. To remember degree, think of a wall thermometer as a column (the temperature in degrees as measured on a thermometer).

Relationships between the tables are defined to identify related records. In this example, relationships would probably exist between the Customers table and the Sales Reps table because each customer is assigned a sales representative and each sales representative is assigned to one or more customers. Additionally, a relationship would probably exist between the Customers table and the Orders table because each order must be associated with a customer and each customer is associated with one or more product orders.

Records are identified using a variety of keys. Quite simply, keys are a subset of the fields of a table used to uniquely identify records. There are three types of keys with which you should be familiar:

Candidate keys
Subsets of attributes that can be used to uniquely identify any record in a table.
No two records in the same table will ever contain the same values for all attributes composing a candidate key. Each table may have one or more candidate keys, which are chosen from column headings.

Primary keys
Selected from the set of candidate keys for a table to be used to uniquely identify the records in a table. Each table has only one primary key, selected by the database designer from the set of candidate keys. The RDBMS enforces the uniqueness of primary keys by disallowing the insertion of multiple records with the same primary key.

Foreign keys
Used to enforce relationships between two tables (also known as referential integrity). Referential integrity ensures that if one table contains a foreign key, it actually corresponds to a still-existing primary key in the other table in the relationship. It makes certain that no record/tuple/row contains a reference to a primary key of a nonexistent record/tuple/row.

Object-Oriented Programming and Databases

When relational databases are combined with object-oriented programming environments, object-relational databases are produced. True object-oriented databases (OODBs) benefit from the ease of code reuse, ease of troubleshooting analysis, and reduced overall maintenance. OODBs are also better suited than other types of databases for supporting complex applications involving multimedia, CAD, video, graphics, and expert systems.

Modern relational databases use a standard language, the Structured Query Language (SQL), to provide users with a consistent interface for the storage, retrieval, and modification of data and for administrative control of the DBMS. Each DBMS vendor implements a slightly different version of SQL (like Microsoft's Transact-SQL and Oracle's PL/SQL), but all support a core feature set.
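As a concrete sketch of the key concepts just described, the following uses SQL via Python's built-in sqlite3 module. The table and column names are illustrative, not from the text: customer_id serves as a primary key in one table and a foreign key in the other, and the DBMS rejects a row that would break referential integrity.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked

# customers.customer_id is the primary key; orders.customer_id is a foreign key.
conn.execute("""CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    company     TEXT,
    city        TEXT)""")
conn.execute("""CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    amount      REAL)""")

conn.execute("INSERT INTO customers VALUES (1, 'Acme Corp', 'Boston')")
conn.execute("INSERT INTO orders VALUES (100, 1, 250.0)")   # valid reference

try:
    # Referential integrity: customer 999 does not exist, so this insert fails.
    conn.execute("INSERT INTO orders VALUES (101, 999, 10.0)")
except sqlite3.IntegrityError as err:
    print("rejected:", err)
```

Note that the enforcement is the DBMS's job, not the application's: the second insert never reaches the orders table.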
SQL's primary security feature is its granularity of authorization. However, SQL supports a myriad of ways to execute or phrase the same query. In fact, the six basic SQL commands (Select, Update, Delete, Insert, Grant, and Revoke) can be used in various ways to perform the same activity.

A bind variable is a placeholder for SQL literal values, such as numbers or character strings. When a SQL query containing bind variables is passed to the server, the server expects you to follow up the query later by passing the actual literals to put into the placeholders.

Database Normalization

Database developers strive to create well-organized and efficient databases. To assist with this effort, they've created several defined levels of database organization known as normal forms. The process of bringing a database table into compliance with the normal forms is known as normalization.

Although there are a number of normal forms out there, the three most common are the First Normal Form (1NF), the Second Normal Form (2NF), and the Third Normal Form (3NF). Each of these forms adds additional requirements to reduce redundancy in the table, eliminating misplaced data and performing a number of other housekeeping tasks. The normal forms are cumulative; to be in 2NF, a table must first be 1NF compliant. Before making a table 3NF compliant, it must first be in 2NF.

The details of normalizing a database table are beyond the scope of the CISSP exam, but there are many resources available on the Web to help you understand the requirements of the normal forms in greater detail.

SQL provides the complete functionality necessary for administrators, developers, and end users to interact with the database. In fact, most of the GUI interfaces popular today merely wrap some extra bells and whistles around a simple SQL interface to the DBMS.
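The bind-variable mechanism described in the note above can be sketched with Python's built-in sqlite3 module, which uses ? as its placeholder syntax (the accounts table here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_number INTEGER, balance REAL)")
conn.execute("INSERT INTO accounts VALUES (1001, 500.0)")

# The query ships with a placeholder; the literal value follows separately.
# Because the literal is never parsed as SQL text, binding also serves as a
# defense against SQL injection.
query = "SELECT balance FROM accounts WHERE account_number = ?"
row = conn.execute(query, (1001,)).fetchone()
print(row[0])  # 500.0
```

The same prepared query can be reused with different bound values, which is the typical reason databases support the mechanism in the first place.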
SQL itself is divided into two distinct components: the Data Definition Language (DDL), which allows for the creation and modification of the database's structure (known as the schema), and the Data Manipulation Language (DML), which allows users to interact with the data contained within that schema.

Database Transactions

Relational databases support the explicit and implicit use of transactions to ensure data integrity. Each transaction is a discrete set of SQL instructions that will either succeed or fail as a group. It's not possible for part of a transaction to succeed while part fails. Consider the example of a transfer between two accounts at a bank. We might use the following SQL code to first add $250 to account 1001 and then subtract $250 from account 2002:

BEGIN TRANSACTION

UPDATE accounts
SET balance = balance + 250
WHERE account_number = 1001

UPDATE accounts
SET balance = balance - 250
WHERE account_number = 2002

END TRANSACTION

Imagine a case where these two statements were not executed as part of a transaction but were executed separately. If the database failed during the moment between completion of the first statement and completion of the second statement, $250 would have been added to account 1001, but there would have been no corresponding deduction from account 2002. The $250 would have appeared out of thin air! This simple example underscores the importance of transaction-oriented processing.

When a transaction successfully completes, it is said to be committed to the database and cannot be undone. Transaction committing may be explicit, using SQL's COMMIT command, or implicit if the end of the transaction is successfully reached. If a transaction must be aborted, it may be rolled back explicitly using the ROLLBACK command or implicitly if there is a hardware or software failure.
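The commit/rollback behavior just described can be demonstrated with Python's sqlite3 module. The account numbers follow the bank-transfer example above; the mid-transaction failure is simulated with an exception:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_number INTEGER, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(1001, 100.0), (2002, 500.0)])
conn.commit()

try:
    # Both updates belong to one transaction: all or nothing.
    conn.execute("UPDATE accounts SET balance = balance + 250 "
                 "WHERE account_number = 1001")
    # Simulate a failure before the matching deduction is applied.
    raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    conn.rollback()  # undo the partial update; no money appears from thin air

balances = dict(conn.execute("SELECT account_number, balance FROM accounts"))
print(balances)  # {1001: 100.0, 2002: 500.0} -- unchanged after rollback
```

Because the rollback discards the uncommitted credit to account 1001, the database ends in the same consistent state it started in, which is exactly the atomicity guarantee the text describes.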
When a transaction is rolled back, the database restores itself to the condition it was in before the transaction began.

There are four required characteristics of all database transactions: atomicity, consistency, isolation, and durability. Together, these attributes are known as the ACID model, which is a critical concept in the development of database management systems. Let's take a brief look at each of these requirements:

Atomicity
Database transactions must be atomic: they must be an "all or nothing" affair. If any part of the transaction fails, the entire transaction must be rolled back as if it never occurred.

Consistency
All transactions must begin operating in an environment that is consistent with all of the database's rules (for example, all records have a unique primary key). When the transaction is complete, the database must again be consistent with the rules, regardless of whether those rules were violated during the processing of the transaction itself. No other transaction should ever be able to utilize any inconsistent data that might be generated during the execution of another transaction.

Isolation
The isolation principle requires that transactions operate separately from each other. If a database receives two SQL transactions that modify the same data, one transaction must be completed in its entirety before the other is allowed to modify the same data. This prevents one transaction from working with invalid data generated as an intermediate step by another transaction.

Durability
Database transactions must be durable. That is, once they are committed to the database, they must be preserved.
Databases ensure durability through the use of backup mechanisms, such as transaction logs.

In the following sections, we'll discuss a variety of specific security issues of concern to database developers and administrators.

Security for Multilevel Databases

As you learned in Chapter 5, "Security Management Concepts and Principles," many organizations use data classification schemes to enforce access control restrictions based upon the security labels assigned to data objects and individual users. When mandated by an organization's security policy, this classification concept must also be extended to the organization's databases.

Multilevel security databases contain information at a number of different classification levels. They must verify the labels assigned to users and, in response to user requests, provide only the information that's appropriate. However, this concept becomes somewhat more complicated when considering security for a database.

When multilevel security is required, it's essential that administrators and developers strive to keep data with different security requirements separate. The mixing of data with different classification levels and/or need-to-know requirements is known as database contamination and is a significant security risk. Often, administrators will deploy a trusted front end to add multilevel security to a legacy or insecure DBMS.

Concurrency

Concurrency, or edit control, is a preventive security mechanism that endeavors to ensure that the information stored in the database is always correct or at least has its integrity and availability protected. This feature can be employed whether the database is multilevel or single level. Concurrency uses a "lock" feature to allow an authorized user to make changes but deny other users access to view or make changes to data elements at the same time.
Then, after the changes have been made, an "unlock" feature allows other users the access they need. In some instances, administrators will use concurrency with auditing mechanisms to track document and/or field changes. When this recorded data is reviewed, concurrency becomes a detective control.

Other Security Mechanisms

There are several other security mechanisms that administrators may deploy when using a DBMS. These features are relatively easy to implement and are common in the industry. The mechanisms related to semantic integrity, for instance, are common security features of a DBMS. Semantic integrity ensures that no structural or semantic rules are violated by any queries or updates from any user. It also checks that all stored data types are within valid domain ranges, ensures that only logical values exist, and confirms that the system complies with any and all uniqueness constraints.

Administrators may employ time and date stamps to maintain data integrity and availability. Time and date stamps often appear in distributed database systems. When a time stamp is placed on all change transactions and those changes are distributed or replicated to the other database members, all changes are applied to all members, but they are implemented in correct chronological order.

Restricting Access with Views

Another way to implement multilevel security in a database is through the use of database views. Views are simply SQL statements that present data to the user as if they were tables themselves. They may be used to collate data from multiple tables, aggregate individual records, or restrict a user's access to a limited subset of database attributes and/or records.

Views are stored in the database as SQL commands rather than as tables of data. This dramatically reduces the space requirements of the database and allows views to violate the rules of normalization that apply to tables.
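A restrictive view of the kind just described can be sketched in SQL (run here via Python's sqlite3; the salary column is an illustrative sensitive attribute, not from the text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employees (
    name TEXT, department TEXT, salary REAL)""")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Alice", "Sales", 90000.0), ("Bob", "IT", 85000.0)])

# The view is stored as a SQL statement, not as a copy of the data,
# and exposes only the non-sensitive columns.
conn.execute("""CREATE VIEW employee_directory AS
    SELECT name, department FROM employees""")

print(conn.execute("SELECT * FROM employee_directory").fetchall())
# [('Alice', 'Sales'), ('Bob', 'IT')] -- no salary column exposed
```

Granting users query rights on employee_directory but not on employees is the "security tool" usage pattern discussed in the text.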
On the other hand, retrieving data from a complex view can take significantly longer than retrieving it from a table because the DBMS may need to perform calculations to determine the value of certain attributes for each record.

Due to the flexibility of views, many database administrators use them as a security tool, allowing users to interact only with limited views rather than with the raw tables of data underlying them.

Another common security feature of a DBMS is that objects can be controlled granularly within the database; this can also improve security control. Content-dependent access control is an example of granular object control. Content-dependent access control focuses on control based upon the contents or payload of the object being accessed. Since decisions must be made on an object-by-object basis, content-dependent control increases processing overhead. Another form of granular control is cell suppression. Cell suppression is the concept of hiding or imposing more security restrictions on individual database fields or cells.

Context-dependent access control is often discussed alongside content-dependent access control due to the similarity of their names. Context-dependent access control evaluates the big picture to make its access control decisions. The key factor in context-dependent access control is how each object, packet, or field relates to the overall activity or communication. Any single element may look innocuous by itself; only in the larger context can it be judged benign or malicious.

Administrators may employ database partitioning to counter aggregation, inference, and contamination vulnerabilities.
Database partitioning is the process of splitting a single database into multiple parts, each with a unique and distinct security level or type of content.

Polyinstantiation occurs when two or more rows in the same relational database table appear to have identical primary key elements but contain different data for use at differing classification levels. It is often used as a defense against some types of inference attacks (we'll discuss inference in just a moment).

Consider a database table containing the location of various naval ships on patrol. Normally, this database contains the exact position of each ship stored at the secret classification level. However, one particular ship, the USS UpToNoGood, is on an undercover mission to a top-secret location. Military commanders do not want anyone to know that the ship deviated from its normal patrol. If the database administrators simply change the classification of the UpToNoGood's location to top secret, a user with a secret clearance would know that something unusual was going on when they couldn't query the location of the ship. However, if polyinstantiation is used, two records could be inserted into the table. The first one, classified at the top secret level, would reflect the true location of the ship and be available only to users with the appropriate top secret security clearance. The second record, classified at the secret level, would indicate that the ship was on routine patrol and would be returned to users with a secret clearance.

Finally, administrators can utilize noise and perturbation to insert false or misleading data into a DBMS in order to redirect or thwart information confidentiality attacks.

ODBC

Open Database Connectivity (ODBC) is a database feature that allows applications to communicate with different types of databases without having to be directly programmed for interaction with every type of database.
ODBC acts as a proxy between applications and back-end database drivers, giving application programmers greater freedom in creating solutions without having to worry about the back-end database system. Figure 7.2 illustrates the relationship between ODBC and the DBMS.

Figure 7.2: ODBC as the interface between applications and DBMS

Aggregation

SQL provides a number of functions that combine records from one or more tables to produce potentially useful information. This process is called aggregation. Some of these functions, known as the aggregate functions, are listed here:

COUNT( )
Returns the number of records that meet specified criteria

MIN( )
Returns the record with the smallest value for the specified attribute or combination of attributes

MAX( )
Returns the record with the largest value for the specified attribute or combination of attributes

SUM( )
Returns the summation of the values of the specified attribute or combination of attributes across all affected records

AVG( )
Returns the average value of the specified attribute or combination of attributes across all affected records

These functions, although extremely useful, also pose a significant risk to the security of information in a database. For example, suppose a low-level military records clerk is responsible for updating records of personnel and equipment as they are transferred from base to base. As part of their duties, this clerk may be granted the database permissions necessary to query and update personnel tables. Aggregation is not without its security vulnerabilities. Aggregation attacks are used to collect numerous low-level security items or low-value items and combine them to create something of a higher security level or value.

The military might not consider an individual transfer request (i.e., Sgt.
Jones is being moved from Base X to Base Y) to be classified information. The records clerk has access to that information, but most likely, Sgt. Jones has already informed his friends and family that he will be moving to Base Y. However, with access to aggregate functions, the records clerk might be able to count the number of troops assigned to each military base around the world. These force levels are often closely guarded military secrets, but the low-ranking records clerk was able to deduce them by using aggregate functions across a large amount of unclassified data.

For this reason, it's especially important for database security administrators to strictly control access to aggregate functions and adequately assess the potential information they may reveal to unauthorized individuals.

Data Mining

Many organizations use large databases, known as data warehouses, to store large amounts of information from a variety of databases for use in specialized analysis techniques. These data warehouses often contain detailed historical information not normally stored in production databases due to storage limitations or data security concerns.

An additional type of storage, known as a data dictionary, is commonly used for storing critical information about data, including usage, type, sources, relationships, and formats. DBMS software reads the data dictionary to determine access rights for users attempting to access data.

Data mining techniques allow analysts to comb through these data warehouses and look for potentially correlated information amid the historical data. For example, an analyst might discover that the demand for light bulbs always increases in the winter months and then use this information when planning pricing and promotion strategies.
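The force-level aggregation attack described in the preceding section can be sketched with sqlite3: each row is individually unclassified, yet an aggregate query over many such rows reveals a sensitive total (soldier and base names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transfers (soldier TEXT, base TEXT)")
# Each individual transfer record is unclassified on its own.
conn.executemany("INSERT INTO transfers VALUES (?, ?)",
                 [("Jones", "Base Y"), ("Smith", "Base Y"), ("Lee", "Base X")])

# Aggregating the unclassified rows yields troop counts per base,
# which may themselves be closely guarded secrets.
force_levels = conn.execute(
    "SELECT base, COUNT(*) FROM transfers GROUP BY base ORDER BY base"
).fetchall()
print(force_levels)  # [('Base X', 1), ('Base Y', 2)]
```

This is why the text recommends restricting access to the aggregate functions themselves, not just to the individual rows.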
The information that is discovered during a data mining operation is called metadata, or data about data, and is stored in a data mart. A data mart is a more secure storage environment than a data warehouse.

Inference

The database security issues posed by inference attacks are very similar to those posed by the threat of data aggregation. As with aggregation, inference attacks involve the combination of several pieces of nonsensitive information used to gain access to information that should be classified at a higher level. However, inference makes use of the human mind's deductive capacity rather than the raw mathematical ability of modern database platforms.

A commonly cited example of an inference attack is that of the accounting clerk at a large corporation who is allowed to retrieve the total amount the company spends on salaries for use in a top-level report but is not allowed to access the salaries of individual employees. The accounting clerk often has to prepare those reports with effective dates in the past and so is allowed to access the total salary amounts for any day in the past year. Say, for example, that this clerk must also know the hiring and termination dates of various employees and has access to this information. This opens the door for an inference attack. If an employee was the only person hired on a specific date, the accounting clerk can now retrieve the total salary amount on that date and the day before and deduce the salary of that particular employee: sensitive information that the user would not be permitted to access directly.

As with aggregation, the best defense against inference attacks is to maintain constant vigilance over the permissions granted to individual users. Furthermore, intentional blurring of data may be used to prevent the inference of sensitive information. For example, if the accounting clerk were able to retrieve only salary information rounded to the nearest million, they would probably not be able to gain any useful information about individual employees.

Data warehouses and data mining are significant to security professionals for two reasons. First, as previously mentioned, data warehouses contain large amounts of potentially sensitive information vulnerable to aggregation and inference attacks, and security practitioners must ensure that adequate access controls and other security measures are in place to safeguard this data. Second, data mining can actually be used as a security tool when it's used to develop baselines for statistical anomaly-based intrusion detection systems (see Chapter 2, "Attacks and Monitoring," for more information on the various types and functionality of intrusion detection systems).

Data/Information Storage

Database management systems have helped harness the power of data and gain some modicum of control over who can access it and the actions they can perform on it. However, security professionals must keep in mind that DBMS security covers access to information through only the traditional "front door" channels. Data is also processed through a computer's storage resources, both memory and physical media. Precautions must be in place to ensure that these basic resources are protected against security vulnerabilities as well. After all, you would never incur a lot of time and expense to secure the front door of your home and then leave the back door wide open, would you?

Types of Storage

Modern computing systems use several types of storage to maintain system and user data.
The systems strike a balance between the various storage types to satisfy an organization's computing requirements. There are several common storage types:

Primary (or "real") memory
Consists of the main memory resources directly available to a system's CPU. Primary memory normally consists of volatile random access memory (RAM) and is usually the highest-performance storage resource available to a system.

Secondary storage
Consists of less expensive, nonvolatile storage resources available to a system for long-term use. Typical secondary storage resources include magnetic and optical media, such as tapes, disks, hard drives, and CD/DVD storage.

Virtual memory
Allows a system to simulate additional primary memory resources through the use of secondary storage. For example, a system low on expensive RAM might make a portion of the hard disk available for direct CPU addressing.

Virtual storage
Allows a system to simulate secondary storage resources through the use of primary storage. The most common example of virtual storage is the "RAM disk" that presents itself to the operating system as a secondary storage device but is actually implemented in volatile RAM. This provides an extremely fast file system for use in various applications but provides no recovery capability.

Random access storage
Allows the operating system to request contents from any point within the media. RAM and hard drives are examples of random access storage.

Sequential access storage
Requires scanning through the entire media from the beginning to reach a specific address. A magnetic tape is a common example of sequential access storage.

Volatile storage
Loses its contents when power is removed from the resource.
RAM is the most common type of volatile storage.

Nonvolatile storage: Does not depend upon the presence of power to maintain its contents. Magnetic/optical media and nonvolatile RAM (NVRAM) are typical examples of nonvolatile storage.

Storage Threats

Information security professionals should be aware of two main threats posed against data storage systems. First, the threat of illegitimate access to storage resources exists no matter what type of storage is in use. If administrators do not implement adequate file system access controls, an intruder might stumble across sensitive data simply by browsing the file system. In more sensitive environments, administrators should also protect against attacks that involve bypassing operating system controls and directly accessing the physical storage media to retrieve data. This is best accomplished through the use of an encrypted file system, which is accessible only through the primary operating system. Furthermore, systems that operate in a multilevel security environment should provide adequate controls to ensure that shared memory and storage resources provide fail-safe controls so that data from one classification level is not readable at a lower classification level.

Covert channel attacks pose the second primary threat against data storage resources. Covert storage channels allow the transmission of sensitive data between classification levels through the direct or indirect manipulation of shared storage media. This may be as simple as writing sensitive data to an inadvertently shared portion of memory or physical storage. More complex covert storage channels might be used to manipulate the amount of free space available on a disk or the size of a file to covertly convey information between security levels.
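To make the file-size technique concrete, the following toy sketch signals one hidden bit through nothing but a shared file’s length; the parity encoding and the padding scheme are purely illustrative, not taken from any real attack tool:

```python
import os
import tempfile

def send_bit(path, bit):
    """Covertly signal one bit by padding a shared file so that its
    length is even (bit 0) or odd (bit 1) -- a covert storage channel."""
    size = os.path.getsize(path)
    if size % 2 != bit:
        with open(path, "ab") as f:
            f.write(b" ")  # pad by one byte to flip the length's parity

def receive_bit(path):
    """The receiving process never reads the data -- only the file size."""
    return os.path.getsize(path) % 2

# Demonstration with a scratch file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"innocuous shared data")
    shared = f.name

send_bit(shared, 1)
print(receive_bit(shared))  # → 1, recovered from metadata alone
os.unlink(shared)
```

Note that neither function ever reads the file’s contents: the information crosses security levels entirely through metadata, which is exactly why covert storage channels evade access controls applied to the data itself.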
For more information on covert channel analysis, see Chapter 12, “Principles of Security Models.”

Knowledge-Based Systems

Since the advent of computing, engineers and scientists have worked toward developing systems capable of performing routine actions that would bore a human and consume a significant amount of time. The majority of the achievements in this area focused on relieving the burden of computationally intensive tasks. However, researchers have also made giant strides toward developing systems that have an “artificial intelligence” that can simulate (to some extent) the purely human power of reasoning.

The following sections examine two types of knowledge-based artificial intelligence systems: expert systems and neural networks. We’ll also take a look at their potential applications to computer security problems.

Expert Systems

Expert systems seek to embody the accumulated knowledge of mankind on a particular subject and apply it in a consistent fashion to future decisions. Several studies have shown that expert systems, when properly developed and implemented, often make better decisions than some of their human counterparts when faced with routine decisions.

There are two main components to every expert system. The knowledge base contains the rules known by an expert system. The knowledge base seeks to codify the knowledge of human experts in a series of “if/then” statements. Let’s consider a simple expert system designed to help homeowners decide if they should evacuate an area when a hurricane threatens.
The knowledge base might contain the following statements (these statements are for example only):

- If the hurricane is a Category 4 storm or higher, then flood waters normally reach a height of 20 feet above sea level.
- If the hurricane has winds in excess of 120 miles per hour (mph), then wood-frame structures will fail.
- If it is late in the hurricane season, then hurricanes tend to get stronger as they approach the coast.

In an actual expert system, the knowledge base would contain hundreds or thousands of assertions such as those just listed.

The second major component of an expert system, the inference engine, analyzes information in the knowledge base to arrive at the appropriate decision. The expert system user utilizes some sort of user interface to provide the inference engine with details about the current situation, and the inference engine uses a combination of logical reasoning and fuzzy logic techniques to draw a conclusion based upon past experience. Continuing with the hurricane example, a user might inform the expert system that a Category 4 hurricane is approaching the coast with wind speeds averaging 140 mph. The inference engine would then analyze information in the knowledge base and make an evacuation recommendation based upon that past knowledge.

Expert systems are not infallible; they’re only as good as the data in the knowledge base and the decision-making algorithms implemented in the inference engine. However, they have one major advantage in stressful situations: their decisions do not involve judgment clouded by emotion. Expert systems can play an important role in analyzing situations such as emergency events, stock trading, and other scenarios in which emotional investment sometimes gets in the way of a logical decision.
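The hurricane rules above can be sketched as a toy rule-based system. The rule set, the fact names, and the evacuation heuristic here are invented for illustration; a real inference engine would chain many more rules and weigh conflicting conclusions:

```python
# A minimal knowledge base: each rule pairs a condition (a predicate over
# a dictionary of facts) with the conclusion it supports.
RULES = [
    (lambda f: f["category"] >= 4,  "flood waters may reach 20 feet above sea level"),
    (lambda f: f["wind_mph"] > 120, "wood-frame structures will fail"),
    (lambda f: f["late_season"],    "the storm may strengthen near the coast"),
]

def inference_engine(facts):
    """Fire every rule whose condition matches, then recommend evacuation
    if any fired rule concludes a flooding or structural hazard."""
    conclusions = [c for condition, c in RULES if condition(facts)]
    recommend = any("flood" in c or "fail" in c for c in conclusions)
    return conclusions, recommend

facts = {"category": 4, "wind_mph": 140, "late_season": False}
conclusions, evacuate = inference_engine(facts)
for c in conclusions:
    print("-", c)
print("Recommend evacuation:", evacuate)  # → Recommend evacuation: True
```

Separating the rules (knowledge base) from the matching logic (inference engine) mirrors the two-component structure described above: experts maintain the rule list while the engine stays unchanged.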
For this reason, many lending institutions now utilize expert systems to make credit decisions instead of relying upon loan officers who might say to themselves, “Well, Jim hasn’t paid his bills on time, but he seems like a perfectly nice guy.”

Neural Networks

In neural networks, chains of computational units are used in an attempt to imitate the biological reasoning process of the human mind. In an expert system, a series of rules is stored in a knowledge base, whereas in a neural network, a long chain of computational decisions that feed into each other and eventually sum to produce the desired output is set up.

Keep in mind that no neural network designed to date comes close to having the actual reasoning power of the human mind. That notwithstanding, neural networks show great potential to advance the artificial intelligence field beyond its current state. Benefits of neural networks include linearity, input-output mapping, and adaptivity. These benefits are evident in the implementations of neural networks for voice recognition, face recognition, weather prediction, and the exploration of models of thinking and consciousness.

Typical neural networks involve many layers of summation, each of which requires weighting information to reflect the relative importance of the calculation in the overall decision-making process. These weights must be custom-tailored for each type of decision the neural network is expected to make. This is accomplished through the use of a training period during which the network is provided with inputs for which the proper decision is known. The algorithm then works backward from these decisions to determine the proper weights for each node in the computational chain. This activity is known as the Delta rule or learning rule.
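The weight-adjustment process just described can be sketched for a single computational unit. This toy example trains one unit to compute logical AND; the learning rate and epoch count are arbitrary illustrative choices, not values from any particular system:

```python
# Delta-rule training of a single unit: after each known example, the
# weights are nudged in proportion to the error between the known answer
# (target) and the unit's current output.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND
w = [0.0, 0.0]   # input weights, initially untrained
b = 0.0          # bias weight
rate = 0.1       # learning rate (illustrative choice)

def output(x):
    """The unit fires (1) when the weighted sum of inputs exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(25):                 # the training period
    for x, target in samples:
        error = target - output(x)  # the "delta" that names the rule
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print([output(x) for x, _ in samples])  # → [0, 0, 0, 1], matching the targets
```

After training, the weights encode the decision; nothing resembling an explicit if/then rule exists anywhere in the system, which is the key contrast with an expert system’s knowledge base.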
Through the use of the Delta rule, neural networks are able to learn from experience.

Decision Support Systems

A Decision Support System (DSS) is a knowledge-based application that analyzes business data and presents it in such a way as to make business decisions easier for users. It is considered more of an informational application than an operational application. Often a DSS is employed by knowledge workers (such as help desk or customer support personnel) and by sales services (such as phone operators). This type of application may present information in a graphical manner so as to link concepts and content and guide the script of the operator. Often a DSS is backed by an expert system controlling a database.

Fuzzy Logic

As previously mentioned, inference engines commonly use a technique known as fuzzy logic. This technique is designed to more closely approximate human thought patterns than the rigid mathematics of set theory or algebraic approaches that utilize “black and white” categorizations of data. Fuzzy logic replaces them with blurred boundaries, allowing the algorithm to think in the “shades of gray” that dominate human thought. Fuzzy logic as used by an expert system has four steps or phases: fuzzification, inference, composition, and defuzzification.

Security Applications

Both expert systems and neural networks have great applications in the field of computer security. One of the major advantages offered by these systems is their capability to rapidly make consistent decisions. One of the major problems in computer security is the inability of system administrators to consistently and thoroughly analyze massive amounts of log and audit trail data to look for anomalies.
It seems like a match made in heaven!

One successful application of this technology to the computer security arena is the Next-Generation Intrusion Detection Expert System (NIDES) developed by Philip Porras and his team at the Information and Computing Sciences System Design Laboratory of SRI International. This system provides an inference engine and knowledge base that draws information from a variety of audit logs across a network and provides notification to security administrators when the activity of an individual user varies from their standard usage profile.

Systems Development Controls

Many organizations use custom-developed hardware and software systems to achieve flexible operational goals. As you will learn in Chapter 8 and Chapter 12, these custom solutions can present great security vulnerabilities as a result of malicious and/or careless developers who create trap doors, buffer overflow vulnerabilities, or other weaknesses that can leave a system open to exploitation by malicious individuals.

To protect against these vulnerabilities, it’s vital to introduce security concerns into the entire systems development life cycle. An organized, methodical process helps ensure that solutions meet functional requirements as well as security guidelines. The following sections explore the spectrum of systems development activities with an eye toward security concerns that should be foremost on the mind of any information security professional engaged in solutions development.

Software Development

Security should be a consideration at every stage of a system’s development, including the software development process. Programmers should strive to build security into every application they develop, with greater levels of security provided to critical applications and those that process sensitive information.
It’s extremely important to consider the security implications of a software development project from the early stages because it’s much easier to build security into a system than it is to add security onto an existing system.

Assurance

To ensure that the security control mechanisms built into a new application properly implement the security policy throughout the life cycle of the system, administrators use assurance procedures. Assurance procedures are simply formalized processes by which trust is built into the life cycle of a system. The Trusted Computer System Evaluation Criteria (TCSEC) Orange Book refers to this process as life cycle assurance. We’ll discuss this further in Chapter 13, “Administrative Management.”

Avoiding System Failure

No matter how advanced your development team, your systems will likely fail at some point in time. You should plan for this type of failure when you put the software and hardware controls in place, ensuring that the system will respond appropriately. You can employ many methods to avoid failure, including using limit checks and creating fail-safe or fail-open procedures. Let’s talk about these in more detail.

Limit Checks

Environmental controls and hardware devices cannot prevent problems created by poor program coding. It is important to have proper software development and coding practices to ensure that security is a priority during product development. To avoid buffer overflow attacks, you must perform limit checks by managing data types, data formats, and data length when accepting input from a user or another application. Limit checks ensure that data does not exceed maximum allowable values.
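As a minimal sketch of such checks, the following routine validates type, format, length, and range before accepting input. The field names, bounds, and name pattern are invented for the example; real limits come from your application’s specification:

```python
import re

MAX_NAME_LEN = 64      # illustrative bound, not from any standard
AGE_RANGE = (0, 150)   # illustrative allowable range

def validate_input(name, age):
    """Reject input that fails a type, length, format, or range check."""
    if not isinstance(name, str) or not isinstance(age, int):
        raise TypeError("name must be str and age must be int")   # type check
    if len(name) > MAX_NAME_LEN:
        raise ValueError("name exceeds maximum length")            # length check
    if not re.fullmatch(r"[A-Za-z][A-Za-z '\-]*", name):
        raise ValueError("name contains disallowed characters")    # format check
    if not AGE_RANGE[0] <= age <= AGE_RANGE[1]:
        raise ValueError("age outside allowable range")            # limit check
    return name, age

print(validate_input("Jim O'Connor", 42))  # → ("Jim O'Connor", 42)
```

The essential habit is rejecting bad input at the boundary rather than trusting later code to cope with oversized or malformed data, which is precisely how buffer overflow conditions arise.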
Depending on the application, you may also need to include sequence checks to ensure that data input is properly ordered.

In most organizations, security professionals come from a system administration background and don’t have professional experience in software development. If your background doesn’t include this type of experience, don’t let that stop you from learning about it and educating your organization’s developers on the importance of secure coding.

Fail-Secure and Fail-Open

In spite of the best efforts of programmers, product designers, and project managers, developed applications will be placed into situations and environments that were neither predicted nor fully understood. Some of these conditions will cause failures. Since failures are unpredictable, programmers should design into their code a general sense of how to respond to and handle failures.

There are two basic choices when planning for system failure: fail-secure (also called fail-safe) or fail-open. The fail-secure failure state puts the system into a high level of security (and possibly even disables it entirely) until an administrator can diagnose the problem and restore the system to normal operation. In the vast majority of environments, fail-secure is the appropriate failure state because it prevents unauthorized access to information and resources.

Software should revert to a fail-secure condition. This may mean closing just the application or possibly stopping the operation of the entire host system. An example of such a failure response is seen in the Windows OS with the appearance of the Blue Screen of Death (BSOD), which is really called a STOP error. A STOP error occurs when an insecure and illegal activity occurs in spite of the OS’s efforts to prevent it.
This could include an application gaining direct access to hardware, bypassing a security access check, or one process interfering with the memory space of another. Once an illegal operation occurs, the environment itself is no longer trustworthy. So, rather than continuing to support an unreliable and insecure operating environment, the OS initiates a STOP error as its fail-secure response. Once a fail-secure operation occurs, the programmer should consider the activities that occur afterward. The options are to remain in a fail-secure state or to automatically reboot the system. The former option requires an administrator to manually reboot the system and oversee the process. This action can be enforced by using a boot password. The latter option does not require human intervention for the system to restore itself to a functioning state, but it has its own unique issues. First, it is subject to initial program load (IPL) vulnerabilities (for more information on IPL, review Chapter 14, “Auditing and Monitoring”). Second, it must restrict the system to reboot into a nonprivileged state. In other words, the system should not reboot and perform an automatic logon; instead, it should prompt the user for authorized access credentials.

In limited circumstances, it may be appropriate to implement a fail-open failure state, which allows users to bypass security controls when a system fails. This is sometimes appropriate for lower-layer components of a multilayered security system. Fail-open systems should be used with extreme caution. Before deploying a system using this failure mode, clearly validate the business requirement for this move. If it is justified, ensure that adequate alternative controls are in place to protect the organization’s resources should the system fail.
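In application code, the fail-secure principle often reduces to one habit: when a security check itself fails, deny rather than allow. A minimal sketch follows; `check_credentials` is a hypothetical placeholder standing in for a real authentication back end:

```python
def check_credentials(user, password):
    """Hypothetical placeholder: imagine this consults a directory
    service that happens to be unreachable right now."""
    raise ConnectionError("authentication service unavailable")

def is_authorized(user, password):
    """Fail-secure: any failure inside the security check results in
    denial, never in a bypass of the control."""
    try:
        return check_credentials(user, password)
    except Exception:
        return False  # deny on error rather than allow

print(is_authorized("jim", "hunter2"))  # → False
```

A fail-open version would return True in the `except` branch so users could keep working during an outage; as the note above stresses, that choice needs an explicit business justification and compensating controls.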
It’s extremely rare that you’d want all of your security controls to utilize a fail-open approach.

Even when security is properly designed and embedded in software, that security is often disabled in order to support easier installation. Thus, it is common for the IT administrator to have the responsibility of turning on and configuring security to match the needs of their specific environment. Maintaining security is often a trade-off with user-friendliness and functionality, as you can see from Figure 7.3. Additionally, as you add or increase security, you will also increase costs, increase administrative overhead, and reduce productivity/throughput.

Figure 7.3: Security vs. user-friendliness vs. functionality

Programming Languages

As you probably know, software developers use programming languages to develop software code. You might not know that there are several types of languages that can be used simultaneously by the same system. This section takes a brief look at the different types of programming languages and the security implications of each.

Computers understand binary code. They speak a language of 1s and 0s and that’s it! The instructions that a computer follows are made up of a long series of binary digits in a language known as machine language. Each CPU chipset has its own machine language and it’s virtually impossible for a human being to decipher anything but the most simple machine language code without the assistance of specialized software. Assembly language is a higher-level alternative that uses mnemonics to represent the basic instruction set of a CPU but still requires hardware-specific knowledge of a relatively obscure assembly language.
It also requires a large amount of tedious programming; a task as simple as adding two numbers together could take five or six lines of assembly code!

Programmers, of course, don’t want to write their code in either machine language or assembly language. They prefer to use high-level languages, such as C++, Java, and Visual Basic. These languages allow programmers to write instructions that better approximate human communication, decrease the length of time needed to craft an application, may decrease the number of programmers needed on a project, and also allow some portability between different operating systems and hardware platforms. Once programmers are ready to execute their programs, there are two options available to them, depending upon the language they’ve chosen.

Some languages (such as C++, Java, and FORTRAN) are compiled languages. When using a compiled language, the programmer uses a tool known as the compiler to convert the higher-level language into an executable file designed for use on a specific operating system. This executable is then distributed to end users who may use it as they see fit. Generally speaking, it’s not possible to view or modify the software instructions in an executable file.

Other languages (such as JavaScript and VBScript) are interpreted languages. When these languages are used, the programmer distributes the source code, which contains instructions in the higher-level language. End users then use an interpreter to execute that source code on their system. They’re able to view the original instructions written by the programmer.

There are security advantages and disadvantages to each approach. Compiled code is generally less prone to manipulation by a third party. However, it’s also easier for a malicious (or unskilled) programmer to embed back doors and other security flaws in the code and escape detection because the original instructions can’t be viewed by the end user.
Interpreted code, however, is less prone to the insertion of malicious code by the original programmer because the end user may view the code and check it for accuracy. On the other hand, everyone who touches the software has the ability to modify the programmer’s original instructions and possibly embed malicious code in the interpreted software.

Reverse engineering is considered an unethical form of engineering, whereby programmers decompile vendor code in order to understand the intricate details of its functionality. Ethics come into play because such efforts most often presage creating a similar, competing, or compatible product of their own.

Object-Oriented Programming

Many of the latest programming languages, such as C++ and Java, support the concept of object-oriented programming (OOP). Older programming styles, such as functional programming, focused on the flow of the program itself and attempted to model the desired behavior as a series of steps. Object-oriented programming focuses on the objects involved in an interaction. It can be thought of as a group of objects that can be requested to perform certain operations or exhibit certain behaviors. Objects work together to provide a system’s functionality or capabilities. OOP has the potential to be more reliable and able to reduce the propagation of program change errors. As a type of programming method, it is better suited to modeling or mimicking the real world. For example, a banking program might have three object classes that correspond to accounts, account holders, and employees. When a new account is added to the system, a new instance, or copy, of the appropriate object is created to contain the details of that account.

Each object in the OOP model has methods that correspond to specific actions that can be taken on the object.
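The banking example can be sketched in code. The class and method names below echo the account types discussed in this section, while the method bodies and balance handling are invented purely for illustration:

```python
class Account:
    """Parent class: its methods are the actions any account supports."""
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def add_funds(self, amount):
        self.balance += amount

    def deduct_funds(self, amount):
        self.balance -= amount

    def transfer_ownership(self, new_owner):
        self.owner = new_owner

class SavingsAccount(Account):
    pass  # inherits every method from the Account parent class unchanged

class CheckingAccount(Account):
    def write_check(self, amount):
        """A class-specific method that the other subclasses lack."""
        self.deduct_funds(amount)
        return f"Check for {amount} drawn by {self.owner}"

checking = CheckingAccount("Jim", balance=100)  # a new instance of the class
checking.write_check(30)
print(checking.balance)  # → 70
```

Creating `checking` is exactly the “new instance, or copy, of the appropriate object” described above, and `write_check()` exists only on the checking subclass while both subclasses respond to the inherited `add_funds` and `deduct_funds` messages.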
For example, the account object can have methods to add funds, deduct funds, close the account, and transfer ownership.

Objects can also be subclasses of other objects and inherit methods from their parent class. For example, the account object may have subclasses that correspond to specific types of accounts, such as savings, checking, mortgages, and auto loans. The subclasses can use all of the methods of the parent class and have additional class-specific methods. For example, the checking object might have a method called write_check() whereas the other subclasses do not.

From a security point of view, object-oriented programming provides a black-box approach to abstraction. Users need to know the details of an object’s interface (generally the inputs, outputs, and actions that correspond to each of the object’s methods) but don’t necessarily need to know the inner workings of the object to use it effectively. To provide the desired characteristics of object-oriented systems, the objects are encapsulated (self-contained) and they can be accessed only through specific messages (i.e., input). Objects can also exhibit the substitution property, which allows different objects providing compatible operations to be substituted for each other.

Computer Aided Software Engineering (CASE)

The advent of object-oriented programming has reinvigorated a movement toward applying traditional engineering design principles to the software engineering field.
One such movement has been toward the use of computer aided software engineering (CASE) tools to help developers, managers, and customers interact through the various stages of the software development life cycle.

One popular category of CASE tool, middle CASE, is used in the design and analysis phase of software engineering to help create screen and report layouts.

Here is a list of common object-oriented programming terms you might come across in your work:

Message: A message is a communication to or input of an object.

Method: A method is internal code that defines the actions an object performs in response to a message.

Behavior: The results or output exhibited by an object are its behavior. Behaviors are the results of a message being processed through a method.

Class: A collection of the common methods from a set of objects that defines the behavior of those objects is called a class.

Instance: Objects are instances, or examples, of classes that contain their methods.

Inheritance: Inheritance occurs when methods from a class (parent or superclass) are inherited by another subclass (child).

Delegation: Delegation is the forwarding of a request by an object to another object, or delegate. An object delegates if it does not have a method to handle the message.

Polymorphism: Polymorphism is the characteristic of an object to provide different behaviors based upon the same message and methods, owing to changes in external conditions.

Cohesion: An object is highly cohesive if it can perform a task with little or no help from other objects. Highly cohesive objects are less dependent upon other objects than less cohesive ones are, and are often better designed. Objects that have high cohesion perform tasks alone and have low coupling.

Coupling: Coupling is the level of interaction between objects.
Lower coupling means less interaction. Lower coupling provides better software design because objects are more independent, which also makes them easier to troubleshoot and update. Objects that have low cohesion require lots of assistance from other objects to perform tasks and have high coupling.

Systems Development Life Cycle

Security is most effective if it is planned and managed throughout the life cycle of a system or application. Administrators employ project management to keep a development project on target and moving toward the goal of a completed product. Often project management is structured using life cycle models to direct the development process. The use of formalized life cycle models helps to ensure good coding practices and the embedding of security in every stage of product development.

There are several activities that all systems development processes should have in common. Although they may not necessarily share the same names, these core activities are essential to the development of sound, secure systems. The section “Life Cycle Models” later in this chapter examines two life cycle models and shows how these activities are applied in real-world software engineering environments.

It’s important to note at this point that the terminology used in system development life cycles varies from model to model and from publication to publication. Don’t spend too much time worrying about the exact terms used in this book or any of the other literature you may come across. When taking the CISSP examination, it’s much more important that you have an understanding of how the process works and the fundamental principles underlying the development of secure systems.
That said, as with any rule, there are several exceptions.

Conceptual Definition

The conceptual definition phase of systems development involves creating the basic concept statement for a system. Simply put, it’s a simple statement agreed upon by all interested stakeholders (the developers, customers, and management) that states the purpose of the project as well as the general system requirements. The conceptual definition is a very high-level statement of purpose and should not be longer than one or two paragraphs. If you were reading a detailed summary of the project, you might expect to see the concept statement as an abstract or introduction that enables an outsider to gain a top-level understanding of the project in a short period of time.

It’s very helpful to refer to the concept statement at all phases of the systems development process. Often, the intricate details of the development process tend to obscure the overarching goal of the project. Simply reading the concept statement periodically can assist in refocusing a team of developers.

Functional Requirements Determination

Once all stakeholders have agreed upon the concept statement, it’s time for the development team to sit down and begin the functional requirements process. In this phase, specific system functionalities are listed and developers begin to think about how the parts of the system should interoperate to meet the functional requirements. The deliverable from this phase of development is a functional requirements document that lists the specific system requirements.

As with the concept statement, it’s important to ensure that all stakeholders agree on the functional requirements document before work progresses to the next level.
When it’s finally completed, the document shouldn’t be simply placed on a shelf to gather dust; the entire development team should constantly refer to this document during all phases to ensure that the project is on track. In the final stages of testing and evaluation, the project managers should use this document as a checklist to ensure that all functional requirements are met.

Protection Specifications Development

Security-conscious organizations also ensure that adequate protections are designed into every system from the earliest stages of development. It’s often very useful to have a protection specifications development phase in your life cycle model. This phase takes place soon after the development of functional requirements and often continues as the design and design review phases progress.

During the development of protection specifications, it’s important to analyze the system from a number of security perspectives. First, adequate access controls must be designed into every system to ensure that only authorized users are allowed to access the system and that they are not permitted to exceed their level of authorization. Second, the system must maintain the confidentiality of vital data through the use of appropriate encryption and data protection technologies. Next, the system should provide both an audit trail to enforce individual accountability and a detective mechanism for illegitimate activity. Finally, depending upon the criticality of the system, availability and fault-tolerance issues should be addressed.

Keep in mind that designing security into a system is not a one-shot process and it must be done proactively. All too often, systems are designed without security planning and then developers attempt to retrofit the system with appropriate security mechanisms.
Unfortunately, these mechanisms are an afterthought and do not fully integrate with the system’s design, which leaves gaping security vulnerabilities. Also, the security requirements should be revisited each time a significant change is made to the design specification. If a major component of the system changes, it’s very likely that the security requirements will change as well.

Design Review

Once the functional and protection specifications are complete, let the system designers do their thing! In this often lengthy process, the designers determine exactly how the various parts of the system will interoperate and how the modular system structure will be laid out. Also during this phase, the design management team commonly sets specific tasks for various teams and lays out initial timelines for completion of coding milestones.

After the design team completes the formal design documents, a review meeting with the stakeholders should be held to ensure that everyone’s in agreement that the process is still on track for successful development of a system with the desired functionality.

Code Review Walk-Through

Once the stakeholders have given the software design their blessing, it’s time for the software developers to start writing code. Project managers should schedule several code review walk-through meetings at various milestones throughout the coding process. These technical meetings usually involve only development personnel, who sit down with a copy of the code for a specific module and walk through it, looking for problems in logical flow or other design/security flaws. The meetings play an instrumental role in ensuring that the code produced by the various development teams performs according to specification.

System Test Review

After many code reviews and a lot of long nights, there will come a point at which a developer puts in that final semicolon and declares the system complete.
As any seasoned software engineer \nknows, the system is never complete. Now it’s time to begin the system test review phase. Initially, \nmost organizations perform the initial system tests using development personnel to seek out any \nobvious errors. Once this phase is complete, a series of beta test deployments takes place to ensure \nthat customers agree that the system meets all functional requirements and performs according to \nthe original specification. As with any critical development process, it’s important that you main-\ntain a copy of the written system test plan and test results for future review.\n" }, { "page_number": 282, "text": "Systems Development Controls\n237\nMaintenance\nOnce a system is operational, a variety of maintenance tasks are necessary to ensure continued \noperation in the face of changing operational, data processing, storage, and environmental \nrequirements. It’s essential that you have a skilled support team in place to handle any routine \nor unexpected maintenance. It’s also important that any changes to the code be handled through \na formalized change request/control process, as described in Chapter 5.\nLife Cycle Models\nOne of the major complaints you’ll hear from practitioners of the more established engineering \ndisciplines (such as civil, mechanical, and electrical engineering) is that software engineering is \nnot an engineering discipline at all. In fact, they contend, it’s simply a combination of chaotic \nprocesses that somehow manage to scrape out workable solutions from time to time. Indeed, \nsome of the “software engineering” that takes place in today’s development environments is \nnothing but bootstrap coding held together by “duct tape and chicken wire.”\nHowever, the adoption of more formalized life cycle management processes is being seen in \nmainstream software engineering as the industry matures. 
After all, it’s hardly fair to compare the processes of an age-old discipline such as civil engineering to those of an industry that’s barely a few decades old. In the 1970s and 1980s, pioneers like Winston Royce and Barry Boehm proposed several software development life cycle (SDLC) models to help guide the practice toward formalized processes. In 1991, the Software Engineering Institute introduced the Capability Maturity Model, which described the process organizations undertake as they move toward incorporating solid engineering principles into their software development processes. In this section, we’ll take a look at the work produced by these studies.

Having a management model in place should improve the resultant products. However, if the SDLC methodology is inadequate, the project may fail to meet business and user needs. Thus, it is important to verify that the SDLC model is properly implemented and is appropriate for your environment. Furthermore, one of the initial steps of implementing an SDLC should include management approval.

Waterfall Model
Originally developed by Winston Royce in 1970, the waterfall model seeks to view the systems development life cycle as a series of iterative activities. As shown in Figure 7.4, the traditional waterfall model has seven stages of development. As each stage is completed, the project moves into the next phase. As illustrated by the backward arrows, the modern waterfall model does allow development to return to the previous phase to correct defects discovered during the subsequent phase. This is often known as the feedback loop characteristic of the waterfall model.

The waterfall model was one of the first comprehensive attempts to model the software development process while taking into account the necessity of returning to previous phases to correct system faults. However, one of the major criticisms of this model is that it allows the developers to step back only one phase in the process. It does not make provisions for the later discovery of errors.

FIGURE 7.4 The waterfall life cycle model (stages: System Requirements, Software Requirements, Preliminary Design, Detailed Design, Code and Debug, Testing, Operations and Maintenance)

More recently, the waterfall model has been improved by adding validation and verification steps to each phase. Verification evaluates the product against specifications, while validation evaluates how well the product satisfies real-world requirements. The improved model was labeled the modified waterfall model. However, it did not gain widespread use before the spiral model dominated the project management scene.

Spiral Model
In 1988, Barry Boehm of TRW proposed an alternative life cycle model that allows for multiple iterations of a waterfall-style process. An illustration of this model is shown in Figure 7.5. Because the spiral model encapsulates a number of iterations of another model (the waterfall model), it is known as a metamodel, or a “model of models.”

Notice that each “loop” of the spiral results in the development of a new system prototype (represented by P1, P2, and P3 in the illustration). Theoretically, system developers would apply the entire waterfall process to the development of each prototype, thereby incrementally working toward a mature system that incorporates all of the functional requirements in a fully validated fashion.
Boehm’s spiral model provides a solution to the major criticism of the waterfall model—it allows developers to return to the planning stages as changing technical demands and customer requirements necessitate the evolution of a system.

FIGURE 7.5 The spiral life cycle model (each loop, producing prototypes P1, P2, and P3, passes through four quadrants: determine objectives, alternatives, and constraints; evaluate alternatives and identify and resolve risks; develop and verify the next-level product; and plan the next phases)

Software Capability Maturity Model
The Software Engineering Institute (SEI) at Carnegie Mellon University introduced the Capability Maturity Model for Software (also called the Software Capability Maturity Model; abbreviated SW-CMM, CMM, or SCMM), which contends that all organizations engaged in software development move through a variety of maturity phases in sequential fashion. The SW-CMM describes the principles and practices underlying software process maturity. It is intended to help software organizations improve the maturity and quality of their software processes by implementing an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes. The idea behind the SW-CMM is that the quality of software is dependent on the quality of its development process.

The stages of the SW-CMM are as follows:

Level 1: Initial In this phase, you’ll often find hard-working people charging ahead in a disorganized fashion. There is usually little or no defined software development process.

Level 2: Repeatable In this phase, basic life cycle management processes are introduced. Reuse of code in an organized fashion begins to enter the picture, and repeatable results are expected from similar projects. SEI defines the key process areas for this level as Requirements Management, Software Project Planning, Software Project Tracking and Oversight, Software Subcontract Management, Software Quality Assurance, and Software Configuration Management.

Level 3: Defined In this phase, software developers operate according to a set of formal, documented software development processes. All development projects take place within the constraints of the new standardized management model. SEI defines the key process areas for this level as Organization Process Focus, Organization Process Definition, Training Program, Integrated Software Management, Software Product Engineering, Intergroup Coordination, and Peer Reviews.

Level 4: Managed In this phase, management of the software process proceeds to the next level. Quantitative measures are utilized to gain a detailed understanding of the development process. SEI defines the key process areas for this level as Quantitative Process Management and Software Quality Management.

Level 5: Optimizing In the optimized organization, a process of continuous improvement occurs. Sophisticated software development processes are in place that ensure that feedback from one phase reaches back to the previous phase to improve future results. SEI defines the key process areas for this level as Defect Prevention, Technology Change Management, and Process Change Management.

For more information on the Capability Maturity Model for Software, visit the Software Engineering Institute’s website at www.sei.cmu.edu.

IDEAL Model
The Software Engineering Institute also developed the IDEAL model for software development, which implements many of the CMM attributes.
The IDEAL model, illustrated in Figure 7.6, has five phases:

I: Initiating In the initiating phase of the IDEAL model, the business reasons behind the change are outlined, support is built for the initiative, and the appropriate infrastructure is put in place.

D: Diagnosing During the diagnosing phase, engineers analyze the current state of the organization and make general recommendations for change.

E: Establishing In the establishing phase, the organization takes the general recommendations from the diagnosing phase and develops a specific plan of action that helps achieve those changes.

A: Acting In the acting phase, it’s time to stop “talking the talk” and “walk the walk.” The organization develops solutions and then tests, refines, and implements them.

L: Learning As with any quality improvement process, the organization must continuously analyze its efforts to determine whether it has achieved the desired goals and, when necessary, propose new actions to put the organization back on course.

FIGURE 7.6 The IDEAL model (Initiating: stimulus for change, set context, build sponsorship, charter infrastructure; Diagnosing: characterize current and desired states, develop recommendations; Establishing: set priorities, develop approach, plan actions; Acting: create solution, pilot test solution, refine solution, implement solution; Learning: analyze and validate, propose future actions). Special permission to reproduce “IDEAL Model,” ©2004 by Carnegie Mellon University, is granted by the Carnegie Mellon Software Engineering Institute.

Gantt Charts and PERT
A Gantt chart is a type of bar chart that shows the interrelationships over time between projects and schedules. It provides a graphical illustration of a schedule that helps to plan, coordinate, and track specific tasks in a project. An example of a Gantt chart is shown in Figure 7.7.

FIGURE 7.7 A Gantt chart (a sample schedule plotting five tasks, Do Initial Design through Distribution, as bars across a 19-week timeline)

Program Evaluation Review Technique (PERT) is a project scheduling tool used to judge the size of a software product in development and calculate the standard deviation (SD) for risk assessment. PERT relates the estimated lowest possible size, the most likely size, and the highest possible size of each component. PERT is used to direct improvements to project management and software coding in order to produce more efficient software. As the capabilities of programming and management improve, the actual produced size of software should be smaller.

Change Control and Configuration Management
Once software has been released into a production environment, users will inevitably request the addition of new features, correction of bugs, and other modifications to the code.
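The PERT sizing arithmetic described in the previous section can be illustrated with a short sketch. The weighted-average formula E = (O + 4M + P) / 6 and the spread SD = (P − O) / 6 are the classic three-point PERT formulas; the function name and the sample component sizes (in thousands of lines of code) are invented for illustration:

```python
# Three-point (PERT) estimate for the size of one software component.
# O = lowest possible size, M = most likely size, P = highest possible size.
def pert_estimate(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6  # weighted mean
    std_dev = (pessimistic - optimistic) / 6                     # risk spread
    return expected, std_dev

# Hypothetical component sized in KLOC: at best 8, most likely 10, at worst 18.
expected, std_dev = pert_estimate(8.0, 10.0, 18.0)
print(expected)  # -> 11.0
```

A larger gap between the optimistic and pessimistic figures yields a larger standard deviation, flagging that component as a greater scheduling risk.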
Just as the organization developed a regimented process for developing software, it must also put a procedure in place to manage changes in an organized fashion.

The change control process has three basic components:

Request control The request control process provides an organized framework within which users can request modifications, managers can conduct cost/benefit analysis, and developers can prioritize tasks.

Change control The change control process is used by developers to re-create the situation encountered by the user and analyze the appropriate changes to remedy the situation. It also provides an organized framework within which multiple developers can create and test a solution prior to rolling it out into a production environment. Change control includes conforming to quality control restrictions, developing tools for update or change deployment, properly documenting any coded changes, and restricting the effects of new code to minimize diminishment of security.

SW-CMM and IDEAL Model Memorization
To help you remember the initial letters of each of the 10 level names of the SW-CMM and IDEAL model (II DR ED AM LO), imagine yourself sitting on the couch in a psychiatrist’s office saying, “I … I, Dr. Ed, am lo(w).” If you can remember that phrase, then you can extract the 10 initial letters of the level names. If you write the letters out into two columns, you can reconstruct the level names in order for the two systems. The left column is the IDEAL model and the right represents the levels of the SW-CMM.

IDEAL          SW-CMM
Initiating     Initial
Diagnosing     Repeatable
Establishing   Defined
Acting         Managed
Learning       Optimizing

Release control Once the changes are finalized, they must be approved for release through the release control procedure. An essential step of the release control process is to double-check and ensure that any code inserted as a programming aid during the change process (such as debugging code and/or back doors) is removed before releasing the new software to production. Release control should also include acceptance testing to ensure that any alterations to end-user work tasks are understood and functional.

In addition to the change control process, security administrators should be aware of the importance of configuration management. This process is used to control the version(s) of software used throughout an organization and formally track and control changes to the software configuration. It has four main components:

Configuration identification During the configuration identification process, administrators document the configuration of covered software products throughout the organization.

Configuration control The configuration control process ensures that changes to software versions are made in accordance with the change control and configuration management policies. Updates can be made only from authorized distributions in accordance with those policies.

Configuration status accounting Formalized procedures are used to keep track of all authorized changes that take place.

Configuration audit A periodic configuration audit should be conducted to ensure that the actual production environment is consistent with the accounting records and that no unauthorized configuration changes have taken place.

Together, change control and configuration management techniques form an important part of the software engineer’s arsenal and protect the organization from development-related security issues.

Software Testing
As part of the development process, your organization should thoroughly test any software before distributing it internally (or releasing it to market). The best time to address testing is as the modules are designed.
In other words, the mechanisms you use to test a product and the data sets you use to explore that product should be designed in parallel with the product itself. Your programming team should develop special test suites of data that exercise all paths of the software to the fullest extent possible and know the correct resulting outputs beforehand. This extensive test suite process is known as a reasonableness check. Furthermore, while conducting stress tests, you should check how the product handles normal and valid input data, incorrect types, out-of-range values, and other bounds and/or conditions. Live workloads provide the best stress testing possible. However, you should not use live or actual field data for testing, especially in the early development stages, since a flaw or error could result in the violation of the integrity or confidentiality of the test data.

When testing software, you should apply the same rules of separation of duties that you do for other aspects of your organization. In other words, you should assign the testing of your software to someone other than the programmer(s) to avoid a conflict of interest and assure a more successful finished product. When a third party tests your software, you are assured that the third party performs an objective and nonbiased examination.
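The test-suite guidance above (exercise normal inputs, boundary values, out-of-range values, and incorrect types, with the correct outputs known beforehand) can be sketched as follows. The percent_of() function is a hypothetical function under test, invented for this example:

```python
# Hypothetical function under test: what percentage of `whole` is `part`?
def percent_of(part, whole):
    if not isinstance(part, int) or not isinstance(whole, int):
        raise TypeError("inputs must be integers")
    if whole <= 0 or part < 0 or part > whole:
        raise ValueError("inputs out of range")
    return 100.0 * part / whole

# Normal and boundary inputs, with the expected outputs known beforehand.
assert percent_of(1, 4) == 25.0
assert percent_of(0, 10) == 0.0     # lower bound
assert percent_of(10, 10) == 100.0  # upper bound

# Out-of-range inputs must be rejected, not silently miscomputed.
for bad_part, bad_whole in [(-1, 10), (11, 10), (5, 0)]:
    try:
        percent_of(bad_part, bad_whole)
        raise AssertionError("out-of-range input was accepted")
    except ValueError:
        pass

# Wrong-type input must likewise be rejected.
try:
    percent_of("5", 10)
    raise AssertionError("wrong-type input was accepted")
except TypeError:
    pass

print("all planned test cases passed")
```

Planning each expected result before running the test, as the paragraph above recommends, is what turns this from ad hoc poking into a repeatable test suite.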
The third-party test allows for a broader and more thorough test and prevents the bias and inclinations of the programmers from affecting the results of the test.

You can utilize three testing methods or ideologies for software testing:

White box testing White box testing examines the internal logical structures of a program.

Black box testing Black box testing examines the input and output of a program without focusing on the internal logical structures.

Test data method The test data method examines the extent of the system testing in order to locate untested program logic.

Proper software test implementation is a key element in the project development process. Many of the common mistakes and oversights often found in commercial and in-house software can be eliminated. Keep the test plan and results as part of the system’s permanent documentation.

Security Control Architecture
All secure systems implement some sort of security control architecture. At the hardware and operating system levels, controls should ensure enforcement of basic security principles. The following sections examine several basic control principles that should be enforced in a secure computing environment.

Process Isolation
Process isolation is one of the fundamental security procedures put into place during system design. Basically, using process isolation mechanisms (whether part of the operating system or part of the hardware itself) ensures that each process has its own isolated memory space for storage of data and the actual executing application code itself. This guarantees that processes cannot access each other’s reserved memory areas and protects against confidentiality violations or intentional/unintentional modification of data by an unauthorized process.
Hardware segmentation is a technique that implements process isolation at the hardware level by enforcing memory access constraints.

Protection Rings
The ring-oriented protection scheme provides for several modes of system operation, thereby facilitating secure operation by restricting processes to running in the appropriate security ring. An illustration of the four-layer ring protection scheme supported by Intel microprocessors appears in Figure 7.8.

FIGURE 7.8 Ring protection scheme (concentric rings labeled Level 0, innermost, through Level 3, outermost)

In this scheme, each of the rings has a separate and distinct function:

Level 0 Represents the ring where the operating system itself resides. This ring contains the security kernel—the core set of operating system services that handles all user/application requests for access to system resources. The kernel also implements the reference monitor, an operating system component that validates all user requests for access to resources against an access control scheme. Processes running at Level 0 are often said to be running in supervisory mode, also called privileged mode. Level 0 processes have full control of all system resources, so it’s essential to ensure that they are fully verified and validated before implementation.

Levels 1 and 2 Contain device drivers and other operating system services that provide higher-level interfaces to system resources. However, in practice, most operating systems do not implement either one of these layers.

Level 3 Represents the security layer where user applications and processes reside. This layer is commonly referred to as user mode, or protected mode, and applications running here are not permitted direct access to system resources. In fact, when an application running in protected mode attempts to access an unauthorized resource, the commonly seen General Protection Fault (GPF) occurs.

The security kernel and reference monitor are extremely important computer security topics that must be understood by any information security practitioner.

The reference monitor component (present at Level 0) is an extremely important element of any operating system offering multilevel secure services. This concept was first formally described in the Department of Defense Trusted Computer System Evaluation Criteria (commonly referred to as the “Orange Book” due to the color of its cover). The DoD set forth the following three requirements for an operational reference monitor:

- It must be tamperproof.
- It must always be invoked.
- It must be small enough to be subject to analysis and tests, the completeness of which can be assured.

Abstraction
Abstraction is a valuable tool drawn from the object-oriented software development model that can be extrapolated to apply to the design of all types of information systems. In effect, abstraction states that a thorough understanding of a system’s operational details is not often necessary to perform day-to-day activities. For example, a system developer might need to know that a certain procedure, when invoked, writes information to disk, but it’s not necessary for the developer to understand the underlying principles that enable the data to be written to disk or the exact format that the disk procedures use to store and retrieve data. The process of developing increasingly sophisticated objects that draw upon the abstracted methods of lower-level objects is known as encapsulation.
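In code, abstraction and encapsulation look like this minimal sketch (the RecordStore class and its methods are invented for illustration, not taken from the text): a caller uses the high-level save() interface without knowing how records are encoded or stored.

```python
class RecordStore:
    """High-level interface for storing records; internals stay concealed."""

    def __init__(self):
        self._blocks = []  # leading underscore: internal detail, not interface

    def _encode(self, data):
        # Low-level detail a caller never needs to understand.
        return data.encode("utf-8")

    def save(self, data):
        """Abstract operation: store a record, return its record id."""
        self._blocks.append(self._encode(data))
        return len(self._blocks) - 1

store = RecordStore()
record_id = store.save("quarterly totals")
print(record_id)  # -> 0
```

The higher-level save() draws on the abstracted _encode() method of the lower level; deliberately concealing _encode() and _blocks from callers is the data hiding idea.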
The deliberate concealment of lower levels of functionality from higher-level processes is known as data hiding or information hiding.

Security Modes
In a secure environment, information systems are configured to process information in one of four security modes. These modes are set out by the Department of Defense as follows:

- Systems running in compartmented security mode may process two or more types of compartmented information. All system users must have an appropriate clearance to access all information processed by the system but do not necessarily have a need to know all of the information in the system. Compartments are subcategories within the different classification levels, and extreme care is taken to preserve the information within the different compartments. The system may be classified at the secret level but contain five different compartments, all classified secret. If a user has the need to know about only two of the five different compartments to do their job, that user can access the system but can access only those two compartments.
- Systems running in dedicated security mode are authorized to process only a specific classification level at a time, and all system users must have clearance and a need to know that information.
- Systems running in multilevel security mode are authorized to process information at more than one level of security even when all system users do not have appropriate clearances or a need to know for all information processed by the system.
- Systems running in system-high security mode are authorized to process only information that all system users are cleared to read and have a valid need to know. These systems are not trusted to maintain separation between security levels, and all information processed by these systems must be handled as if it were classified at the same level as the most highly classified information processed by the system.

Service Level Agreements
Using service level agreements (SLAs) is an increasingly popular way to ensure that organizations providing services to internal and/or external customers maintain an appropriate level of service, agreed upon by both the service provider and the customer. It’s a wise move to put SLAs in place for any data circuits, applications, information processing systems, databases, or other critical components that are vital to your organization’s continued viability. The following issues are commonly addressed in SLAs:

- System uptime (as a percentage of overall operating time)
- Maximum consecutive downtime (in seconds/minutes/etc.)
- Peak load
- Average load
- Responsibility for diagnostics
- Failover time (if redundancy is in place)

Service level agreements also commonly include financial and other contractual remedies that kick in if the agreement is not maintained. For example, if a critical circuit is down for more than 15 minutes, the service provider might agree to waive all charges on that circuit for one week.

Summary
As we continue our journey into the Information Age, data is quickly becoming the most valuable resource many organizations possess. Therefore, it’s critical that information security practitioners understand the necessity of safeguarding both the data itself and the systems and applications that assist in the processing of that data.
Protections against malicious code, database vulnerabilities, and system/application development flaws must be implemented in every technology-aware organization.

There are a number of malicious code objects that can pose a threat to the computing resources of organizations. In the nondistributed environment, such threats include viruses, logic bombs, Trojan horses, and worms. Chapter 8 delves more deeply into specific types of malicious code objects, as well as other attacks commonly used by hackers. We’ll also explore some effective defense mechanisms to safeguard your network against their insidious effects.

By this point, you no doubt recognize the importance of placing adequate access controls and audit trails on these valuable information resources. Database security is a rapidly growing field; if databases play a major role in your security duties, take the time to sit down with database administrators, courses, and textbooks and learn the underlying theory. It’s a valuable investment.

Finally, there are various controls that can be put into place during the system and application development process to ensure that the end product of these processes is compatible with operation in a secure environment. Such controls include process isolation, hardware segmentation, abstraction, and service level agreements (SLAs). Security should always be introduced in the early planning phases of any development project and continually monitored throughout the design, development, deployment, and maintenance phases of production.

Exam Essentials

Understand the application threats present in a local/nondistributed environment. Describe the functioning of viruses, worms, Trojan horses, and logic bombs. Understand the impact each type of threat may have on a system and the methods they use to propagate.

Understand the application threats unique to distributed computing environments. Know the basic functioning of agents and the impact they may have on computer/network security. Understand the functionality behind Java applets and ActiveX controls and be able to determine the appropriate applet security levels for a given computing environment.

Explain the basic architecture of a relational database management system (RDBMS). Know the structure of relational databases. Be able to explain the function of tables (relations), rows (records/tuples), and columns (fields/attributes). Know how relationships are defined between tables.

Understand the various types of keys used to identify information stored in a database. You should be familiar with the basic types of keys. Understand that each table has one or more candidate keys, chosen from the table’s column headings, that uniquely identify rows within the table. The database designer selects one candidate key as the primary key for the table. Foreign keys are used to enforce referential integrity between tables participating in a relationship.

Recognize the various common forms of DBMS safeguards. The common DBMS safeguards include concurrency, edit control, semantic integrity mechanisms, use of time and date stamps, granular control of objects, content-dependent access control, context-dependent access control, cell suppression, database partitioning, noise, perturbation, and polyinstantiation.

Explain the database security threats posed by aggregation and inference. Aggregation utilizes specialized database functions to draw conclusions about a large amount of data based on individual records. Access to these functions should be restricted if aggregate information is considered more sensitive than the individual records. Inference occurs when database users can deduce sensitive facts from less-sensitive information.

Know the various types of storage. Explain the differences between primary memory and virtual memory, secondary storage and virtual storage, random access storage and sequential access storage, and volatile storage and nonvolatile storage.

Explain how expert systems function. Expert systems consist of two main components: a knowledge base that contains a series of “if/then” rules and an inference engine that uses that information to draw conclusions about other data.

Describe the functioning of neural networks. Neural networks simulate the functioning of the human mind to a limited extent by arranging a series of layered calculations to solve problems. Neural networks require extensive training on a particular problem before they are able to offer solutions.

Understand the waterfall and spiral models of systems development. Know that the waterfall model describes a sequential development process that results in the development of a finished product. Developers may step back only one phase in the process if errors are discovered. The spiral model uses several iterations of the waterfall model to produce a number of fully specified and tested prototypes.

Explain the ring protection scheme. Understand the four rings of the ring protection scheme and the activities that typically occur within each ring. Know that most operating systems only implement Level 0 (privileged or supervisory mode) and Level 3 (protected or user mode).

Describe the function of the security kernel and reference monitor. The security kernel is the core set of operating system services that handles user requests for access to system resources.
\nThe reference monitor is a portion of the security kernel that validates user requests against the \nsystem’s access control mechanisms.\nUnderstand the importance of testing.\nSoftware testing should be designed as part of the devel-\nopment process. Testing should be used as a management tool to improve the design, develop-\nment, and production processes.\nUnderstand the four security modes approved by the Department of Defense.\nKnow the dif-\nferences between compartmented security mode, dedicated security mode, multilevel security \nmode, and system-high security mode. Understand the different types of classified information \nthat can be processed in each mode and the types of users that can access each system.\nWritten Lab\nAnswer the following questions about data and application security issues.\n1.\nHow does a worm travel from system to system?\n2.\nDescribe three benefits of using applets instead of server-side code for web applications.\n3.\nWhat are the three requirements set for an operational reference monitor in a secure com-\nputing system?\n4.\nWhat operating systems are capable of processing ActiveX controls posted on a website?\n5.\nWhat type of key is selected by the database developer to uniquely identify data within a \nrelational database table?\n6.\nWhat database security technique appears to permit the insertion of multiple rows sharing \nthe same uniquely identifying information?\n7.\nWhat type of storage is commonly referred to as a RAM disk?\n8.\nHow far backward does the waterfall model allow developers to travel when a develop-\nment flaw is discovered?\n" }, { "page_number": 295, "text": "250\nChapter 7\n\u0002 Data and Application Security Issues\nReview Questions\n1.\nWhich one of the following malicious code objects might be inserted in an application by a dis-\ngruntled software developer with the purpose of destroying system data upon the deletion of the \ndeveloper’s account (presumably following their termination)?\nA. 
Virus\nB. Worm\nC. Trojan horse\nD. Logic bomb\n2.\nWhat term is used to describe code objects that act on behalf of a user while operating in an unat-\ntended manner?\nA. Agent\nB. Worm\nC. Applet\nD. Browser\n3.\nWhich form of DBMS primarily supports the establishment of one-to-many relationships?\nA. Relational\nB. Hierarchical\nC. Mandatory\nD. Distributed\n4.\nWhich of the following characteristics can be used to differentiate worms from viruses?\nA. Worms infect a system by overwriting data in the Master Boot Record of a storage device.\nB. Worms always spread from system to system without user intervention.\nC. Worms always carry a malicious payload that impacts infected systems.\nD. All of the above.\n5.\nWhat programming language(s) can be used to develop ActiveX controls for use on an Internet site?\nA. Visual Basic\nB. C\nC. Java\nD. All of the above\n6.\nWhat form of access control is concerned with the data stored by a field rather than any other issue?\nA. Content-dependent\nB. Context-dependent\nC. Semantic integrity mechanisms\nD. Perturbation\n" }, { "page_number": 296, "text": "Review Questions\n251\n7.\nWhich one of the following key types is used to enforce referential integrity between database tables?\nA. Candidate key\nB. Primary key\nC. Foreign key\nD. Super key\n8.\nRichard believes that a database user is misusing his privileges to gain information about the \ncompany’s overall business trends by issuing queries that combine data from a large number of \nrecords. What process is the database user taking advantage of?\nA. Inference\nB. Contamination\nC. Polyinstantiation\nD. Aggregation\n9.\nWhat database technique can be used to prevent unauthorized users from determining classified \ninformation by noticing the absence of information normally available to them?\nA. Inference\nB. Manipulation\nC. Polyinstantiation\nD. Aggregation\n10. 
Which one of the following terms cannot be used to describe the main RAM of a typical 
computer system?
A. Nonvolatile
B. Sequential access
C. Real memory
D. Primary memory
11. What type of information is used to form the basis of an expert system’s decision-making process?
A. A series of weighted layered computations
B. Combined input from a number of human experts, weighted according to past performance
C. A series of “if/then” rules codified in a knowledge base
D. A biological decision-making process that simulates the reasoning process used by the 
human mind
12. Which one of the following intrusion detection systems makes use of an expert system to detect 
anomalous user activity?
A. PIX
B. IDIOT
C. AAFID
D. NIDES
Change control\n" }, { "page_number": 298, "text": "Review Questions\n253\n19. What transaction management principle ensures that two transactions do not interfere with each \nother as they operate on the same data?\nA. Atomicity\nB. Consistency\nC. Isolation\nD. Durability\n20. Which subset of the Structured Query Language is used to create and modify the database schema?\nA. Data Definition Language\nB. Data Structure Language\nC. Database Schema Language\nD. Database Manipulation Language\n" }, { "page_number": 299, "text": "254\nChapter 7\n\u0002 Data and Application Security Issues\nAnswers to Review Questions\n1.\nD. Logic bombs are malicious code objects programmed to lie dormant until certain logical con-\nditions, such as a certain date, time, system event, or other criteria, are met. At that time, they \nspring into action, triggering their malicious payload.\n2.\nA. Intelligent agents are code objects programmed to perform certain operations on behalf of a \nuser in their absence. They are also often referred to as bots.\n3.\nB. Hierarchical DBMS supports one-to-many relationships. Relational DBMS supports one-to-\none. Distributed DBMS supports many-to-many. Mandatory is not a DBMS but an access con-\ntrol model.\n4.\nB. The major difference between viruses and worms is that worms are self-replicating whereas \nviruses require user intervention to spread from system to system. Infection of the Master Boot \nRecord is a characteristic of a subclass of viruses known as MBR viruses. Both viruses and \nworms are capable of carrying malicious payloads.\n5.\nD. Microsoft’s ActiveX technology supports a number of programming languages, including \nVisual Basic, C, C++, and Java. On the other hand, only the Java language may be used to write \nJava applets.\n6.\nA. Content-dependent access control is focused on the internal data of each field.\n7.\nC. 
Foreign keys are used to enforce referential integrity constraints between tables that partici-\npate in a relationship.\n8.\nD. In this case, the process the database user is taking advantage of is aggregation. Aggregation \nattacks involve the use of specialized database functions to combine information from a large \nnumber of database records to reveal information that may be more sensitive than the informa-\ntion in individual records would reveal.\n9.\nC. Polyinstantiation allows the insertion of multiple records that appear to have the same pri-\nmary key values into a database at different classification levels.\n10. B. Random access memory (RAM) allows for the direct addressing of any point within the \nresource. A sequential access storage medium, such as a magnetic tape, requires scanning \nthrough the entire media from the beginning to reach a specific address.\n11. C. Expert systems utilize a knowledge base consisting of a series of “if/then” statements to form \ndecisions based upon the previous experience of human experts.\n12. D. The Next-Generation Intrusion Detection Expert System (NIDES) system is an expert sys-\ntem-based intrusion detection system. PIX is a firewall, and IDIOT and AAFID are intrusion \ndetection systems that do not utilize expert systems.\n13. B. ODBC acts as a proxy between applications and the back-end DBMS.\n14. D. The spiral model allows developers to repeat iterations of another life cycle model (such as \nthe waterfall model) to produce a number of fully tested prototypes.\n" }, { "page_number": 300, "text": "Answers to Review Questions\n255\n15. A. The security kernel and reference monitor reside at Level 0 in the ring protection scheme, \nwhere they have unrestricted access to all system resources.\n16. C. Contamination is the mixing of data from a higher classification level and/or need-to-know \nrequirement with data from a lower classification level and/or need-to-know requirement.\n17.\nC. 
Of the languages listed, VBScript is the least prone to modification by third parties because 
it is an interpreted language whereas the other three languages (C++, Java, and 
FORTRAN) are compiled languages.
18. C. Configuration audit is part of the configuration management process rather than the change 
control process.
19. C. The isolation principle states that two transactions operating on the same data must be 
temporally separated from each other such that one does not interfere with the other.
20. A. The Data Definition Language (DDL) is used to create and modify a relational 
database’s schema.
" }, { "page_number": 301, "text": "256
Chapter 7
\u0002 Data and Application Security Issues
Answers to Written Lab
Following are answers to the questions in this chapter’s written lab:
1.
Worms travel from system to system under their own power by exploiting flaws in 
networking software.
2.
The processing burden is shifted from the server to the client, allowing the web server to 
handle a greater number of simultaneous requests. The client uses local resources to process 
the data, usually resulting in a quicker response. 
The privacy of client data is protected \nbecause information does not need to be transmitted to the web server.\n3.\nIt must be tamperproof, it must always be invoked, and it must be small enough to be sub-\nject to analysis and tests, the completeness of which can be assured.\n4.\nMicrosoft Windows platforms only.\n5.\nPrimary key.\n6.\nPolyinstantiation.\n7.\nVirtual storage.\n8.\nOne phase.\n" }, { "page_number": 302, "text": "Chapter\n8\nMalicious Code and \nApplication Attacks\nTHE CISSP EXAM TOPICS COVERED IN THIS \nCHAPTER INCLUDE:\n\u0001 Malicious Code\n\u0001 Methods of Attack\n" }, { "page_number": 303, "text": "In previous chapters, you learned about many general security \nprinciples and the policy and procedure mechanisms that help \nsecurity practitioners develop adequate protection against mali-\ncious individuals. This chapter takes an in-depth look at some of the specific threats faced on \na daily basis by administrators in the field.\nThis material is not only critical for the CISSP exam, it’s also some of the most basic infor-\nmation a computer security professional must understand to effectively practice their trade. \nWe’ll begin this chapter by looking at the risks posed by malicious code objects—viruses, \nworms, logic bombs, and Trojan horses. We’ll then take a look at some of the other security \nexploits used by someone attempting to gain unauthorized access to a system or to prevent legit-\nimate users from gaining such access.\nMalicious Code\nMalicious code objects include a broad range of programmed computer security threats that \nexploit various network, operating system, software, and physical security vulnerabilities to \nspread malicious payloads to computer systems. Some malicious code objects, such as computer \nviruses and Trojan horses, depend upon irresponsible computer use by human beings to spread \nfrom system to system with any success. 
Other objects, such as worms, spread rapidly among \nvulnerable systems under their own power.\nAll computer security practitioners must be familiar with the risks posed by the various types \nof malicious code objects so they can develop adequate countermeasures to protect the systems \nunder their care as well as implement appropriate responses if their systems are compromised.\nSources\nWhere does malicious code come from? In the early days of computer security, malicious code \nwriters were extremely skilled (albeit misguided) software developers who took pride in care-\nfully crafting innovative malicious code techniques. Indeed, they actually served a somewhat \nuseful function by exposing security holes in popular software packages and operating systems, \nraising the security awareness of the computing community. For an example of this type of code \nwriter, see the sidebar in this chapter entitled “RTM and the Internet Worm.”\nModern times have given rise to the script kiddie—the malicious individual who doesn’t \nunderstand the technology behind security vulnerabilities but downloads ready-to-use software \n(or scripts) from the Internet and uses them to launch attacks against remote systems. This trend \n" }, { "page_number": 304, "text": "Malicious Code\n259\ngave birth to a new breed of virus creation software that allows anyone with a minimal level of \ntechnical expertise to create a virus and unleash it upon the Internet. This is reflected in the large \nnumber of viruses documented by antivirus authorities to date. These amateur malicious code \ndevelopers are usually just experimenting with the new tool they downloaded or attempting to \ncause problems for one or two enemies. Unfortunately, these objects sometimes spread rapidly \nand cause problems for Internet users in general.\nViruses\nThe computer virus is perhaps the earliest form of malicious code to plague security adminis-\ntrators. 
Indeed, viruses are so prevalent nowadays that major outbreaks receive attention from 
the mass media and provoke mild hysteria among average computer users. According to 
Symantec, one of the major antivirus software vendors, there were approximately 65,000 strains of 
viruses roaming the global network in early 2004. Hundreds of thousands of variations of these 
viruses strike unsuspecting computer users each day. Many carry malicious payloads that cause 
damage ranging in scope from displaying a profane message on the screen all the way to causing 
complete destruction of all data stored on the local hard drive.
As with biological viruses, computer viruses have two main functions—propagation and 
destruction. Miscreants who create viruses carefully design code to implement these functions 
in new and innovative methods that they hope escape detection and bypass increasingly 
sophisticated antivirus technology. It’s fair to say that an arms race has developed between virus 
writers and antivirus technicians, each hoping to develop technology one step ahead of the other. 
The propagation function defines how the virus will spread from system to system, infecting 
each machine it leaves in its wake. A virus’s payload delivers the destructive power by 
implementing whatever malicious activity the virus writer had in mind.
Virus Propagation Techniques
By definition, a virus must contain technology that enables it to spread from system to system, 
sometimes aided by unsuspecting computer users seeking to share data by exchanging disks, 
sharing networked resources, sending electronic mail, or using some other means. Once it has 
“touched” a new system, a virus uses one of several propagation techniques to infect the new 
victim and expand its reach. 
In the following sections, we’ll look at three common propagation tech-\nniques: Master Boot Record infection, file infection, and macro infection.\nMaster Boot Record (MBR) Viruses\nThe Master Boot Record (MBR) virus is one of the earliest known forms of virus infection. \nThese viruses attack the MBR, the portion of a hard drive or floppy disk that the computer uses \nto load the operating system during the boot process. Because the MBR is extremely small (usu-\nally 512 bytes), it can’t contain all of the code required to implement the virus’s propagation and \ndestructive functions. To bypass this space limitation, MBR viruses store the majority of their \ncode on another portion of the storage media. When the system reads the infected MBR, the \nvirus instructs it to read and execute the code stored in this alternate location, thereby loading \nthe entire virus into memory and potentially triggering the delivery of the virus’s payload.\n" }, { "page_number": 305, "text": "260\nChapter 8\n\u0002 Malicious Code and Application Attacks\nMost MBR viruses are spread between systems through the use of an infected floppy disk \ninadvertently shared between users. If the infected disk is in the drive during the boot process, \nthe target system reads the floppy’s infected MBR and the virus loads into memory, infects the \nMBR on the target system’s hard drive, and spreads its infection to yet another machine.\nFile Infector Viruses\nMany viruses infect different types of executable files and trigger when the operating system \nattempts to execute them. For Windows-based systems, these files end with .EXE and .COM \nextensions. The propagation routines of file infector viruses may slightly alter the code of an \nexecutable program, therefore implanting the technology the virus needs to replicate and dam-\nage the system. In some cases, the virus might actually replace the entire file with an infected ver-\nsion. 
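A modified executable can often be caught by recording its characteristics (size, modification date, and a hash of its contents) while it is known to be clean and comparing them again later. The following is a minimal sketch of that idea, assuming Python; the function names and file layout are invented for illustration, not taken from any product.

```python
import hashlib
import os

def fingerprint(path):
    """Record the characteristics an integrity check compares: the file's
    size, its modification time, and a cryptographic hash of its contents."""
    with open(path, "rb") as f:
        data = f.read()
    stat = os.stat(path)
    return {
        "size": stat.st_size,
        "mtime": stat.st_mtime,
        "sha256": hashlib.sha256(data).hexdigest(),
    }

def is_modified(path, baseline):
    """True if the file no longer matches the baseline taken while clean."""
    return fingerprint(path) != baseline
```

Even an infection that preserves the file's length and timestamps would still change the hash, which is why hash comparison is stronger than checking size and modification date alone.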
Standard file infector viruses that do not use cloaking techniques like stealth or encryption \n(see the section titled “Virus Technologies” later in this chapter) are often easily detected by \ncomparing file characteristics (such as size and modification date) before and after infection or \nby comparing hash values. The section titled “Antivirus Mechanisms” provides technical details \nbehind these techniques.\nA variation of the file infector virus is the companion virus. These viruses are self-contained \nexecutable files that escape detection by using a filename similar to, but slightly different from, \na legitimate operating system file. They rely on the default extensions that DOS-based operating \nsystems append to commands when executing program files (.COM, .EXE, and .BAT, in that \norder). For example, if you had a program on your hard disk named GAME.EXE, a companion \nvirus might use the name GAME.COM. If you then open up a DOS prompt and simply type GAME, \nthe operating system would execute the virus file, GAME.COM, instead of the file you actually \nintended to execute, GAME.EXE. This is a very good reason to avoid shortcuts and fully specify \nthe name of the file you want to execute when working at the DOS prompt.\nThe Boot Sector and the Master Boot Record\nYou’ll often see the terms boot sector and Master Boot Record used interchangeably to \ndescribe the portion of a storage device used to load the operating system and the types of \nviruses that attack that process. This is not technically correct. The MBR is a single disk sector, \nnormally the first sector of the media that is read in the initial stages of the boot process. The \nMBR determines which media partition contains the operating system and then directs the sys-\ntem to read that partition’s boot sector to load the operating system.\nViruses can attack both the MBR and the boot sector, with substantially similar results. 
MBR \nviruses act by redirecting the system to an infected boot sector, which loads the virus into \nmemory before loading the operating system from the legitimate boot sector. Boot sector \nviruses actually infect the legitimate boot sector and are loaded into memory during the oper-\nating system load process.\n" }, { "page_number": 306, "text": "Malicious Code\n261\nMacro Viruses\nMany common software applications implement some sort of scripting functionality to assist \nwith the automation of repetitive tasks. These functionalities often use simple, yet powerful, \nprogramming languages like Visual Basic for Applications (VBA). Although macros do indeed \noffer great productivity-enhancing opportunities to computer users, they also expose systems to \nyet another avenue of infection—macro viruses.\nMacro viruses first appeared on the scene in the mid-1990s, utilizing crude technologies to \ninfect documents created in the popular Microsoft Word environment. Although they were rel-\natively unsophisticated, these viruses spread rapidly because the antivirus community didn’t \nanticipate them and, therefore, antivirus applications didn’t provide any defense against them. \nMacro viruses quickly became more and more commonplace, and vendors rushed to modify \ntheir antivirus platforms to scan application documents for malicious macros. In 1999, the Mel-\nissa virus spread through the use of a Word document that exploited a security vulnerability in \nMicrosoft Outlook to replicate. 
The infamous I Love You virus quickly followed on its heels, \nexploiting similar vulnerabilities in early 2000.\nMacro viruses proliferate because of the ease of writing code in the scripting \nlanguages (such as VBA) utilized by modern productivity applications.\nAlthough the vast majority of macro viruses infect documents created by applications \nbelonging to the Microsoft Office suite (including Word, Excel, PowerPoint, Access, and Out-\nlook), users of other applications are not immune. Viruses exist that infect Lotus, AmiPro, \nWordPerfect, and more.\nPlatforms\nJust as most macro viruses infect systems running the popular Microsoft Office suite of appli-\ncations, most computer viruses are designed to disrupt activity on systems running versions of \nthe world’s most popular operating system—Microsoft Windows. It’s estimated that less than \none percent of the viruses in the wild today are designed to impact other operating systems, such \nas Unix and MacOS. This may be the result of two influencing factors.\nFirst, there really is no “Unix” operating system. Rather, there is a series of many similar \noperating systems that implement the same functions in a similar fashion and that are indepen-\ndently designed by a large number of developers. Large-scale corporate efforts, like Sun’s Solaris \nand SCO Unix, compete with the myriad of freely available versions of the Linux operating sys-\ntem developed by the public at large. The sheer number of Unix versions and the fact that they \nare developed on entirely different kernels (the core code of an operating system) make it diffi-\ncult to write a virus that would impact a large portion of Unix systems.\nSecond, according to a National Computer Security Association (NCSA) Virus Prevalence \nStudy, 80 percent of all viruses are macro viruses, all but a slim percentage of which target \nMicrosoft Office applications. 
There simply isn’t a software package for non-Windows plat-\nforms that is anywhere near as prevalent as Office is among PC users, making it difficult to \ndevelop effective macro viruses for non-Windows platforms.\n" }, { "page_number": 307, "text": "262\nChapter 8\n\u0002 Malicious Code and Application Attacks\nThat said, Macintosh and Unix users should not rest on their laurels. The fact that there are \nonly a few viruses out there that pose a risk to their system does not mean that one of those \nviruses couldn’t affect their system at any moment. Anyone responsible for the security of a \ncomputer system should implement adequate antivirus mechanisms to ensure the continued \nsafety of their resources.\nAntivirus Mechanisms\nAlmost every desktop computer in service today runs some sort of antivirus software package. \nPopular desktop titles include McAfee VirusScan and Norton AntiVirus, but there are a pleth-\nora of other products on the market today offering protection for anything from a single system \nto an entire enterprise, as well as packages designed to protect against specific common types \nof virus invasion vectors, such as inbound e-mail.\nThe vast majority of these packages utilize a method known as signature-based detection to \nidentify potential virus infections on a system. Essentially, an antivirus package maintains an \nextremely large database that contains the telltale characteristics of all known viruses. Depend-\ning upon the antivirus package and configuration settings, it scans storage media periodically, \nchecking for any files that contain data matching those criteria. 
If any are detected, the antivirus \npackage takes one of the following actions:\n\u0002\nIf the software can eradicate the virus, it disinfects the affected files and restores the \nmachine to a safe condition.\n\u0002\nIf the software recognizes the virus but doesn’t know how to disinfect the files, it may quar-\nantine the files until the user or an administrator can examine them manually.\n\u0002\nIf security settings/policies do not provide for quarantine or the files exceed a predefined \ndanger threshold, the antivirus package may delete the infected files in an attempt to pre-\nserve system integrity.\nWhen using a signature-based antivirus package, it’s essential to remember that the package \nis only as effective as the virus definition file it’s based upon. If you don’t frequently update your \nvirus definitions (usually requiring an annual subscription fee), your antivirus software will not \nbe able to detect newly created viruses. With thousands of viruses appearing on the Internet \neach year, an outdated definition file will quickly render your defenses ineffective.\nMost of the modern antivirus software products are able to detect, remove, and clean a sys-\ntem for a wide variety of types of malicious code. In other words, antivirus solutions are rarely \nlimited to just viruses. These tools are often able to provide protection against worms, Trojan \nhorses, logic bombs, and various other forms of e-mail or Web-borne code. In the event that you \nsuspect new malicious code is sweeping the Internet, your best course of action is to contact \nyour antivirus software vendor to inquire about your state of protection against the new threat. \nDon't wait until the next scheduled or automated signature dictionary update. Furthermore, \nnever accept the word of any third party about protection status offered by an antivirus solu-\ntion. Always contact the vendor directly. 
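The signature lookup at the core of this scanning process can be sketched briefly. The signature data below is invented, and a whole-file hash stands in for the byte-pattern matching that real antivirus engines perform; this is an illustration of the concept, not any vendor's actual method.

```python
import hashlib
from pathlib import Path

# Hypothetical signature database mapping a known-bad content hash to a
# virus name. Real engines match byte patterns and heuristics, not just
# whole-file hashes; this sketch only illustrates the lookup step.
SIGNATURES = {
    hashlib.sha256(b"EVIL-PAYLOAD").hexdigest(): "Example.TestVirus",
}

def scan_file(path):
    """Return the name of the matching signature, or None if none matches."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return SIGNATURES.get(digest)

def scan_media(root):
    """Scan every regular file under root, as a periodic media scan would."""
    hits = {}
    for p in Path(root).rglob("*"):
        if p.is_file():
            match = scan_file(p)
            if match:
                hits[str(p)] = match
    return hits
```

The sketch also shows why an out-of-date signature database is so dangerous: a file whose hash is absent from SIGNATURES is reported clean no matter what it contains.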
Most responsible antivirus vendors will send alerts to \ntheir customers as soon as new, substantial threats are identified, so be sure to register for such \nnotifications as well.\n" }, { "page_number": 308, "text": "Malicious Code\n263\nOther security packages, such as the popular Tripwire data integrity assurance package, also \nprovide a secondary antivirus functionality. Tripwire is designed to alert administrators of \nunauthorized file modifications. It’s often used to detect web server defacements and similar \nattacks, but it also may provide some warning of virus infections if critical system executable \nfiles, such as COMMAND.COM, are modified unexpectedly. These systems work by maintaining a \ndatabase of hash values for all files stored on the system (see Chapter 9, “Cryptography and Pri-\nvate Key Algorithms,” for a full discussion of the hash functions used to create these values). \nThese archived hash values are then compared to current computed values to detect any files \nthat were modified between the two periods.\nVirus Technologies\nAs virus detection and eradication technology rises to meet new threats programmed by malicious \ndevelopers, new kinds of viruses designed to defeat those systems emerge. The following sections \nexamine four specific types of viruses that use sneaky techniques in an attempt to escape detec-\ntion—multipartite viruses, stealth viruses, polymorphic viruses, and encrypted viruses.\nMultipartite Viruses\nMultipartite viruses use more than one propagation technique in an attempt to penetrate sys-\ntems that defend against only one method or the other. For example, the Marzia virus discov-\nered in 1993 infects critical .COM and .EXE files, most notably the COMMAND.COM system file, by \nadding 2,048 bytes of malicious code to each file. This characteristic qualifies it as a file infector \nvirus. 
In addition, two hours after it infects a system, it writes malicious code to the system’s \nMaster Boot Record, qualifying it as a boot sector virus.\nStealth Viruses\nStealth viruses hide themselves by actually tampering with the operating system to fool antivirus \npackages into thinking that everything is functioning normally. For example, a stealth boot sec-\ntor virus might overwrite the system’s Master Boot Record with malicious code but then also \nmodify the operating system’s file access functionality to cover its tracks. When the antivirus \npackage requests a copy of the MBR, the modified operating system code provides it with \nexactly what the antivirus package expects to see—a clean version of the MBR free of any virus \nsignatures. However, when the system boots, it reads the infected MBR and loads the virus into \nmemory.\nPolymorphic Viruses\nPolymorphic viruses actually modify their own code as they travel from system to system. The \nvirus’s propagation and destruction techniques remain exactly the same, but the signature of the \nvirus is somewhat different each time it infects a new system. It is the hope of polymorphic virus \ncreators that this constantly changing signature will render signature-based antivirus packages \nuseless. However, antivirus vendors have “cracked the code” of many polymorphism tech-\nniques and current versions of antivirus software are able to detect known polymorphic viruses. \nThe only concern that remains is that it takes vendors longer to generate the necessary signature \nfiles to stop a polymorphic virus in its tracks, resulting in a lengthened period that the virus can \nrun free on the Internet.\n" }, { "page_number": 309, "text": "264\nChapter 8\n\u0002 Malicious Code and Application Attacks\nEncrypted Viruses\nEncrypted viruses use cryptographic techniques, such as those described in Chapter 9, to avoid \ndetection. 
In their outward appearance, they are actually quite similar to polymorphic viruses—\neach infected system has a virus with a different signature. However, they do not generate these \nmodified signatures by changing their code; instead, they alter the way they are stored on the disk. \nEncrypted viruses use a very short segment of code, known as the virus decryption routine, that \ncontains the cryptographic information necessary to load and decrypt the main virus code stored \nelsewhere on the disk. Each infection utilizes a different cryptographic key, causing the main code \nto appear completely different on each system. However, the virus decryption routines often con-\ntain telltale signatures that render them vulnerable to updated antivirus software packages.\nHoaxes\nNo discussion of viruses is complete without mentioning the nuisance and wasted resources \ncaused by virus hoaxes. Almost every e-mail user has, at one time or another, received a message \nforwarded by a friend or relative that warns of the latest virus threat to roam the Internet. \nInvariably, this purported “virus” is the most destructive virus ever unleashed and no antivirus \npackage is able to detect and/or eradicate it. One famous example of such a hoax is the Good \nTimes virus warning that first surfaced on the Internet in 1994 and still circulates today.\nFor more information on this topic, the renowned virus hoax expert Rob Rosenberger edits a web-\nsite that contains a comprehensive repository of virus hoaxes. You can find it at www.vmyths.com.\nLogic Bombs\nAs you learned in Chapter 7, logic bombs are malicious code objects that infect a system and lie dor-\nmant until they are triggered by the occurrence of one or more conditions such as time, program \nlaunch, website logon, and so on. The vast majority of logic bombs are programmed into custom-built \napplications by software developers seeking to ensure that their work is destroyed if they unexpectedly \nleave the company. 
The previous chapter provided several examples of this type of logic bomb.

However, it's important to remember that, like any malicious code object, logic bombs come in many shapes and sizes. Indeed, many viruses and Trojan horses contain a logic bomb component. The famous Michelangelo virus caused a media frenzy when it was discovered in 1991 due to the logic bomb trigger it contained. The virus infects a system's Master Boot Record through the sharing of infected floppy disks and then hides itself until March 6—the birthday of the famous Italian artist Michelangelo Buonarroti. On that date, it springs into action, reformatting the hard drives of infected systems and destroying all of the data they contain.

Trojan Horses

System administrators constantly warn computer users not to download and install software from the Internet unless they are absolutely sure it comes from a trusted source. In fact, many companies strictly prohibit the installation of any software not prescreened by the IT department. These policies serve to minimize the risk that an organization's network will be compromised by a Trojan horse—a software program that appears benevolent but carries a malicious, behind-the-scenes payload that has the potential to wreak havoc on a system or network.

Trojans differ very widely in functionality. Some will destroy all of the data stored on a system in an attempt to cause a large amount of damage in as short a time frame as possible. Some are fairly innocuous. For example, a series of Trojans appeared on the Internet in mid-2002 that claimed to provide PC users with the ability to run games designed for the Microsoft Xbox gaming system on their computers. When users ran the program, it simply didn't work. However, it also inserted a value into the Windows Registry that caused a specific web page to open each time the computer booted.
The Trojan creators hoped to cash in on the advertising revenue generated by the large number of page views their website received from the Xbox Trojan horses. Unfortunately for them, antivirus experts quickly discovered their true intentions and the website was shut down.

Back Orifice is a well-known Trojan horse that affects various versions of the Windows operating system. To install Back Orifice on the systems of unsuspecting users, malicious individuals place it within the installation package for legitimate software. When a victim installs the legitimate software, they unknowingly install Back Orifice at the same time. The package then runs in the background and gives the miscreant the ability to remotely access the target computer and gain administrative access.

Worms

Worms pose an unparalleled risk to network security. They contain the same destructive potential as other malicious code objects with an added twist—they propagate themselves without requiring any human intervention.

The Internet Worm was the first major computer security incident to occur on the Internet. Since that time, hundreds of new worms (with thousands of variant strains) have unleashed their destructive power on the Internet.

The Code Red worm received a good deal of media attention in the summer of 2001 when it rapidly spread among web servers running unpatched versions of Microsoft's Internet Information Server (IIS). Code Red performed three malicious actions on the systems it penetrated:

• It randomly selected hundreds of IP addresses and then probed those hosts to see if they were running a vulnerable version of IIS. Any systems it found were quickly compromised.
\nThis greatly magnified Code Red’s reach as each host it infected sought many new targets.\n\u0002\nIt defaced HTML pages on the local web server, replacing normal content with the text\n Welcome to http://www.worm.com!\n Hacked By Chinese!\n\u0002\nIt planted a logic bomb that would initiate a denial of service (DoS) attack against the IP \naddress 198.137.240.91, which at that time belonged to the web server hosting the White \nHouse’s home page. Quick-thinking government web administrators changed the White House’s \nIP address before the attack actually began.\nThe destructive power of the Internet Worm, Code Red, and their many variants poses an \nextreme risk to the modern Internet. This presents a strong argument that system administrators \nsimply must ensure that they apply appropriate security patches to their Internet-connected sys-\ntems as software vendors release them. A security fix for IIS vulnerability exploited by Code Red \nwas available from Microsoft over a month before the worm attacked the Internet. Had security \nadministrators applied it promptly, Code Red would have been a miserable failure.\n" }, { "page_number": 311, "text": "266\nChapter 8\n\u0002 Malicious Code and Application Attacks\nRTM and the Internet Worm\nIn November 1988, a young computer science student named Robert Tappan Morris brought \nthe fledgling Internet to its knees with a few lines of computer code. A malicious worm he \nclaimed to have created as an experiment and accidentally released onto the Internet spread \nquickly and crashed a large number of systems.\nThis worm spread by exploiting four specific security holes in the Unix operating system:\nSendmail debug mode\nThen-current versions of the popular sendmail software package \nused to route electronic mail messages across the Internet contained a security vulnerability. 
\nThis vulnerability allowed the worm to spread itself by sending a specially crafted e-mail mes-\nsage that contained the worm’s code to the sendmail program on a remote system. When the \nremote system processed the message, it became infected.\nPassword attack\nThe worm also used a dictionary attack to attempt to gain access to remote \nsystems by utilizing the username and password of a valid system user (you’ll find more on dic-\ntionary attacks later in this chapter).\nFinger vulnerability\nThe popular Internet utility finger allowed users to determine who was \nlogged on to a remote system. Then-current versions of the finger software contained a buffer \noverflow vulnerability that allowed the worm to spread (there is a detailed discussion of buffer \noverflows later in this chapter). The finger program has since been removed from most \nInternet-connected systems.\nTrust relationships\nAfter the worm infected a system, it analyzed any existing trust relation-\nships with other systems on the network and attempted to spread itself to those systems \nthrough the trusted path.\nThis multipronged approach made the Internet Worm extremely dangerous. Fortunately, the \n(then-small) computer security community quickly put together a crack team of investigators \nwho disarmed the worm and patched the affected systems. Their efforts were facilitated by \nseveral inefficient routines in the worm’s code that limited the rate of its spread.\nDue to the lack of experience among law enforcement authorities and the court system in deal-\ning with computer crimes, Morris received only a slap on the wrist for his transgression. He was \nsentenced to three years’ probation, 400 hours of community service, and a $10,000 fine under \nthe Computer Fraud and Abuse Act of 1986. 
Ironically, Morris's father, Robert Morris, was serving as director of the National Security Agency's (NSA's) National Computer Security Center (NCSC) at the time of the incident.

Active Content

The increasing demand of web users for more and more dynamic content on the sites they visit has created a dilemma for web administrators. Delivery of this dynamic content requires the use of web applications that can place an enormous computational burden on the server, and increased demand for them requires commitment of a large number of resources.

In an effort to solve this problem, software developers created the concept of active content, web programs that are downloaded to users' own computers for execution rather than consuming server-side resources. These programs, utilizing technologies like Java applets and ActiveX controls, greatly reduce the load on the server and client waiting time. Most web browsers allow users to choose to have the active content automatically downloaded, installed, and executed from trusted sites.

Unfortunately, this very technology can pose a major threat to client systems. Unsuspecting users may download active content from an untrusted source and allow it to execute on their systems, creating a significant security vulnerability. This vulnerability led to the creation of a whole new type of malicious code—the hostile applet. Like other forms of malware, hostile applets have a variety of intentions, from causing a denial of service attack that merely consumes system resources to more insidious goals, such as theft of data.

Countermeasures

The primary means of defense against malicious code is the use of antivirus filtering software. These packages are primarily signature-based systems, designed to detect known viruses running on a system.
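At its core, signature-based detection is a search for known byte patterns in files or memory. A toy sketch of the idea follows; the signature names and byte patterns here are entirely made up for illustration, and real antivirus engines use far more sophisticated matching against enormous signature databases:

```python
# Toy signature scanner: flags any data containing a known byte pattern.
# These "signatures" are invented for illustration only.
SIGNATURES = {
    "example test pattern": b"FAKE-MALWARE-MARKER",
    "toy boot-sector marker": b"\xde\xad\xbe\xef",
}

def scan(data: bytes) -> list:
    """Return the names of all signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

clean = b"just an ordinary document"
infected = b"header..." + b"\xde\xad\xbe\xef" + b"...payload"

print(scan(clean))     # []
print(scan(infected))  # ['toy boot-sector marker']
```

Because the scanner can only match patterns it already knows, any malicious code without a recorded signature passes through undetected, which is exactly the limitation the following paragraphs discuss.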
It's wise to consider implementing antivirus filters in at least three key areas:

Client systems
Every workstation on a network should have updated antivirus software searching the local file system for malicious code.

Server systems
Servers should have similar protections. This is even more critical than protecting client systems because a single virus on a common server could quickly spread throughout an entire network.

Content filters
The majority of viruses today are exchanged over the Internet. It's a wise move to implement on your network content filtering that scans inbound and outbound electronic mail and web traffic for signs of malicious code.

Removal is often possible within hours after new malicious code is discovered. Removal removes the malicious code but does not repair the damage caused by it. Cleaning capabilities are usually made available within a few days after a new malicious code is discovered. Cleaning not only removes the code, it also repairs any damage it causes.

Remember, most antivirus filters are signature based. Therefore, they're only as good as the most recent update to their virus definition files. It's critical that you update these files frequently, especially when a new piece of high-profile malicious code appears on the Internet.

Signature-based filters rely upon the descriptions of known viruses provided by software developers. Therefore, there is a period of time between when any given virus first appears "in the wild" and when updated filters are made available.
There are two solutions to this problem commonly used today:

• Integrity checking software, such as Tripwire (an open-source version is available at www.tripwire.org), scans your file system for unexpected modifications and reports to you on a periodic basis.

• Access controls should be strictly maintained and enforced to limit the ability of malicious code to damage your data and spread on your network.

There are two additional techniques used specifically to prevent systems from being infected by malicious code embedded in active content:

• Java's sandbox provides applets with an isolated environment in which they can run safely without gaining access to critical system resources.

• ActiveX control signing utilizes a system of digital signatures to ensure that the code originates from a trusted source. It is up to the end user to determine whether the authenticated source should be trusted.

For an in-depth explanation of digital signature technology, see Chapter 10, "PKI and Cryptographic Applications."

These techniques provide added protection against hostile applets. Most content filtering solutions also scan active content for malicious code as well.

Password Attacks

One of the simplest techniques hackers use to gain illegitimate access to a system is to learn the username and password of an authorized system user. Once they've gained access as a regular user, they have a foothold into the system. At that point, they can use other techniques, including automated rootkit packages, to gain increased levels of access to the system (see the section "Rootkits" later in this chapter).
They may also use the compromised system as a jumping-off point for attacks on other, more attractive targets on the same network.

The following sections examine three methods hackers use to learn the passwords of legitimate users and access a system: password guessing attacks, dictionary attacks, and social engineering attacks. Many of these attacks rely upon weak password storage mechanisms. For example, many Unix operating systems store encrypted versions of a user's password in the /etc/passwd file.

Password Guessing

In the most basic type of password attack, hackers simply attempt to guess a user's password. No matter how much security education users receive, they often use extremely weak passwords. If hackers are able to obtain a list of authorized system users, they can often quickly figure out the correct usernames. (On most networks, usernames consist of the first initial of the user's first name followed by a portion of their last name.) With this information, they can begin making some educated guesses about the user's password. The most commonly used password is some form of the user's last name, first name, or username. For example, the user mchapple might use the weak password elppahcm because it's easy to remember. Unfortunately, it's also easy to guess.

If that attempt fails, hackers turn to widely available lists of the most common passwords on the Internet. Some of these are shown in the sidebar "Most Common Passwords."

Finally, a little knowledge about a person can provide extremely good clues to their password. Many people use the name of a spouse, child, family pet, relative, or favorite entertainer. Common passwords also include birthdays, anniversaries, Social Security numbers, phone numbers, and (believe it or not!)
ATM PINs.

Dictionary Attacks

As mentioned previously, many Unix systems store encrypted versions of user passwords in an /etc/passwd file accessible to all system users. To provide some level of security, the file doesn't contain the actual user passwords; it contains an encrypted value obtained from a one-way encryption function (see Chapter 9 for a discussion of encryption functions). When a user attempts to log on to the system, access verification routines use the same encryption function to encrypt the password entered by the user and then compare it with the encrypted version of the actual password stored in the /etc/passwd file. If the values match, the user is allowed access.

Most Common Passwords

Hackers often use the Internet to distribute lists of commonly used passwords based on data gathered during system compromises. Many of these are no great surprise. Here are just a very few of the 815 passwords contained in a hacker list retrieved from the Internet in July 2002:

password, secret, sex, money, love, computer, football, hello, morning, ibm, work, office, online, terminal, internet

Along with these common words, the password list contained over 300 first names, 70 percent of which were female names.

Password hackers use automated tools like the Crack program to run automated dictionary attacks that exploit a simple vulnerability in this mechanism. They take a large dictionary file that contains thousands of words and then run the encryption function against all of those words to obtain their encrypted equivalents. Crack then searches the password file for any encrypted values for which there is a match in the encrypted dictionary.
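The precompute-and-compare process just described can be sketched in a few lines. The usernames, passwords, and hash function below are stand-ins for illustration; classic Unix systems used the crypt() function rather than SHA-256:

```python
import hashlib

def one_way(password: str) -> str:
    # Stand-in for the Unix one-way encryption function described above.
    return hashlib.sha256(password.encode()).hexdigest()

# Precompute the encrypted form of every dictionary word once...
dictionary = ["password", "secret", "football", "letmein"]
encrypted_dictionary = {one_way(word): word for word in dictionary}

# ...then compare against a stolen password file (contents invented).
password_file = {
    "mchapple": one_way("football"),
    "estewart": one_way("zX9!qTr4"),  # a strong password: no dictionary match
}

for username, stored in password_file.items():
    if stored in encrypted_dictionary:
        print(username, encrypted_dictionary[stored])  # mchapple football
```

Note that the attacker never reverses the one-way function; the weakness lies entirely in users choosing passwords that appear in the dictionary.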
When a match is found, it reports the username and password (in plain text) and the hacker gains access to the system.

It sounds like simple security mechanisms and education would prevent users from using passwords that are easily guessed by Crack, but the tool is surprisingly effective at compromising live systems. As new versions of Crack are released, more advanced features are introduced to defeat common techniques used by users to defeat password complexity rules. Some of these are included in the following list:

• Rearranging the letters of a dictionary word

• Appending a number to a dictionary word

• Replacing each occurrence of the letter O in a dictionary word with the number 0 (or the letter l with the number 1)

• Combining two dictionary words in some form

Social Engineering

Social engineering is one of the most effective tools hackers use to gain access to a system. In its most basic form, a social engineering attack consists of simply calling the user and asking for their password, posing as a technical support representative or other authority figure who needs the information immediately. Fortunately, most contemporary computer users are aware of these scams, and the effectiveness of simply asking a user for a password is somewhat diminished today.

However, social engineering still poses a significant threat to the security of passwords (and networks in general). Hackers can often obtain sensitive personal information by "chatting up" computer users, office gossips, and administrative personnel. This information can provide excellent ammunition when mounting a password guessing attack. Furthermore, hackers can sometimes obtain sensitive network topology or configuration data that is useful when planning other types of electronic attacks against an organization.

Countermeasures

The cornerstone of any security program is education.
Security personnel should continually remind users of the importance of choosing a secure password and keeping it secret. Users should receive training when they first enter an organization, and they should receive periodic refresher training, even if it's just an e-mail from the administrator reminding them of the threats.

Provide users with the knowledge they need to create secure passwords. Tell them about the techniques hackers use when guessing passwords and give them advice on how to create a strong password. One of the most effective password techniques is to use a mnemonic device such as thinking of an easy-to-remember sentence and creating a password out of the first letter of each word. For example, "My son Richard likes to eat 4 pies" would become MsRlte4p—an extremely strong password.

One of the most common mistakes made by overzealous security administrators is to create a series of strong passwords and then assign them to users (who are then prevented from changing their password). At first glance, this seems to be a sound security policy. However, the first thing a user will do when they receive a password like 1mf0A8flt is write it down on a Post-It note and stick it under the computer keyboard. Whoops! Security just went out the window (or under the keyboard)!

If your network includes Unix operating systems that implement the /etc/passwd file, consider using some other access verification mechanism to increase security. One popular technique available in many versions of Unix and Linux is the use of a shadow password file, /etc/shadow. This file contains the true encrypted passwords of each user, but it is not accessible to anyone but the administrator.
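The first-letter mnemonic technique described above is mechanical enough to sketch in code, using the book's own example sentence:

```python
def mnemonic_password(sentence: str) -> str:
    """Build a password from the first character of each word in a sentence."""
    return "".join(word[0] for word in sentence.split())

print(mnemonic_password("My son Richard likes to eat 4 pies"))  # MsRlte4p
```

The resulting string looks random to an attacker but remains easy for the user to reconstruct from the memorized sentence.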
The publicly accessible /etc/passwd file then simply contains a list of usernames without the data necessary to mount a dictionary attack.

Denial of Service Attacks

As you learned in Chapter 2, malicious individuals often use denial of service (DoS) attacks in an attempt to prevent legitimate users from accessing resources. This is often a "last ditch" effort when a hacker realizes that they can't penetrate a system—"If I can't have it, then nobody can." In the following sections, we'll take a look at several specific denial of service attacks and the mechanisms they use to disable computing systems. In some of these attacks, a brute force attack is used, simply overwhelming a targeted system with so many requests that it can't possibly sort out the legitimate ones from those that are part of the attack. Others include elegantly crafted commands that cause vulnerable systems to crash or hang indefinitely.

SYN Flood

Recall from Chapter 2 that the TCP/IP protocol utilizes a three-way handshaking process to set up connections between two hosts. In a typical connection, the originating host sends a single packet with the SYN flag enabled, attempting to open one side of the communications channel. The destination host receives this packet and sends a reply with the ACK flag enabled (confirming that the first side of the channel is open) and the SYN flag enabled (attempting to open the reverse channel). Finally, the originating host transmits a packet with the ACK flag enabled, confirming that the reverse channel is open and the connection is established. If, for some reason, the process is not completed, the communicating hosts leave the connection in a half-open state for a predetermined period of time before aborting the attempt.
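The half-open bookkeeping just described can be modeled in a few lines. The capacity and timeout values below are arbitrary illustrations, far smaller than a real TCP stack's, chosen to make the exhaustion effect visible:

```python
import time

class HalfOpenTable:
    """Toy model of a TCP stack's half-open (SYN-received) connection table."""

    def __init__(self, capacity=4, timeout=30.0):
        self.capacity = capacity
        self.timeout = timeout
        self.pending = {}  # (src_ip, src_port) -> time the SYN was received

    def on_syn(self, src, now=None):
        now = time.monotonic() if now is None else now
        # Abort handshakes that were never completed within the timeout.
        self.pending = {k: t for k, t in self.pending.items()
                        if now - t < self.timeout}
        if len(self.pending) >= self.capacity:
            return False           # table full: connection request ignored
        self.pending[src] = now    # reserve state and (conceptually) send SYN/ACK
        return True

    def on_ack(self, src):
        # The final ACK completes the handshake and frees the half-open slot.
        return self.pending.pop(src, None) is not None

table = HalfOpenTable(capacity=4)
# A burst of spoofed SYNs that never complete the handshake...
for port in range(4):
    table.on_syn(("10.0.0.99", port), now=0.0)
# ...leaves no room for a legitimate client until the entries time out.
print(table.on_syn(("192.0.2.7", 1234), now=1.0))   # False
print(table.on_syn(("192.0.2.7", 1234), now=60.0))  # True
```

This is precisely the resource a SYN flood exhausts: every spoofed SYN occupies a slot for the full timeout period while the legitimate traffic is turned away.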
The standard handshaking process is illustrated in Figure 8.1.

In a SYN flood attack, hackers use special software that sends a large number of fake packets with the SYN flag set to the targeted system. The victim then reserves space in memory for the connection and attempts to send the standard SYN/ACK reply but never hears back from the originator. This process repeats hundreds or even thousands of times, and the targeted computer eventually becomes overwhelmed and runs out of available resources for the half-opened connections. At that time, it either crashes or simply ignores all inbound connection requests because it can't possibly handle any more half-open connections. This prevents everyone—both hackers and legitimate users—from connecting to the machine and results in an extremely effective denial of service attack. The SYN flood modified handshaking process is shown in Figure 8.2.

The SYN flood attack crippled many computing systems in the late 1990s and the year 2000. Web servers were especially vulnerable to this type of attack. Fortunately, modern firewalls contain specialized technology designed to prevent successful SYN flood attacks in the future. For example, Check Point Software's popular Firewall-1 package contains the SYNDefender functionality that acts as a proxy for SYN requests and shelters the destination system from any barrage of requests.

Figure 8.1  Standard TCP/IP three-way handshaking

Figure 8.2  SYN flood modified handshaking process

Distributed DoS Toolkits

Distributed denial of service (DDoS) attacks allow hackers to harness the power of many third-party systems to attack the ultimate target. In many DDoS attacks, a hacker will first use some other technique to compromise a large number of systems.
They then install on those compromised systems software that enables them to participate in the main attack, effectively enlisting those machines into an army of attackers.

Trinoo and the Tribal Flood Network (TFN) are two commonly used DDoS toolkits. Hackers compromise third-party systems and install Trinoo/TFN clients that lie dormant waiting for instructions to begin an attack. When the hacker is satisfied that enough clients are lying in wait, they use a Trinoo/TFN master server to "wake up" the clients and initiate a coordinated attack against a single destination system or network from many directions. The current versions of Trinoo and TFN allow the master server to initiate many common DoS attacks, including SYN floods and Smurf attacks, from the third-party client machines.

Distributed denial of service attacks using these toolkits pose extreme risk to Internet-connected systems and are very difficult to defend against. In February 2000, hackers launched a week-long DDoS campaign against a number of high-profile websites, including those of Yahoo!, CNN, and Amazon.com. The attacks rendered these sites virtually inaccessible to legitimate users for an extended period of time. In fact, many security practitioners consider DDoS attacks the single greatest threat facing the Internet today.

Smurf

The Smurf attack takes the distributed denial of service attack to the next level by harnessing the power of many unwitting third-party hosts to attack a system. Attacks that are like Smurf and are amplified using third-party networks are known as distributed reflective denial of service (DRDoS) attacks.

The Smurf DRDoS attack in particular exploits a vulnerability in the implementation of the Internet Control Message Protocol (ICMP)'s ping functionality.
The intended use of ping allows users to send single "Are you there?" packets to other systems. If the system is alive and responding, it sends back a single "Yes, I am" packet. It offers a very efficient way to check network connectivity and diagnose potential networking issues. The typical exchange involves only two packets transiting the network and consumes minimal computer/network resources.

In a Smurf attack, the originating system creates a false ping packet that appears to be from the target of the attack. The destination of the packet is the broadcast address of the third-party network. Therefore, each machine on the third-party network receives a copy of the ping request. According to the request they received, the originator is the victim system, and each machine on the network sends a "Yes, I'm alive" packet to the victim. The originator repeats this process by rapidly sending a large number of these requests through different intermediary networks, and the victim quickly becomes overwhelmed by the number of requests. The Smurf attack data flow is illustrated in Figure 8.3. A similar attack, the Fraggle attack, works in the same manner as Smurf but uses User Datagram Protocol (UDP) instead of ICMP.

Prevention of Smurf attacks depends upon the use of responsible filtering rules by networks across the entire Internet. System administrators should set rules at the router and/or firewall that prohibit inbound ping packets sent to a broadcast address (or perhaps even prohibit inbound pings entirely!). Furthermore, administrators should use egress filtering—a technique that prohibits systems on a network from transmitting packets with IP addresses that do not belong to the network.
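Egress filtering boils down to a membership test on each outbound packet's source address. A minimal sketch, using documentation-range example addresses rather than any real network:

```python
from ipaddress import ip_address, ip_network

# An edge router for this (example) network should refuse to forward any
# outbound packet whose source address falls outside it.
LOCAL_NET = ip_network("192.0.2.0/24")

def egress_permitted(source_ip: str) -> bool:
    """Allow an outbound packet only if its source address is local."""
    return ip_address(source_ip) in LOCAL_NET

print(egress_permitted("192.0.2.45"))    # True: legitimate local source
print(egress_permitted("203.0.113.10"))  # False: spoofed source, dropped
```

A packet claiming a foreign source address could only be spoofed, so dropping it at the border costs legitimate traffic nothing.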
This prevents a network from being utilized by malicious individuals seeking to initiate a Smurf attack or any type of masquerading attack aimed at a remote network (see the section "Masquerading Attacks" for more information on this topic).

Figure 8.3  Smurf attack data flow

Fraggle

Fraggle is another distributed reflective denial of service (DRDoS) attack that works in a manner very similar to that of Smurf attacks. However, rather than using ICMP packets, Fraggle takes advantage of the uncommonly used chargen and echo UDP services. An easy way to prevent Fraggle attacks on your network is to disable these services. It's more than likely that you'll never have a legitimate use for them.

Teardrop

The teardrop attack is a member of a subclass of DoS attacks known as fragmentation attacks, which exploit vulnerabilities in the fragment reassembly functionality of the TCP/IP protocol stack. System administrators can configure the maximum size allowed for TCP/IP packets that traverse each network that carries them. They usually choose this value based upon the available hardware, quality of service, and typical network traffic parameters to maximize network efficiency and throughput.

When a network receives a packet larger than its maximum allowable packet size, it breaks it up into two or more fragments. These fragments are each assigned a size (corresponding to the length of the fragment) and an offset (corresponding to the starting location of the fragment). For example, if a packet is 250 bytes long and the maximum packet size for the network is 100 bytes, it will require fragmentation.
In a correctly functioning TCP/IP stack, the packet would be broken up into three fragments, as shown in Figure 8.4.

In the teardrop attack, hackers use software that sends out packet fragments that don't conform to the protocol specification. Specifically, they send two or more overlapping fragments. This process is illustrated in Figure 8.5. The malicious individual might send out fragment 1, a perfectly normal packet fragment of length 100. Under normal conditions, this fragment would be followed by a second fragment with offset 100 (correlating to the length of the first fragment). However, in the teardrop attack, the hacker sends a second fragment with an offset value that is too low, placing the second fragment right in the middle of the first fragment. When the receiving system attempts to reassemble the fragmented packet, it doesn't know how to properly handle the overlapping fragments and freezes or crashes.

As with many of the attacks described in this book, the teardrop attack is a well-known exploit, and most operating system vendors have released security patches that prevent this type of attack from crippling updated systems. However, attacks like teardrop continue to cause damage on a daily basis due to the neglect of system administrators who fail to apply appropriate patches, leaving their systems vulnerable to attack.

Figure 8.4  Standard packet fragmentation (a 250-byte packet split into fragments of length 100, 100, and 50 at offsets 0, 100, and 200)

Figure 8.5  Teardrop attack (a second fragment whose offset, 50, falls inside the first fragment)

Land

The Land denial of service attack causes many older operating systems (such as Windows NT 4, Windows 95, and SunOS 4.1.4) to freeze and behave in an unpredictable manner. It works by creating an artificial TCP packet that has the SYN flag set. The attacker sets the destination IP address to the address of the victim machine and the destination port to an open port on that machine. Next, the attacker sets the source IP address and source port to the same values as the destination IP address and port. When the targeted host receives this unusual packet, the operating system doesn't know how to process it and freezes, crashes, or behaves in an unusual manner as a result.

DNS Poisoning

Another DoS attack, DNS poisoning, works without ever touching the targeted host. Instead, it exploits vulnerabilities in the Domain Name System (DNS) protocol and attempts to redirect traffic to an alternative server without the knowledge of the targeted victim.

Consider an example—suppose a hacker wants to redirect all legitimate traffic headed for www.whitehouse.gov to an alternative site, say www.youvebeenhacked.com. We can assume that the White House site, as a frequent target of hackers, is highly secure. Instead of attempting to directly penetrate that site, the hacker might try to insert into the DNS system false data that provides the IP address of www.youvebeenhacked.com when users query for the IP address of www.whitehouse.gov.

How can this happen?
When you create a domain name, you use one of several domain name registrars that serve as central clearinghouses for DNS registrations. If a hacker is able to gain access to your registrar account (or the registrar's infrastructure itself), they might be able to alter your DNS records without your knowledge. In the early days of DNS, authentication was weak and users could change DNS information by simply sending an unauthenticated e-mail message. Fortunately, registrars have since implemented more secure authentication techniques that use cryptographic technology to verify user identities.

Note: DNS authentication techniques will protect you only if you use them! Ensure that you've enabled all of the security features offered by your registrar. Also, when an administrator leaves your organization, remember to change the passwords for any accounts used to manage DNS information. DNS poisoning is an easy way for a disgruntled former employee to get revenge!

Ping of Death

The final denial of service attack we'll examine is the infamous ping of death attack that plagued systems in the mid-1990s. This exploit is actually quite simple. According to the ICMP specification, the largest permissible ICMP packet is 65,536 bytes. However, many early operating system developers simply relied upon the assumption that the protocol stacks of sending machines would never exceed this value and did not build in error-handling routines to monitor for packets that exceeded this maximum.

Hackers seeking to exploit the ping of death vulnerability simply use a packet generation program to create a ping packet destined for the victim host with a size of at least 65,537 bytes. If the victim's operating system doesn't check the length of the packet and attempts to process it, unpredictable results occur.
Some operating systems may hang or crash.

After this exploit was discovered, operating system manufacturers quickly updated their ICMP algorithms to prevent future occurrences. However, machines running older versions of certain operating systems may still be vulnerable to this attack. Some notable versions include Windows 3.11 and MacOS 7, along with unpatched versions of Windows 95, Windows NT 4, and Solaris 2.4–2.5.1. If you're running any of those operating systems on your network, update them to the appropriate patch level or version to protect yourself against this exploit.

Application Attacks

In Chapter 7, you learned about the importance of utilizing solid software engineering processes when developing operating systems and applications. In the following sections, we'll take a brief look at some of the specific techniques hackers use to exploit vulnerabilities left behind by sloppy coding practices.

Buffer Overflows

When creating software, developers must pay special attention to variables that allow user input. Many programming languages do not enforce size limits on variables intrinsically—they rely on the programmer to perform this bounds checking in the code. This is an inherent vulnerability because many programmers feel that parameter checking is an unnecessary burden that slows down the development process.
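In practice, the checking being described is only a few lines of code. Here is a minimal Python sketch of parameter validation against a fixed-size buffer; the buffer size and the accepted value set are hypothetical, chosen only for illustration:

```python
BUFFER_SIZE = 5            # hypothetical size of the buffer that will hold the input
ALLOWED = {"yes", "no"}    # hypothetical set of values the program expects

def validate(raw: str) -> str:
    """Reject input that is too long or outside the expected parameters."""
    if len(raw) > BUFFER_SIZE:
        raise ValueError("input longer than the receiving buffer")
    if raw.lower() not in ALLOWED:
        raise ValueError("input outside expected parameters")
    return raw.lower()

print(validate("Yes"))   # -> yes
```

Calling `validate("Maybe")` raises an error instead of letting an unexpected value flow deeper into the program, which is precisely the cheap defensive habit many developers skip.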
As a security practitioner, it's your responsibility to ensure that developers in your organization are aware of the risks posed by buffer overflow vulnerabilities and take appropriate measures to protect their code against this type of attack.

Any time a program variable allows user input, the programmer should take steps to ensure that each of the following conditions is met:

- The user can't enter a value longer than the size of any buffer that will hold it (e.g., a 10-letter word into a 5-letter string variable).
- The user can't enter an invalid value for the variable types that will hold it (e.g., a character into a numeric variable).
- The user can't enter a value that will cause the program to operate outside of its specified parameters (e.g., answer a "Yes or No" question with "Maybe").

Failure to perform simple checks to make sure these conditions are met can result in a buffer overflow vulnerability that may cause the system to crash or even allow the user to execute shell commands and gain access to the system. Buffer overflow vulnerabilities are especially prevalent in code developed rapidly for the Web using CGI or other languages that allow unskilled programmers to quickly create interactive web pages.

Time-of-Check-to-Time-of-Use

The time-of-check-to-time-of-use (TOCTTOU or TOC/TOU) issue is a timing vulnerability that occurs when a program checks access permissions too far in advance of a resource request. For example, if an operating system builds a comprehensive list of access permissions for a user upon logon and then consults that list throughout the logon session, a TOCTTOU vulnerability exists. If the system administrator revokes a particular permission, that restriction would not be applied to the user until the next time they log on.
If the user is logged on when the access revocation takes place, they will have access to the resource indefinitely. The user simply needs to leave the session open for days and the new restrictions will never be applied.

Trap Doors

Trap doors are undocumented command sequences that allow software developers to bypass normal access restrictions. They are often used during the development and debugging process to speed up the workflow and avoid forcing developers to continuously authenticate to the system. Occasionally, developers leave these trap doors in the system after it reaches a production state, either by accident or so they can "take a peek" at their system when it is processing sensitive data to which they should not have access.

Obviously, the undocumented nature of trap doors makes them a significant threat to the security of any system that contains them, especially when they are forgotten. If a developer leaves the firm, they could later use the trap door to access the system and retrieve confidential information or participate in industrial sabotage.

Rootkits

Rootkits are specialized software packages that have only one purpose—to allow hackers to gain expanded access to a system. Rootkits are freely available on the Internet and exploit known vulnerabilities in various operating systems. Hackers often obtain access to a standard system user account through the use of a password attack or social engineering and then use a rootkit to increase their access to the root (or administrator) level.

There is one simple measure administrators can take to protect their systems against the vast majority of rootkit attacks—and it's nothing new. Administrators must keep themselves informed about new security patches released for operating systems used in their environment and apply these corrective measures consistently.
This straightforward step will fortify a network against almost all rootkit attacks as well as a large number of other potential vulnerabilities.

Reconnaissance Attacks

As with any attacking force, hackers require solid intelligence to effectively focus their efforts against the targets most likely to yield the best results. To assist with this targeting, hacker tool developers have created a number of automated tools that perform network reconnaissance. In the following sections, we'll examine three of those automated techniques—IP probes, port scans, and vulnerability scans—and then look at how these techniques can be supplemented by the more physically intensive dumpster-diving technique.

IP Probes

IP probes (also called IP sweeps) are often the first type of network reconnaissance carried out against a targeted network. With this technique, automated tools simply attempt to ping each address in a range. Systems that respond to the ping request are logged for further analysis. Addresses that do not produce a response are assumed to be unused and are ignored.

IP probes are extremely prevalent on the Internet today. Indeed, if you configure a system with a public IP address and connect it to the Internet, you'll probably receive at least one IP probe within hours of booting up. The widespread use of this technique makes a strong case for disabling ping functionality, at least for users external to a network.

Port Scans

After a hacker performs an IP probe, they are left with a list of active systems on a given network. The next task is to select one or more systems to target with additional attacks.
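At bottom, this kind of probing is nothing more than repeated connection attempts. A minimal Python sketch using only the standard socket library illustrates the idea; the target address and ports are placeholders, and such probes should only ever be run against networks you are authorized to test:

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Attempt a TCP connection; success means a service is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder targets: a real sweep would first iterate over an address range
# to find live hosts, then probe interesting ports (e.g., 80 for HTTP) on each.
for port in (22, 80, 443):
    print(port, tcp_port_open("127.0.0.1", port))
```

Defenders can use the same building block in reverse, periodically checking their own hosts to confirm that nothing unexpected is listening.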
Often, hackers have a type of target in mind—web servers, file servers, or other critical operations are prime targets.

To narrow down their search, hackers use port scan software to probe all of the active systems on a network and determine what public services are running on each machine. For example, if the hacker wants to target a web server, they might run a port scan to locate any systems with a service running on port 80, the default port for HTTP services.

Vulnerability Scans

The third technique is the vulnerability scan. Once the hacker determines a specific system to target, they need to discover in that system a specific vulnerability that can be exploited to gain the desired access permissions. A variety of tools available on the Internet assist with this task. Two of the more popular ones are the Satan and Saint vulnerability scanners. These packages contain a database of known vulnerabilities and probe targeted systems to locate security flaws. They then produce very attractive reports that detail every vulnerability detected. From that point, it's simply a matter of locating a script that exploits a specific vulnerability and launching an attack against the victim.

It's important to note that vulnerability scanners are highly automated tools. They can be pointed at a single system that earlier IP probes and port scans identified as a promising victim, but it's just as likely that an intruder will run a vulnerability scanner against an entire network to probe for any weakness that could be exploited.

Once again, simply updating operating systems to the most recent security patch level can repair almost every weakness reported by a vulnerability scanner.
Furthermore, wise system administrators learn to think like the enemy—they download and run these vulnerability scanners against their own networks (with the permission of upper management) to see what security holes might be pointed out to a potential hacker. This allows them to quickly focus their resources on fortifying the weakest points on their networks.

Dumpster Diving

Every organization generates trash—often significant amounts on a daily basis. Have you ever taken the time to sort through your trash and look at the sensitivity of the materials that hit the recycle bin? Give it a try—the results may frighten you. When you're analyzing the working papers thrown away each day, look at them from a hacker's perspective. What type of intelligence could you glean from them that might help you launch an attack? Is there sensitive data about network configurations or installed software versions? A list of employees' birthdays from a particular department that might be used in a social engineering attack? A policy manual that contains detailed procedures on the creation of new accounts? Discarded floppy disks or other storage media?

Dumpster diving is one of the oldest hacker tools in the book and it's still used today. The best defense against these attacks is quite simple—make them more difficult. Purchase shredders for key departments and encourage employees to use them. Keep the trash locked up in a secure area until the garbage collectors arrive. A little common sense goes a long way in this area.

Masquerading Attacks

One of the easiest ways to gain access to resources you're not otherwise entitled to use is to impersonate someone who does have the appropriate access permissions.
In the offline world, teenagers often borrow the driver's license of an older sibling to purchase alcohol—the same thing happens in the computer security world. Hackers borrow the identities of legitimate users and systems to gain the trust of third parties. In the following sections, we'll take a look at two common masquerading attacks—IP spoofing and session hijacking.

IP Spoofing

In an IP spoofing attack, the malicious individual simply reconfigures their system so that it has the IP address of a trusted system and then attempts to gain access to other external resources. This is surprisingly effective on many networks that don't have adequate filters installed to prevent this type of traffic from occurring. System administrators should configure filters at the perimeter of each network to ensure that packets meet at least the following criteria:

- Packets with internal source IP addresses don't enter the network from the outside.
- Packets with external source IP addresses don't exit the network from the inside.
- Packets with private IP addresses don't pass through the router in either direction (unless specifically allowed as part of an intranet configuration).

These three simple filtering rules can eliminate the vast majority of IP spoofing attacks and greatly enhance the security of a network.

Session Hijacking

Session hijacking attacks occur when a malicious individual intercepts part of the communication between an authorized user and a resource and then uses a hijacking technique to take over the session and assume the identity of the authorized user.
The following list includes some common techniques:

- Capturing details of the authentication between a client and server and using those details to assume the client's identity
- Tricking the client into thinking the hacker's system is the server, acting as the middleman as the client sets up a legitimate connection with the server, and then disconnecting the client
- Accessing a web application using the cookie data of a user who did not properly close the connection

All of these techniques can have disastrous results for the end user and must be addressed with both administrative controls (such as anti-replay authentication techniques) and application controls (such as expiring cookies within a reasonable period of time).

Decoy Techniques

Hackers aren't the only ones with tricks up their sleeves—security administrators have also mastered sleight-of-hand tricks and use them to lure hackers into a false sense of security. After they've had the opportunity to observe hackers and trace their actions back to the source, they send law enforcement or other authorities to swoop in and stop the malicious activity cold. In the following sections, we'll examine two such techniques used by creative system administrators: honey pots and pseudo-flaws.

Honey Pots

Administrators often create honey pot systems that appear to be extremely lucrative hacker targets. They may contain files that appear to be sensitive and/or valuable or run false services (like a web server) that appear to be critical to an organization's operations. In reality, these systems are nothing but decoys set up to lure hackers away from truly critical resources and allow administrators to monitor and trace their activities.

Pseudo-Flaws

Pseudo-flaws are false vulnerabilities or apparent loopholes intentionally implanted into a system in an attempt to detect hackers.
They are often used on honey-pot systems and on critical resources to emulate well-known operating system vulnerabilities. Hackers seeking to exploit a known flaw might stumble across a pseudo-flaw and think that they have successfully penetrated a system. More sophisticated pseudo-flaw mechanisms actually simulate the penetration and convince the hacker that they have gained additional access privileges to a system. However, while the hacker is exploring the bounds of these newfound rights, monitoring and alerting mechanisms trigger in the background to alert administrators to the threat and increase the defensive posture surrounding critical network resources.

Summary

Throughout history, criminals have always been extremely creative. No matter what security mechanisms have been put in place to deter them, criminals have found methods to bypass them and reach their ultimate goals. This is no less true in the realm of computer security than in any other aspect of criminal psychology. Hackers use a number of automated tools to perform network reconnaissance so they can focus their efforts on the targets most likely to yield the best results. This chapter examined the major categories of attack in the hacker's arsenal: malicious code, password attacks, denial of service attacks, application attacks, reconnaissance techniques (such as IP probes and port scans), masquerading attacks, and the decoy techniques defenders use in response.

By no means was this a comprehensive look at all possible hacking methods—that would be an impossible task. New tools and techniques appear in the hacking subculture almost on a daily basis.
However, you should now have a good feeling for the types of weapons hackers have at their disposal as well as some of the best defense mechanisms security administrators can use to fortify their protected systems and networks against hacker intrusions.

Remember the following key actions you can take to increase your security posture:

- Use strong passwords.
- Update operating systems and applications with security patches as they are released by vendors.
- Use common-sense filtering techniques to ensure that traffic on your network is what it appears to be.

Pay particular attention to the technical details of the attacks presented in this chapter. Be familiar with the technology underlying each attack and be prepared to identify them in a multiple-choice format. Just as important, understand the countermeasures system administrators can apply to prevent each one of those attacks from occurring on protected networks.

Exam Essentials

Understand the propagation techniques used by viruses. Viruses use three main propagation techniques—file infection, boot sector infection, and macro infection—to penetrate systems and spread their malicious payloads.

Know how antivirus software packages detect known viruses. Most antivirus programs use signature-based detection algorithms to look for telltale patterns of known viruses. This makes it essential to periodically update virus definition files in order to maintain protection against newly authored viruses as they emerge.

Be able to explain the techniques viruses use to escape detection. Viruses use polymorphism and encryption to avoid leaving behind signature footprints. Multipartite viruses use more than one propagation technique to infiltrate systems.
Stealth viruses alter operating systems to trick antivirus packages into thinking everything is normal.

Understand the basic principles behind logic bombs, Trojan horses, and worms. Logic bombs remain dormant until one or more conditions are met. At that time, they trigger their malicious payload. Trojan horses penetrate systems by masquerading as a benevolent program while unleashing their payload in the background. Worms spread from system to system under their own power, potentially consuming massive amounts of resources.

Be familiar with common password attacks and understand how to develop strong passwords. Hackers attempting to gain access to a system use straightforward guessing in combination with dictionary attacks and social engineering techniques to learn user passwords. System administrators should implement security education programs and operating system controls to ensure that users choose strong passwords.

Understand common denial of service attacks and appropriate countermeasures. Hackers use standard denial of service attacks like SYN flooding, teardrop fragmentation attacks, and the ping of death to cripple targeted systems. They also harness the power of the global computing grid through the use of Smurf attacks and other distributed denial of service attacks.

Be familiar with the various types of application attacks hackers use to exploit poorly written software. Buffer overflow vulnerabilities are one of the greatest threats to modern computing. Hackers also exploit trap doors, time-of-check-to-time-of-use vulnerabilities, and rootkits to gain illegitimate access to a system.

Know the network reconnaissance techniques used by hackers preparing to attack a network. Before launching an attack, hackers use IP sweeps to search out active hosts on a network.
These hosts are then subjected to port scans and other vulnerability probes to locate weak spots that might be attacked in an attempt to compromise the network.

Understand decoy techniques used by system administrators seeking to lure hackers into a trap. System administrators use honey-pot systems that appear to be lucrative, easy-to-hit targets for hackers in attempts to draw them away from critical systems and track their activities. These systems might contain pseudo-flaws—apparent vulnerabilities that don't really exist—in an attempt to lull malicious individuals into a false sense of security.

Written Lab

Answer the following questions about malicious code and application attacks:

1. What is the major difference between a virus and a worm?
2. Explain the four propagation methods used by Robert Tappan Morris's Internet Worm.
3. Describe how the normal TCP/IP handshaking process works and how the SYN flood attack exploits this process to cause a denial of service.
4. What are the actions an antivirus software package might take when it discovers an infected file?
5. Explain how a data integrity assurance package like Tripwire provides some secondary virus detection capabilities.

Review Questions

1. What is the size of the Master Boot Record on a system installed with a typical configuration?
A. 256 bytes
B. 512 bytes
C. 1,024 bytes
D. 2,048 bytes

2. How many steps take place in the standard TCP/IP handshaking process?
A. One
B. Two
C. Three
D. Four

3. Which one of the following types of attacks relies upon the difference between the timing of two events?
A. Smurf
B. TOCTTOU
C. Land
D. Fraggle

4. What propagation technique does the Good Times virus use to spread infection?
A. File infection
B. Boot sector infection
C. Macro infection
D.
None of the above

5. What advanced virus technique modifies the malicious code of a virus on each system it infects?
A. Polymorphism
B. Stealth
C. Encryption
D. Multipartitism

6. Which one of the following files might be modified or created by a companion virus?
A. COMMAND.EXE
B. CONFIG.SYS
C. AUTOEXEC.BAT
D. WIN32.DLL

7. What is the best defensive action that system administrators can take against the threat posed by brand new malicious code objects that exploit known software vulnerabilities?
A. Update antivirus definitions monthly
B. Install anti-worm filters on the proxy server
C. Apply security patches as they are released
D. Prohibit Internet use on the corporate network

8. Which one of the following passwords is least likely to be compromised during a dictionary attack?
A. mike
B. elppa
C. dayorange
D. dlayna

9. What file is instrumental in preventing dictionary attacks against Unix systems?
A. /etc/passwd
B. /etc/shadow
C. /etc/security
D. /etc/pwlog

10. Which one of the following tools can be used to launch a distributed denial of service attack against a system or network?
A. Satan
B. Saint
C. Trinoo
D. Nmap

11. Which one of the following network attacks takes advantage of weaknesses in the fragment reassembly functionality of the TCP/IP protocol stack?
A. Teardrop
B. Smurf
C. Ping of death
D. SYN flood

12. What type of reconnaissance attack provides hackers with useful information about the services running on a system?
A. Session hijacking
B. Port scan
C. Dumpster diving
D. IP sweep

13. A hacker located at IP address 12.8.0.1 wants to launch a Smurf attack on a victim machine located at IP address 129.74.15.12 utilizing a third-party network located at 141.190.0.0/16.
\nWhat would be the source IP address on the single packet the hacker transmits?\nA. 12.8.0.1\nB. 129.74.15.12\nC. 141.190.0.0\nD. 141.190.255.255\n14. What type of virus utilizes more than one propagation technique to maximize the number of \npenetrated systems?\nA. Stealth virus\nB. Companion virus\nC. Polymorphic virus\nD. Multipartite virus\n15. What is the minimum size a packet can be to be used in a ping of death attack?\nA. 2,049 bytes\nB. 16,385 bytes\nC. 32,769 bytes\nD. 65,537 bytes\n16. Jim recently downloaded an application from a website that ran within his browser and caused \nhis system to crash by consuming all available resources. Of what type of malicious code was Jim \nmost likely the victim of?\nA. Virus\nB. Worm\nC. Trojan horse\nD. Hostile applet\n17.\nAlan is the security administrator for a public network. In an attempt to detect hacking attempts, \nhe installed a program on his production servers that imitates a well-known operating system \nvulnerability and reports exploitation attempts to the administrator. What is this type of tech-\nnique called?\nA. Honey pot\nB. Pseudo-flaw\nC. Firewall\nD. Bear trap\n" }, { "page_number": 333, "text": "288\nChapter 8\n\u0002 Malicious Code and Application Attacks\n18. What technology does the Java language use to minimize the threat posed by applets?\nA. Confidentiality\nB. Encryption\nC. Stealth\nD. Sandbox\n19. Renee is the security administrator for a research network. She’s attempting to convince her boss \nthat they should disable two unused services—chargen and echo. What attack is the network \nmore vulnerable to with these services running?\nA. Smurf\nB. Land\nC. Fraggle\nD. Ping of death\n20. Which one of the following attacks uses a TCP packet with the SYN flag set and identical source/\ndestination IP addresses and ports?\nA. Smurf\nB. Land\nC. Fraggle\nD. Ping of death\n" }, { "page_number": 334, "text": "Answers to Review Questions\n289\nAnswers to Review Questions\n1.\nB. 
The Master Boot Record is a single sector of a floppy disk or hard drive. Each sector is normally 512 bytes. The MBR contains only enough information to direct the proper loading of the operating system.

2. C. The TCP/IP handshake consists of three phases: SYN, SYN/ACK, and ACK. Attacks like the SYN flood abuse this process by taking advantage of weaknesses in the handshaking protocol to mount a denial of service attack.

3. B. The time-of-check-to-time-of-use (TOCTTOU) attack relies upon the timing of the execution of two events.

4. D. The Good Times virus is a famous hoax that does not actually exist.

5. A. In an attempt to avoid detection by signature-based antivirus software packages, polymorphic viruses modify their own code each time they infect a system.

6. A. Companion viruses are self-contained executable files with filenames similar to those of existing system/program files but with a modified extension. The virus file is executed when an unsuspecting user types the filename without the extension at the command prompt.

7. C. The vast majority of new malicious code objects exploit known vulnerabilities that were already addressed by software manufacturers. The best action administrators can take against new threats is to maintain the patch level of their systems.

8. D. All of the other choices are forms of common words that might be found during a dictionary attack. mike is a name and would be easily detected. elppa is simply apple spelled backwards, and dayorange combines two dictionary words. Crack and other utilities can easily see through these "sneaky" techniques. dlayna is simply a random string of characters that a dictionary attack would not uncover.

9. B. Shadow password files move encrypted password information from the publicly readable /etc/passwd file to the protected /etc/shadow file.

10. C.
Trinoo and the Tribal Flood Network (TFN) are the two most commonly used distributed denial of service (DDoS) attack toolkits. The other three tools mentioned are reconnaissance techniques used to map networks and scan for known vulnerabilities.

11. A. The teardrop attack uses overlapping packet fragments to confuse a target system and cause the system to reboot or crash.

12. B. Port scans reveal the ports associated with services running on a machine and available to the public.

13. B. The single packet would be sent from the hacker to the third-party network. The source address of this packet would be the IP address of the victim (129.74.15.12), and the destination address would be the broadcast address of the third-party network (141.190.255.255).

14. D. Multipartite viruses use two or more propagation techniques (e.g., file infection and boot sector infection) to maximize their reach.

15. D. The maximum allowed ping packet size is 65,536 bytes. To engage in a ping of death attack, an attacker must send a packet that exceeds this maximum. Therefore, the smallest packet that might result in a successful attack would be 65,537 bytes.

16. D. Hostile applets are a type of malicious code that users download from a remote website and run within their browsers. These applets, written using technologies like ActiveX and Java, may then perform a variety of malicious actions.

17. B. Alan has implemented pseudo-flaws in his production systems. Honey pots often use pseudo-flaws, but they are not the technology used in this case because honey pots are stand-alone systems dedicated to detecting hackers.

18. D. The Java sandbox isolates applets and allows them to run within a protected environment, limiting the effect they may have on the rest of the system.

19. C.
The Fraggle attack utilizes the uncommonly used UDP services chargen and echo to implement a denial of service attack.

20. B. The Land attack uses a TCP packet constructed with the SYN flag set and identical source and destination sockets. It causes older operating systems to behave in an unpredictable manner.

Answers to Written Lab

Following are answers to the questions in this chapter's written lab:

1. Viruses and worms both travel from system to system attempting to deliver their malicious payloads to as many machines as possible. However, viruses require some sort of human intervention, such as sharing a file, network resource, or e-mail message, to propagate. Worms, on the other hand, seek out vulnerabilities and spread from system to system under their own power, thereby greatly magnifying their reproductive capability, especially in a well-connected network.

2. The Internet Worm used four propagation techniques. First, it exploited a bug in the sendmail utility that allowed the worm to spread itself by sending a specially crafted e-mail message that contained the worm's code to the sendmail program on a remote system. Second, it used a dictionary-based password attack to attempt to gain access to remote systems by utilizing the username and password of a valid system user. Third, it exploited a buffer overflow vulnerability in the finger program to infect systems. Finally, it analyzed any existing trust relationships with other systems on the network and attempted to spread itself to those systems through the trusted path.

3. In a typical connection, the originating host sends a single packet with the SYN flag enabled, attempting to open one side of the communications channel.
The destination host receives this packet and sends a reply with the ACK flag enabled (confirming that the first side of the channel is open) and the SYN flag enabled (attempting to open the reverse channel). Finally, the originating host transmits a packet with the ACK flag enabled, confirming that the reverse channel is open and the connection is established. In a SYN flood attack, hackers use special software that sends a large number of fake packets with the SYN flag set to the targeted system. The victim then reserves space in memory for the connection and attempts to send the standard SYN/ACK reply but never hears back from the originator. This process repeats hundreds or even thousands of times, and the targeted computer eventually becomes overwhelmed and runs out of available memory for the half-open connections.

4. If possible, it may try to disinfect the file, removing the virus's malicious code. If that fails, it might either quarantine the file for manual review or automatically delete it to prevent further infection.

5. Data integrity assurance packages like Tripwire compute checksum values for each file stored on a protected system. If a file infector virus strikes the system, this would result in a change in the affected file's checksum value and would, therefore, trigger a file integrity alert.

Chapter 9: Cryptography and Private Key Algorithms

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
- Use of Cryptography to Achieve Confidentiality, Integrity, Authentication, and Nonrepudiation
- Cryptographic Concepts, Methodologies, and Practices
- Private Key Algorithms

Cryptography provides added levels of security to data during processing, storage, and communications.
Over the years, mathematicians and computer scientists have developed a series of increasingly complex algorithms designed to ensure confidentiality, integrity, authentication, and nonrepudiation. During that same period, hackers and governments alike have devoted significant resources to undermining those cryptographic algorithms. This led to an "arms race" in cryptography and resulted in the development of the extremely sophisticated algorithms in use today. This chapter takes a look at the history of cryptography, the basics of cryptographic communications, and the fundamental principles of private key cryptosystems. The next chapter continues the discussion of cryptography by examining public key cryptosystems and the various techniques attackers use to defeat cryptography.

History

Since the beginning of mankind, human beings have devised various systems of written communication, ranging from ancient hieroglyphics written on cave walls to CD-ROMs stuffed with encyclopedias full of information in modern English. As long as mankind has been communicating, it has also used secretive means to hide the true meaning of those communications from the uninitiated. Ancient societies used a complex system of secret symbols to represent safe places to stay during times of war. Modern civilizations use a variety of codes and ciphers to facilitate private communication between individuals and groups. In the following sections, we'll take a brief look at the evolution of modern cryptography and several famous attempts to covertly intercept and decipher encrypted communications.

Caesar Cipher

One of the earliest known cipher systems was used by Julius Caesar to communicate with Cicero in Rome while he was conquering Europe. Caesar knew that there were several risks when sending messages: the messengers themselves might be enemy spies, or they might be ambushed while en route to the deployed forces.
For that reason, he developed a cryptographic system now known as the Caesar cipher. The system itself is extremely simple. To encrypt a message, you simply shift each letter of the alphabet three places to the right. For example, A would become D and B would become E. If you reach the end of the alphabet during this process, you simply wrap around to the beginning so that X becomes A, Y becomes B, and Z becomes C. For this reason, the Caesar cipher also became known as the ROT3 (or Rotate 3) cipher. The Caesar cipher is a monoalphabetic substitution cipher; it's also known as a C3 cipher.

Here's an example of the Caesar cipher in action. The first line contains the original sentence, and the second line shows what the sentence looks like when it is encrypted using the Caesar cipher:

THE DIE HAS BEEN CAST
WKH GLH KDV EHHQ FDVW

To decrypt the message, you simply shift each letter three places to the left.

Although the Caesar cipher is relatively easy to use, it's also relatively easy to crack. It's vulnerable to a type of attack known as frequency analysis. As you may know, the most common letters in the English language are E, T, A, O, N, R, I, S, and H. An attacker seeking to break a Caesar-style cipher merely needs to find the most common letters in the encrypted text and experiment with substitutions of the letters above to help determine the pattern.

American Civil War

Between the time of Caesar and the early years of the United States, scientists and mathematicians made significant advances beyond the early ciphers used by ancient civilizations. During the American Civil War, Union and Confederate troops both used relatively advanced cryptographic systems to secretly communicate along the front lines because both sides were tapping into the telegraph lines to spy on the other side.
These systems used complex combinations of word substitutions and transposition (see the section on ciphers for more details) to attempt to defeat enemy decryption efforts. Another system used widely during the Civil War was a series of flag signals developed by army doctor Albert Myer.

Photos of many of the items discussed in this chapter are available online at www.nsa.gov/museum/tour.html.

Ultra vs. Enigma

Americans weren't the only ones who expended significant resources in the pursuit of superior code-making machines. Prior to World War II, the German military-industrial complex adapted a commercial code machine nicknamed Enigma for government use. This machine used a series of three to six rotors to implement an extremely complicated substitution cipher. The only possible way to decrypt the message with contemporary technology was to use a similar machine with the same rotor settings used by the transmitting device. The Germans recognized the importance of safeguarding these devices and made it extremely difficult for the Allies to acquire one.

The Allied forces began a top-secret effort known by the code name Ultra to attack the Enigma codes. Eventually, their efforts paid off when the Polish military successfully reconstructed an Enigma prototype and shared their findings with British and American cryptology
The Americans were aided by the fact that Japanese com-\nmunicators used very formal message formats that resulted in a large amount of similar text in \nmultiple messages, easing the cryptanalytic effort.\nCryptographic Basics\nThe study of any science must begin with a discussion of some of the fundamental principles it \nis built upon. The following sections lay this foundation with a review of the goals of cryptog-\nraphy, an overview of the basic concepts of cryptographic technology, and a look at the major \nmathematical principles utilized by cryptographic systems.\nGoals of Cryptography\nSecurity practitioners utilize cryptographic systems to meet four fundamental goals: confiden-\ntiality, integrity, authentication, and nonrepudiation. Achieving each of these goals requires the \nsatisfaction of a number of design requirements, and not all cryptosystems are intended to \nachieve all four goals. In the following sections, we’ll examine each goal in detail and give a brief \ndescription of the technical requirements necessary to achieve it.\nConfidentiality\nConfidentiality ensures that a message remains private during transmission between two or \nmore parties. This is perhaps the most widely cited goal of cryptosystems—the facilitation of \nsecret communications between individuals and groups. There are two main types of crypto-\nsystems that enforce confidentiality. Symmetric key cryptosystems make use of a shared secret \nkey available to all users of the cryptosystem. Public key cryptosystems utilize individual com-\nbinations of public and private keys for each user of the system. Both of these concepts are \nexplored in the section “Modern Cryptography” later in this chapter.\nIntegrity\nIntegrity ensures that a message is not altered while in transit. If integrity mechanisms are in place, \nthe recipient of a message can be certain that the message received is identical to the message that \nwas sent. 
This protects against all forms of alteration: intentional alteration by a third party attempting to insert false information and unintentional alteration by faults in the transmission process. Message integrity is enforced through the use of digitally signed message digests created upon transmission of a message. The recipient of the message simply verifies that the message's digest and signature are valid, ensuring that the message was not altered in transit. Integrity can be enforced by both public and secret key cryptosystems. This concept is discussed in detail in the section "Digital Signatures" in Chapter 10, "PKI and Cryptographic Applications."

Authentication

Authentication verifies the claimed identity of system users and is a major function of cryptosystems. For example, suppose that Jim wants to establish a communications session with Bob and they are both participants in a shared secret communications system. Jim might use a challenge-response authentication technique to ensure that Bob is who he claims to be.

Figure 9.1 shows how this challenge-response protocol might work in action. In this example, the shared-secret code used by Jim and Bob is quite simple: the letters of each word are simply reversed. Bob first contacts Jim and identifies himself. Jim then sends a challenge message to Bob, asking him to encrypt a short message using the secret code known only to Jim and Bob. Bob replies with the encrypted message. After Jim verifies that the encrypted message is correct, he trusts that Bob himself is truly on the other end of the connection.

FIGURE 9.1 Challenge-response authentication protocol

Nonrepudiation

Nonrepudiation provides assurance to the recipient that the message was actually originated by the sender and not someone masquerading as the sender.
It prevents the sender from claiming that they never sent the message in the first place (also known as repudiating the message). Secret key, or symmetric key, cryptosystems (such as the ROT3 cipher) do not provide this guarantee of nonrepudiation. If Jim and Bob participate in a secret key communication system, they can both produce the same encrypted message using their shared secret key. Nonrepudiation is offered only by public key, or asymmetric, cryptosystems, a topic discussed in greater detail in Chapter 10.

Cryptography Concepts

As with any science, you must be familiar with certain terminology before studying cryptography. Let's take a look at a few of the key terms used to describe codes and ciphers. Before a message is put into a coded form, it is known as a plaintext message and is represented by the letter P when encryption functions are described. The sender of a message uses a cryptographic algorithm to encrypt the plaintext message and produce a ciphertext message, represented by the letter C. This message is transmitted by some physical or electronic means to the recipient. The recipient then uses a predetermined algorithm to decrypt the ciphertext message and retrieve the plaintext version.

[Figure 9.1 exchange: "Hi, I'm Bob!" / "Prove it. Encrypt 'apple.'" / "elppa" / "Hi Bob, good to talk to you again."]

All cryptographic algorithms rely upon keys to maintain their security. For the most part, a key is nothing more than a number. It's usually a very large binary number, but a number nonetheless. Every algorithm has a specific key space. The key space is the range of values that are valid for use as a key for a specific algorithm. A key space is defined by its bit size. Bit size is nothing more than the number of binary bits or digits in the key.
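Because each additional bit doubles the number of possible keys, key spaces grow exponentially with bit size. A quick illustrative sketch in Python (the helper name key_space_size is ours, not standard notation):

```python
# Sketch: an n-bit key can take any value from 0 to 2**n - 1,
# so the key space contains 2**n distinct keys.
def key_space_size(bits: int) -> int:
    """Number of distinct keys available with a key of the given bit size."""
    return 2 ** bits

for bits in (40, 56, 128, 256):
    # Scientific notation keeps the larger values readable.
    print(f"{bits}-bit key: about {key_space_size(bits):.3e} possible keys")
```

Note how quickly the numbers become astronomical: going from a 40-bit to a 128-bit key multiplies the attacker's brute-force workload by 2^88.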
The key space is the range between the key that has all 0s and the key that has all 1s. Or, to state it another way, the key space is the range of numbers from 0 to 2^n - 1, where n is the bit size of the key. So a 128-bit key can have a value from 0 to 2^128 - 1 (2^128 is roughly 3.4 x 10^38; that is, a very big number!). Even though a key is just a number, it is a very important number. In fact, if the algorithm is known, then all the security you gain from cryptography rests on your ability to keep the keys used private.

As you'll learn in this chapter and the next, different types of algorithms require different types of keys. In private key (or secret key) cryptosystems, all participants use a single shared key. In public key cryptosystems, each participant has their own pair of keys. Cryptographic keys are sometimes referred to as cryptovariables.

The art of creating and implementing secret codes and ciphers is known as cryptography. This practice is paralleled by the art of cryptanalysis: the study of methods to defeat codes and ciphers. Together, cryptography and cryptanalysis are commonly referred to as cryptology. Specific implementations of a code or cipher in hardware and software are known as cryptosystems. Federal Information Processing Standard 140 (FIPS-140), "Security Requirements for Cryptographic Modules," defines the hardware and software requirements for cryptographic modules that the federal government uses.

Kerckhoffs's Principle

All cryptography is based upon the idea of an algorithm. An algorithm is a set of rules, usually mathematical, that dictates how enciphering and deciphering processes are to take place. Most algorithms are dictated by Kerckhoffs's principle, a concept that makes algorithms known and public, allowing anyone to examine and test them. Specifically, Kerckhoffs's principle (also known as Kerckhoffs's assumption) is that all algorithms should be public but all keys should remain private.
A large number of cryptologists adhere to this principle, but not all of them do. In fact, a significant group adheres to the opposite view and believes better overall security can be maintained by keeping both the algorithm and the key private. Kerckhoffs's adherents retort that the opposite approach amounts to "security through obscurity" and believe that public exposure produces more scrutiny, exposes weaknesses more readily, and leads to the abandonment of insufficiently strong algorithms and the quicker adoption of suitable ones.

Be sure to understand the meanings of these terms before continuing your study of this chapter and the following chapter. They are essential to understanding the technical details of the cryptographic algorithms presented in the following sections.

Cryptographic Mathematics

Cryptography is no different than most computer science disciplines in that it finds its foundations in the science of mathematics. To fully understand cryptography, you must first understand the basics of binary mathematics and the logical operations used to manipulate binary values. The following sections present a brief look at some of the most fundamental concepts with which you should be familiar.

Binary Mathematics

Binary mathematics defines the rules used for the bits and bytes that form the nervous system of any computer. You're most likely familiar with the decimal system. It is a base 10 system in which an integer from 0 to 9 is used in each place and each place value is a multiple of 10. It's likely that our reliance upon the decimal system has biological origins: human beings have 10 fingers that can be used to count.

Binary math can be very confusing at first, but it's well worth the investment of time to learn how the various logical operations work, specifically logical functions.
More important, you need to understand these concepts to truly understand the inner workings of cryptographic algorithms.

Similarly, the computer's reliance upon the binary system has electrical origins. In an electrical circuit, there are only two possible states: on (representing the presence of electrical current) and off (representing the absence of electrical current). All computation performed by an electrical device must be expressed in these terms, giving rise to the use of binary computation in modern electronics. In general, computer scientists refer to the on condition as a true value and the off condition as a false value.

Logical Operations

The binary mathematics of cryptography utilizes a variety of logical functions to manipulate data. We'll take a brief look at several of these operations.

AND

The AND operation (represented by the ∧ symbol) checks to see whether two values are both true. The truth table that follows illustrates all four possible outputs for the AND function. Remember, the AND function takes only two variables as input. In binary math, there are only two possible values for each of these variables, leading to four possible inputs to the AND function. It's this finite number of possibilities that makes it extremely easy for computers to implement logical functions in hardware. Notice in the following truth table that only one combination of inputs (where both inputs are true) produces an output value of true.

Logical operations are often performed on entire binary words rather than single values. Take a look at the following example:

X:     0 1 1 0 1 1 0 0
Y:     1 0 1 0 0 1 1 1
___________________________
X ∧ Y: 0 0 1 0 0 1 0 0

Notice that the AND function is computed by comparing the values of X and Y in each column.
\nThe output value is true only in columns where both X and Y are true.\nOR\nThe OR operation (represented by the ∨ symbol) checks to see whether at least one of the input \nvalues is true. Refer to the following truth table for all possible values of the OR function. Notice \nthat the only time the OR function returns a false value is when both of the input values are false:\nX\nY\nX ∧ Y\n0\n0\n0\n0\n1\n0\n1\n0\n0\n1\n1\n1\nX\nY\nX ∨ Y\n0\n0\n0\n0\n1\n1\n1\n0\n1\n1\n1\n1\n" }, { "page_number": 346, "text": "Cryptographic Basics\n301\nWe’ll use the same example we used in the previous section to show you what the output \nwould be if X and Y were fed into the OR function rather than the AND function:\nX: 0 1 1 0 1 1 0 0\nY: 1 0 1 0 0 1 1 1\n___________________________\nX ∨ Y: 1 1 1 0 1 1 1 1\nNOT\nThe NOT operation (represented by the ~ or ! symbol) simply reverses the value of an input \nvariable. This function operates on only one variable at a time. Here’s the truth table for the \nNOT function:\nIn this example, we take the value of X from the previous examples and run the NOT func-\ntion against it:\nX: 0 1 1 0 1 1 0 0\n___________________________\n~X: 1 0 0 1 0 0 1 1\nExclusive OR\nThe final logical function we’ll examine in this chapter is perhaps the most important and most \ncommonly used in cryptographic applications—the exclusive OR (XOR) function. It’s referred \nto in mathematical literature as the XOR function and is commonly represented by the ⊕ sym-\nbol. The XOR function returns a true value when only one of the input values is true. If both \nvalues are false or both values are true, the output of the XOR function is false. 
Here is the truth table for the NOT function, followed by the truth table for the XOR operation:

X  ~X
0   1
1   0

X  Y  X ⊕ Y
0  0    0
0  1    1
1  0    1
1  1    0

The following operation shows the X and Y values when they are used as input to the XOR function:

X:     0 1 1 0 1 1 0 0
Y:     1 0 1 0 0 1 1 1
___________________________
X ⊕ Y: 1 1 0 0 1 0 1 1

Modulo Function

The modulo function is extremely important in the field of cryptography. Think back to the early days when you first learned division. At that time, you weren't familiar with decimal numbers and compensated by showing a remainder value each time you performed a division operation. Computers don't naturally understand the decimal system either, and these remainder values play a critical role when computers perform many mathematical functions. The modulo function is, quite simply, the remainder value left over after a division operation is performed.

The modulo function is just as important to cryptography as the logical operations are. Be sure you're familiar with its functionality and can perform simple modular math.

The modulo function is usually represented in equations by the abbreviation mod, although it's also sometimes represented by the % operator. Here are several inputs and outputs for the modulo function:

8 mod 6 = 2
6 mod 8 = 6
10 mod 3 = 1
10 mod 2 = 0
32 mod 8 = 0

Hopefully, this introduction gives you a good understanding of how the modulo function works. We'll revisit this function in Chapter 10 when we explore the RSA public key encryption algorithm (named after Rivest, Shamir, and Adleman, its inventors).

One-Way Functions

In theory, a one-way function is a mathematical operation that easily produces output values for each possible combination of inputs but makes it impossible to retrieve the input values. Public key cryptosystems are all based upon some sort of one-way function.
In practice, however, it's never been proven that any specific known function is truly one-way. Cryptographers rely upon functions that they suspect may be one-way, but it's theoretically possible that they might be broken by future cryptanalysts.

Here's an example. Imagine you have a function that multiplies three numbers together. If you restrict the input values to single-digit numbers, it's a relatively straightforward matter to reverse-engineer this function and determine the possible input values by looking at the numerical output. For example, the output value 15 was created by using the input values 1, 3, and 5. However, suppose you restrict the input values to five-digit prime numbers. It's still quite simple to obtain an output value by using a computer or a good calculator, but reverse-engineering is not quite so simple. Can you figure out what three prime numbers were used to obtain the output value 10,718,488,075,259? Not so simple, eh? (That number is the product of the prime numbers 17,093, 22,441, and 27,943.) There are actually 8,363 five-digit prime numbers, so this problem might be attacked using a computer and a brute-force algorithm, but there's no easy way to figure it out in your head, that's for sure!

Confusion and Diffusion

Cryptographic algorithms rely upon two basic operations to obscure plaintext messages: confusion and diffusion. Confusion occurs when the relationship between the plaintext and the key is so complicated that an attacker can't merely continue altering the plaintext and analyzing the resulting ciphertext to determine the key. Diffusion occurs when a change in the plaintext results in multiple changes spread throughout the ciphertext.

Nonce

Cryptography often gains strength by adding randomness to the encryption process. One method by which this is accomplished is through the use of a nonce.
A nonce is a "number used once": a random value employed a single time as part of a cryptographic operation. It acts as a placeholder variable in cryptographic functions; when the function is executed, the nonce is replaced with a random number generated at the moment of processing, producing a unique value each time it is used. One of the more recognizable examples of a nonce is an initialization vector (IV), a random bit string that is the same length as the block size and is XORed with the message. IVs are used to create unique ciphertext every time the same message is encrypted using the same key.

Least and Most Significant String Bit

When striving to provide protection via cryptography, it is often important to know which portion of a message is the most vulnerable or, if compromised, provides the attacker with the greatest advantage. If a cryptographic attack can successfully extract the original data from the most significant part of an encrypted message, the rest of the message is often easily obtained. However, if all the attacker can break is the least significant portion, they don't gain any leverage against the remainder of the encrypted communication. The least significant bit in a string is the rightmost bit. The most significant bit in a string is the leftmost bit. This means that there is more information present in the leftmost bit of a string, especially in encrypted material, than in the rightmost bit. There is an easy way to remember this concept: just think about how you would like to see the five digits 0, 0, 0, 0, and 1 arranged on a check made out to you. Obviously, placing the 1 in the leftmost position is most significant (and valuable) because that would make the check worth $10,000!
Any other arrangement, in fact, puts less money into your account.

Zero Knowledge Proof

One of the benefits of cryptography is found in the mechanism to prove an individual's or organization's identity digitally. This is often accomplished using zero knowledge proof. Zero knowledge proof is a concept of communication whereby a specific type of information is exchanged but no real data is transferred. Great examples of this idea include digital signatures and digital certificates. With either system, the recipient is able to prove the sender's identity. However, neither digital signatures nor digital certificates provide the recipient with any actual data. There is nothing for the recipient to save to a hard drive or transmit to someone else. Thus, they get proof of identity but zero knowledge about anything else.

Split Knowledge

When the information or privilege required to perform an operation is divided among multiple users, no single person has sufficient privileges to compromise the security of an environment. This combination of separation of duties and two-man control in a single solution is called split knowledge. Split knowledge is mentioned in Chapter 13, "Administrative Management," but it makes the most sense as it relates to cryptography.

The best example of split knowledge is seen in the concept of key escrow when the security practice of M of N Control is enforced (we'll explain M of N Control in a second). Using key escrow, cryptographic keys, digital signatures, and even digital certificates can be stored or backed up in a special database called the key escrow database. In the event a user loses or damages their key, that key can be extracted from the backup. However, if only a single key escrow recovery agent exists, there is opportunity for fraud and abuse of this privilege.
So, M of N Control requires that a minimum number of agents (M) out of the total number of agents (N) work together to perform high-security tasks. Implementing 3 of 8 control, for example, would require 3 of the 8 people assigned the work task of Key Escrow Recovery Agent to work together to pull a single key out of the key escrow database (thereby also illustrating that M is always less than or equal to N).

Work Function

You can measure the strength of a cryptography system by measuring the effort, in terms of cost and/or time, using a work function or work factor. Usually, the work function represents the time and effort required to perform a complete brute-force attack against an encryption system. The security and protection offered by a cryptosystem is directly proportional to the value of its work function/factor. The size of the work function should be matched against the relative value of the protected asset. The work function need be only slightly greater than the time value of that asset. In other words, all security, including cryptography, should be cost effective and cost efficient. Spend no more effort to protect an asset than it warrants, but be sure to provide sufficient protection. Thus, if information loses its value over time, the work function needs to be only large enough to ensure protection until the value of the data is gone.

Clustering

Cryptography is not without its drawbacks. Clustering (a.k.a. key clustering) is a weakness in cryptography where a plaintext message generates identical ciphertext messages using the same algorithm but different keys. One of the often underemphasized truisms of cryptography is that repetition is bad. Whenever two duplicate cryptography elements exist, you halve the difficulty of breaking the protection.
This is the inverse of the binary law of keys, which states that for every additional binary bit added to a key, you double its work factor/function. Thus, never encrypt the exact same message twice. Never use the same key twice (for encryption purposes, not for authentication and nonrepudiation purposes). Don't use a cryptography system that produces duplicate ciphertext outputs (i.e., one in which different messages encrypted with different keys may still produce the same ciphertext); that admonition applies to symmetric and asymmetric keys as well as hashing techniques.

Ciphers

Cipher systems have long been used by individuals and governments interested in preserving the confidentiality of their communications. In the following sections, we'll take a brief look at the definition of a cipher and several common cipher types that form the basis of modern ciphers. It's important to remember that although these concepts seem somewhat basic, when used in combination they can be formidable opponents and cause cryptanalysts many hours of frustration.

Codes vs. Ciphers

People often use the words code and cipher interchangeably, but technically, they aren't interchangeable. There are important distinctions between the two concepts. Codes, which are cryptographic systems of symbols that represent words or phrases, are sometimes secret, but they are not necessarily meant to provide confidentiality. A common example of a code is the "10 system" of communications used by law enforcement agencies. Under this system, the sentence "I received your communication and understand the contents" is represented by the code phrase "10-4." This code is commonly known by the public, but it does provide for ease of communication. Some codes are secret. They may use mathematical functions or a secret dictionary to convey confidential messages by representing words, phrases, or sentences.
For example, a spy might transmit the sentence "the eagle has landed" to report the arrival of an enemy aircraft.

Ciphers, on the other hand, are always meant to hide the true meaning of a message. They use a variety of techniques to alter and/or rearrange the characters or bits of a message to achieve confidentiality. Ciphers convert messages from plaintext to ciphertext on a bit basis (i.e., a single digit of a binary code), character basis (i.e., a single character of an ASCII message), or block basis (i.e., a fixed-length segment of a message, usually expressed in number of bits). The following sections look at several common ciphers in use today.

An easy way to keep the difference between codes and ciphers straight is to remember that codes work on words and phrases whereas ciphers work on individual characters and bits.

Transposition Ciphers
Transposition ciphers use an encryption algorithm to rearrange the letters of a plaintext message, forming the ciphertext message. The decryption algorithm simply reverses the encryption transformation to retrieve the original message.

In the challenge-response protocol example in the section "Authentication" earlier in this chapter, a simple transposition cipher was used to reverse the letters of the message so that apple became elppa. Transposition ciphers can be much more complicated than this. For example, you can use a keyword to perform a columnar transposition. In this example, we're attempting to encrypt the message "The fighters will strike the enemy bases at noon" using the secret key attacker. Our first step is to take the letters of the keyword and number them in alphabetical order. The first appearance of the letter A receives the value 1; the second appearance is numbered 2. The next letter in sequence, C, is numbered 3, and so on.
This results in the following sequence:

A T T A C K E R
1 7 8 2 3 5 4 6

Next, the letters of the message are written in order underneath the letters of the keyword:

A T T A C K E R
1 7 8 2 3 5 4 6
T H E F I G H T
E R S W I L L S
T R I K E T H E
E N E M Y B A S
E S A T N O O N

Finally, the sender enciphers the message by reading down each column; the order in which the columns are read corresponds to the numbers assigned in the first step. This produces the following ciphertext:

T E T E E F W K M T I I E Y N H L H A O G L T B O T S E S N H R R N S E S I E A

On the other end, the recipient reconstructs the eight-column matrix using the ciphertext and the same keyword and then simply reads the plaintext message across the rows.

Substitution Ciphers
Substitution ciphers use the encryption algorithm to replace each character or bit of the plaintext message with a different character. The Caesar cipher discussed in the beginning of this chapter is a good example of a substitution cipher. Now that you've learned a little bit about cryptographic math, we'll take another look at the Caesar cipher. Recall that we simply shifted each letter three places to the right in the message to generate the ciphertext. However, we ran into a problem when we got to the end of the alphabet and ran out of letters. We solved this by wrapping around to the beginning of the alphabet so that the plaintext character Z became the ciphertext character C.

You can express the ROT3 cipher in mathematical terms by converting each letter to its decimal equivalent (where A is 0 and Z is 25). You can then add three to each plaintext letter to determine the ciphertext.
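The columnar transposition walk-through from the previous section can be sketched in a few lines of Python. This is a minimal illustration, not an exam requirement; the function name is ours, and short final rows (which don't occur in this example) are simply skipped when reading columns:

```python
def columnar_encrypt(message: str, keyword: str) -> str:
    text = [c for c in message.upper() if c.isalpha()]
    # Number the keyword letters in alphabetical order, ties left to right
    # (a=1, a=2, c=3, ... for the keyword "attacker").
    order = sorted(range(len(keyword)), key=lambda i: (keyword[i], i))
    # Write the message in rows underneath the keyword...
    cols = len(keyword)
    rows = [text[i:i + cols] for i in range(0, len(text), cols)]
    # ...then read down each column, taking the columns in numbered order.
    out = []
    for col in order:
        for row in rows:
            if col < len(row):
                out.append(row[col])
    return "".join(out)

ct = columnar_encrypt("The fighters will strike the enemy bases at noon", "attacker")
print(ct)  # TETEEFWKMTIIEYNHLHAOGLTBOTSESNHRRNSESIEA
```

Decryption simply rebuilds the same grid column by column and reads across the rows, as the text describes.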
You account for the wrap-around by using the modulo function discussed in the sec-\ntion “Cryptographic Mathematics.” The final encryption function for the Caesar cipher is then this:\nC = (P + 3) mod 26\nThe corresponding decryption function is as follows:\nP = (C - 3) mod 26\nAs with transposition ciphers, there are many substitution ciphers that are more sophisti-\ncated than the examples provided in this chapter. Polyalphabetic substitution ciphers make use \nof multiple alphabets in the same message to hinder decryption efforts. One of the most notable \nexamples of a polyalphabetic substitution cipher system is the Vigenere cipher. The Vigenere \ncipher uses a single encryption/decryption chart shown here:\nA B C D E F G H I J K L M N O P Q R S T U V W X Y Z\nA B C D E F G H I J K L M N O P Q R S T U V W X Y Z\nB C D E F G H I J K L M N O P Q R S T U V W X Y Z A\nC D E F G H I J K L M N O P Q R S T U V W X Y Z A B\nD E F G H I J K L M N O P Q R S T U V W X Y Z A B C\nE F G H I J K L M N O P Q R S T U V W X Y Z A B C D\nF G H I J K L M N O P Q R S T U V W X Y Z A B C D E\nG H I J K L M N O P Q R S T U V W X Y Z A B C D E F\nH I J K L M N O P Q R S T U V W X Y Z A B C D E F G\nI J K L M N O P Q R S T U V W X Y Z A B C D E F G H\nJ K L M N O P Q R S T U V W X Y Z A B C D E F G H I\nK L M N O P Q R S T U V W X Y Z A B C D E F G H I J\nL M N O P Q R S T U V W X Y Z A B C D E F G H I J K\nM N O P Q R S T U V W X Y Z A B C D E F G H I J K L\nN O P Q R S T U V W X Y Z A B C D E F G H I J K L M\nO P Q R S T U V W X Y Z A B C D E F G H I J K L M N\nP Q R S T U V W X Y Z A B C D E F G H I J K L M N O\nQ R S T U V W X Y Z A B C D E F G H I J K L M N O P\nR S T U V W X Y Z A B C D E F G H I J K L M N O P Q\nS T U V W X Y Z A B C D E F G H I J K L M N O P Q R\nT U V W X Y Z A B C D E F G H I J K L M N O P Q R S\nU V W X Y Z A B C D E F G H I J K L M N O P Q R S T\nV W X Y Z A B C D E F G H I J K L M N O P Q R S T U\nW X Y Z A B C D E F G H I J K L M N O P Q R S T U V\nX Y 
Z A B C D E F G H I J K L M N O P Q R S T U V W
Y Z A B C D E F G H I J K L M N O P Q R S T U V W X
Z A B C D E F G H I J K L M N O P Q R S T U V W X Y

Notice that the chart is simply the alphabet written repeatedly (26 times) under the master heading, shifted by one letter each time. You need a key to use the Vigenere system. For example, the key could be secret. Then, you would perform the following encryption process:
1. Write out the plaintext.
2. Underneath, write out the key, repeating it as many times as needed to establish a line of text that is the same length as the plaintext.
3. Convert each letter position from plaintext to ciphertext.
   a. Locate the column headed by the first plaintext character (a).
   b. Next, locate the row headed by the first key character (s).
   c. Finally, locate where these two items intersect and write down the letter that appears there (s). This is the ciphertext for that letter position.
4. Repeat step 3 for each remaining letter position in the message.

While polyalphabetic substitution protects against direct frequency analysis, it is vulnerable to a second-order form of frequency analysis called period analysis, which is an examination of frequency based upon the repeated use of the key.

One-Time Pads
A one-time pad is an extremely powerful type of substitution cipher. One-time pads use a different alphabet for each letter of the plaintext message.
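The Vigenere procedure reduces to modular addition of letter values, with the key repeated to the length of the message. Here is a minimal illustrative sketch (the function name is ours); the same arithmetic, driven by a truly random key that never repeats, is exactly the one-time pad computation:

```python
def vigenere_encrypt(plaintext: str, key: str) -> str:
    # Each output letter is (P + K) mod 26, with a = 0 ... z = 25;
    # the key repeats as often as needed to cover the message.
    a = ord("a")
    return "".join(
        chr((ord(p) - a + ord(key[i % len(key)]) - a) % 26 + a)
        for i, p in enumerate(plaintext)
    )

print(vigenere_encrypt("attackatdawn", "secret"))  # sxvrgdsxfrag
```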
They can be represented by the following encryption function, where P is a plaintext letter, K is the key value for that letter, and C is the resulting ciphertext letter:

C = (P + K) mod 26

Normally, one-time pads are written as a very long series of numbers to be plugged into the function. (Used with an alphabetic key, the same function reproduces the Vigenere example from the previous section:

Plaintext:  a t t a c k a t d a w n
Key Word:   s e c r e t s e c r e t
Ciphertext: s x v r g d s x f r a g

The crucial difference is that a one-time pad key is random and never repeats.)

One-time pads are also known as Vernam ciphers, after the name of their inventor, Gilbert Sandford Vernam of AT&T.

The great advantage of one-time pads is that, when used properly, they are an unbreakable encryption scheme. There is no repeating pattern of alphabetic substitution, rendering cryptanalytic efforts useless. However, several requirements must be met to ensure the integrity of the algorithm:
• The encryption key must be randomly generated. Using a phrase or a passage from a book would introduce the possibility of cryptanalysts breaking the code.
• The one-time pad must be physically protected against disclosure. If the enemy has a copy of the pad, they can easily decrypt the enciphered messages.
• Each one-time pad must be used only once. If pads are reused, cryptanalysts can compare similarities in multiple messages encrypted with the same pad and possibly determine the key values used.
• The key must be at least as long as the message to be encrypted. This is because each key element is used to encode only one character of the message.

These one-time pad security requirements are essential knowledge for any network security professional. All too often, people attempt to implement a one-time pad cryptosystem but fail to meet one or more of these fundamental requirements.
Read on for an example of how an entire Soviet code system \nwas broken due to carelessness in this area.\nIf any one of these requirements is not met, the impenetrable nature of the one-time pad \ninstantly breaks down. In fact, one of the major intelligence successes of the United States resulted \nwhen cryptanalysts broke a top-secret Soviet cryptosystem that relied upon the use of one-time \npads. In this project, code-named VENONA, a pattern in the way the Soviets generated the key \nvalues used in their pads was discovered. The existence of this pattern violated the first require-\nment of a one-time pad cryptosystem: the keys must be randomly generated without the use of any \nrecurring pattern. The entire VENONA project was recently declassified and is publicly available \non the National Security Agency website at www.nsa.gov/docs/venona/index.html.\nOne-time pads have been used throughout history to protect extremely sensitive communi-\ncations. The major obstacle to their widespread use is the difficulty of generating, distributing, \nand safeguarding the lengthy keys required. One-time pads can realistically be used only for \nshort messages, due to key lengths.\nRunning Key Ciphers\nMany cryptographic vulnerabilities surround the limited length of the cryptographic key. As \nyou learned in the previous section, the one-time pad avoids these vulnerabilities by using sep-\narate alphabets for each cryptographic transformation during encryption and decryption. How-\never, one-time pads are awkward to implement because they require physical exchange of pads.\nOne common solution to this dilemma is the use of a running key cipher (also known as a \nbook cipher). In this cipher, the encryption key is as long as the message itself and is often cho-\nsen from a common book. For example, the sender and recipient might agree in advance to use \nthe text of a chapter from Moby Dick, beginning with the third paragraph, as the key. 
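Such a book-derived key can be applied with simple modular arithmetic; a minimal sketch follows (the function name is ours, nonletter characters are dropped, and the key text is assumed to be at least as long as the message). The worked example below uses the same message and key:

```python
def running_key_encrypt(message: str, key_text: str) -> str:
    a = ord("A")
    # Keep only the letters of the message and of the running key text.
    msg = [c for c in message.upper() if c.isalpha()]
    key = [c for c in key_text.upper() if c.isalpha()]
    # Add each plaintext letter (A = 0 ... Z = 25) to the matching key
    # letter, modulo 26.
    return "".join(
        chr((ord(p) - a + ord(k) - a) % 26 + a) for p, k in zip(msg, key)
    )

print(running_key_encrypt("Richard will", "With much interest I sat"))  # NQVOMLFDQYE
```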
They would both simply use as many consecutive characters as necessary to perform the encryption and decryption operations.

Let's look at an example. Suppose you wanted to encrypt the message "Richard will deliver the secret package to Matthew at the bus station tomorrow" using the key just described. This message is 66 characters in length, so you'd use the first 66 characters of the running key: "With much interest I sat watching him. Savage though he was, and hideously marred." Any algorithm could then be used to encrypt the plaintext message using this key. Let's look at the example of modulo 26 addition, which converts each letter to a decimal equivalent, adds the plaintext to the key, and then performs a modulo 26 operation to yield the ciphertext. If you assign the letter A the value 0 and the letter Z the value 25, you have the following encryption operation for the first two words of the message:

Plaintext           R   I   C   H   A   R   D   W   I   L   L
Key                 W   I   T   H   M   U   C   H   I   N   T
Decimal Plaintext   17  8   2   7   0   17  3   22  8   11  11
Decimal Key         22  8   19  7   12  20  2   7   8   13  19
Decimal Ciphertext  13  16  21  14  12  11  5   3   16  24  4
Ciphertext          N   Q   V   O   M   L   F   D   Q   Y   E

When the recipient receives the ciphertext, they use the same key, subtract the key from the ciphertext, perform a modulo 26 operation, and then convert the resulting plaintext back to alphabetic characters.

Block Ciphers
Block ciphers operate on "chunks," or blocks, of a message and apply the encryption algorithm to an entire message block at the same time. The transposition ciphers are examples of block ciphers. The simple algorithm used in the challenge-response algorithm takes an entire word and reverses its letters. The more complicated columnar transposition cipher works on an entire message (or a piece of a message) and encrypts it using the transposition algorithm and a secret keyword. Most modern encryption algorithms implement some type of block cipher.

Stream Ciphers
Stream ciphers are ciphers that operate on each character or bit of a message (or data stream) one character/bit at a time. The Caesar cipher is an example of a stream cipher. The one-time pad is also a stream cipher because the algorithm operates on each letter of the plaintext message independently. Stream ciphers can also function as a type of block cipher. In such operations, a buffer fills up with real-time data, which is then encrypted as a block and transmitted to the recipient. The system then waits for the next buffer to fill as new data is generated, and that buffer is in turn encrypted and transmitted.

Modern Cryptography
Modern cryptosystems utilize computationally complex algorithms and long cryptographic keys to meet the cryptographic goals of confidentiality, integrity, authentication, and nonrepudiation. The following sections take a look at the roles cryptographic keys play in the world of data security and examine three types of algorithms commonly used today: symmetric encryption algorithms, asymmetric encryption algorithms, and hashing algorithms.

Cryptographic Keys
In the early days of security, one of the predominant principles was "security through obscurity." Security professionals felt that the best way to keep an encryption algorithm secure was to hide the details of the algorithm from outsiders. Old cryptosystems required communicating parties to keep the algorithm used to encrypt and decrypt messages secret from third parties. Any disclosure of the algorithm could lead to compromise of the entire system by an adversary.

Modern cryptosystems do not rely upon the secrecy of their algorithms. In fact, the algorithms for most cryptographic systems are widely available for public review in the accompanying literature and on the Internet.
This actually improves the security of algorithms by \nopening them to public scrutiny. Widespread analysis of algorithms by the computer security \ncommunity allows practitioners to discover and correct potential security vulnerabilities and \nensure that the algorithms they use to protect their communications are as secure as possible.\nInstead of relying upon secret algorithms, modern cryptosystems rely upon the secrecy of one or \nmore cryptographic keys used to personalize the algorithm for specific users or groups of users. \nRecall from the discussion of transposition ciphers that a keyword is used with the columnar trans-\nposition to guide the encryption and decryption efforts. The algorithm used to perform columnar \ntransposition is well known—you just read the details of it in this book! However, columnar trans-\nposition can be used to securely communicate between parties as long as a keyword that would not \nbe guessed by an outsider is chosen. As long as the security of this keyword is maintained, it doesn’t \nmatter that third parties know the details of the algorithm. (Note, however, that columnar transpo-\nsition possesses several inherent weaknesses that make it vulnerable to cryptanalysis and therefore \nmake it an inadequate technology for use in modern secure communication.)\nKey Length\nIn the discussion of one-time pads earlier in this chapter, you learned that the main strength of \nthe one-time pad algorithm is derived from the fact that it uses an extremely long key. In fact, \nfor that algorithm, the key is at least as long as the message itself. Most modern cryptosystems \ndo not use keys quite that long, but the length of the key is still an extremely important factor \nin determining the strength of the cryptosystem and the likelihood that the encryption will not \nbe compromised through cryptanalytic techniques.\nThe rapid increase in computing power allows you to use increasingly long keys in your cryp-\ntographic efforts. 
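The effect of key length is easy to quantify: each additional bit doubles the number of candidate keys a brute-force attack must search. A quick illustration:

```python
# Number of possible keys for several common key lengths. Each added bit
# doubles the search space, so a 128-bit key is not merely "a bit harder"
# to brute-force than a 56-bit key -- it is 2**72 times harder.
for bits in (56, 112, 128):
    print(f"{bits}-bit key: {2 ** bits:,} possible keys")

print(2 ** 128 // 2 ** 56)  # the gap between 56-bit DES and a modern 128-bit key
```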
However, this same computing power is also in the hands of cryptanalysts attempting to defeat the algorithms you use. Therefore, it's essential that you outpace adversaries by using sufficiently long keys that will defeat contemporary cryptanalysis efforts. Additionally, if you are concerned that your data remain safe from cryptanalysis some time into the future, you must strive to use keys that will outpace the projected increase in cryptanalytic capability during the entire time period the data must be kept safe.

Several decades ago, when the Data Encryption Standard (DES) was created, a 56-bit key was considered sufficient to maintain the security of any data. However, there is now widespread agreement that the 56-bit DES algorithm is no longer secure due to advances in cryptanalysis techniques and supercomputing power. Modern cryptographic systems use at least a 128-bit key to protect data against prying eyes.

Symmetric Key Algorithms
Symmetric key algorithms rely upon a "shared secret" encryption key that is distributed to all members who participate in the communications. This key is used by all parties to both encrypt and decrypt messages, so the sender and the receiver both possess a copy of the shared key. When large keys are used, symmetric encryption is very difficult to break. It is primarily employed to perform bulk encryption and provides only the security service of confidentiality. Symmetric key cryptography can also be called secret key cryptography and private key cryptography. The symmetric key encryption and decryption processes are illustrated in Figure 9.2.

The use of the term private key can be tricky because it appears in three different terms that have two different meanings.
The term private key always means the private key from the key pair of public key cryptography (a.k.a. asymmetric cryptography). However, both private key cryptography and shared private key refer to symmetric cryptography. The meaning of the word private is stretched to cover a secret that two people share and keep confidential, rather than its strict meaning of a secret known to only a single person. Be sure to keep these confusing terms straight in your studies.

FIGURE 9.2 Symmetric key cryptography

Symmetric key cryptography has several weaknesses:

Key distribution is a major problem. Parties must have a secure method of exchanging the secret key before establishing communications with the symmetric key protocol. If a secure electronic channel is not available, an offline key distribution method must often be used (i.e., out-of-band exchange).

Symmetric key cryptography does not implement nonrepudiation. Because any communicating party can encrypt and decrypt messages with the shared secret key, there is no way to tell where a given message originated.

The algorithm is not scalable. It is extremely difficult for large groups to communicate using symmetric key cryptography. Secure private communication between individuals in the group could be achieved only if each possible combination of users shared a private key.

Keys must be regenerated often. Each time a participant leaves the group, all keys that involved that participant must be discarded.

The major strength of symmetric key cryptography is the great speed at which it can operate. Symmetric keying is very fast, often 1,000 to 10,000 times faster than asymmetric.
By the nature of the mathematics involved, symmetric key cryptography also lends itself to hardware implementations, creating the opportunity for even higher-speed operations.

The section "Symmetric Cryptography" later in this chapter provides a detailed look at the major secret key algorithms in use today.

Asymmetric Key Algorithms
Asymmetric key algorithms, also known as public key algorithms, provide a solution to the weaknesses of symmetric key encryption. In these systems, each user has two keys: a public key, which is shared with all users, and a private key, which is kept secret and known only to the user. But here's a twist: the two related keys must be used in tandem to encrypt and decrypt. In other words, if the public key encrypts a message, then only the private key can decrypt it, and vice versa.

The algorithm used to encrypt and decrypt messages in a public key cryptosystem is shown in Figure 9.3. Consider this example: If Alice wants to send a message to Bob using public key cryptography, she creates the message and then encrypts it using Bob's public key. The only possible way to decrypt this ciphertext is to use Bob's private key, and the only user with access to that key is Bob. Therefore, Alice can't even decrypt the message herself after she encrypts it. If Bob wants to send a reply to Alice, he simply encrypts the message using Alice's public key, and then Alice reads the message by decrypting it with her private key.

FIGURE 9.3 Asymmetric key cryptography

Asymmetric key algorithms also provide support for digital signature technology.
Basically, if Bob wants to assure other users that a message with his name on it was actually sent by him, he first creates a message digest by using a hashing algorithm (there is more on hashing algorithms in the next section). Bob then encrypts that digest using his private key. Any user who wants to verify the signature simply decrypts the message digest using Bob's public key and then verifies that the decrypted message digest is accurate. This process is explained in greater detail in Chapter 10.

The following is a list of the major strengths of asymmetric key cryptography:

The addition of new users requires the generation of only one public-private key pair. This same key pair is used to communicate with all users of the asymmetric cryptosystem. This makes the algorithm extremely scalable.

Users can be removed far more easily from asymmetric systems. Asymmetric algorithms provide a key revocation mechanism that allows a key to be canceled, effectively removing a user from the system.

Key Requirements
The fact that symmetric cryptosystems require each pair of potential communicators to have a shared private key makes the algorithm nonscalable. The total number of keys required to completely connect n parties is given by the following formula:

Number of Keys = [n * (n - 1)] / 2

Now, this might not sound so bad (and it's not for small systems), but consider the following figures:

Number of Participants    Number of Keys Required
2                         1
3                         3
4                         6
5                         10
10                        45
100                       4,950
1,000                     499,500
10,000                    49,995,000

Obviously, the larger the population, the less likely a symmetric cryptosystem will be suitable to meet its needs.

Key regeneration is required only when a user's private key is compromised. If a user leaves the community, the system administrator simply needs to invalidate that user's keys.
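The key-count figures from the sidebar are easy to verify directly (the function name is ours, purely for illustration):

```python
def symmetric_keys_needed(n: int) -> int:
    # n parties, one shared secret key per pair: n * (n - 1) / 2.
    return n * (n - 1) // 2

for n in (2, 3, 4, 5, 10, 100, 1000, 10000):
    print(n, symmetric_keys_needed(n))
```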
\nNo other keys are compromised and therefore key regeneration is not required for any \nother user.\nAsymmetric key encryption can provide integrity, authentication, and nonrepudiation.\nIf a \nuser does not share their private key with other individuals, a message signed by that user can \nbe shown to be accurate and from a specific source and cannot be later repudiated.\nKey distribution is a simple process.\nUsers who want to participate in the system simply make \ntheir public key available to anyone with whom they want to communicate. There is no method \nby which the private key can be derived from the public key.\nNo preexisting communication link needs to exist.\nTwo individuals can begin communicat-\ning securely from the moment they start communicating. Asymmetric cryptography does not \nrequire a preexisting relationship to provide a secure mechanism for data exchange.\nThe major weakness of public key cryptography is its slow speed of operation. For this rea-\nson, many applications that require the secure transmission of large amounts of data use public \nkey cryptography to establish a connection and then exchange a symmetric secret key. The \nremainder of the session then uses symmetric cryptography. Table 9.1 compares the symmetric \nand asymmetric cryptography systems. Close examination of this table reveals that a weakness \nin one system is matched by a strength in the other.\nChapter 10, “PKI and Cryptographic Applications,” provides technical details \non modern public key encryption algorithms and some of their applications.\nT A B L E\n9 . 
1\nComparison of Symmetric and Asymmetric\nSymmetric\nAsymmetric\nSingle shared key\nKey pair sets\nOut-of-band exchange\nIn-band exchange\nNot scalable\nScalable\nFast\nSlow\nBulk encryption\nSmall blocks of data, digital signatures, digital envelopes, digital \ncertificates\nConfidentiality\nIntegrity, authenticity, nonrepudiation\n" }, { "page_number": 361, "text": "316\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\nHashing Algorithms\nIn the previous section, you learned that public key cryptosystems can provide digital signature \ncapability when used in conjunction with a message digest. Message digests are summaries of \na message’s content (not unlike a file checksum) produced by a hashing algorithm. It’s extremely \ndifficult, if not impossible, to derive a message from an ideal hash function, and it’s very unlikely \nthat two messages will produce the same hash value.\nThe following are some of the more common hashing algorithms in use today:\n\u0002\nMessage Digest 2 (MD2)\n\u0002\nMessage Digest 4 (MD4)\n\u0002\nMessage Digest 5 (MD5)\n\u0002\nSecure Hash Algorithm (SHA)\n\u0002\nHash-Based Message Authentication Code (HMAC)\nChapter 10 provides details on these contemporary hashing algorithms and explains how \nthey are used to provide digital signature capability, which helps meet the cryptographic goals \nof integrity and nonrepudiation.\nSymmetric Cryptography\nYou’ve learned the basic concepts underlying symmetric key cryptography, asymmetric key cryp-\ntography, and hashing functions. 
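The hashing side of that toolkit is the easiest to demonstrate; Python's standard hashlib module, for instance, implements MD5 and the SHA family (shown here for illustration only; MD5 and SHA-1 are no longer considered collision resistant):

```python
import hashlib

# A message digest is a fixed-length summary of a message: even a
# one-letter change in the input produces a completely different digest.
d1 = hashlib.sha1(b"attack at dawn").hexdigest()
d2 = hashlib.sha1(b"attack at dusk").hexdigest()
print(d1)
print(d2)
```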
In the following sections, we'll take an in-depth look at several common symmetric cryptosystems: the Data Encryption Standard (DES), Triple DES (3DES), the International Data Encryption Algorithm (IDEA), Blowfish, Skipjack, and the Advanced Encryption Standard (AES).

Data Encryption Standard (DES)
The United States government published the Data Encryption Standard (DES) in 1977 as a proposed standard cryptosystem for all government communications. Indeed, many government entities continue to use DES for cryptographic applications today, despite the fact that it was superseded by the Advanced Encryption Standard (AES) in December 2001. DES is a 64-bit block cipher that has four modes of operation: Electronic Codebook (ECB) mode, Cipher Block Chaining (CBC) mode, Cipher Feedback (CFB) mode, and Output Feedback (OFB) mode. These modes are explained in the following sections. All of the DES modes operate on 64 bits of plaintext at a time to generate 64-bit blocks of ciphertext. The key used by DES is 56 bits long.

DES utilizes a long series of exclusive OR (XOR) operations to generate the ciphertext. This process is repeated 16 times for each encryption/decryption operation. Each repetition is commonly referred to as a "round" of encryption, explaining the statement that DES performs 16 rounds of encryption. In the following sections, we'll take a look at each of the four modes utilized by DES.

As mentioned in the text, DES uses a 56-bit key to drive the encryption and decryption process. However, you may read in some literature that DES uses a 64-bit key. This is not an inconsistency; there's a perfectly logical explanation. The DES specification calls for a 64-bit key. However, of those 64 bits, only 56 actually contain keying information. The remaining 8 bits are supposed to contain parity information to ensure that the other 56 bits are accurate.
In practice, however, those parity bits are rarely used. You should commit the 56-bit figure to memory.

Electronic Codebook (ECB) Mode
Electronic Codebook (ECB) mode is the simplest mode to understand and the least secure. Each time the algorithm processes a 64-bit block, it simply encrypts the block using the chosen secret key. This means that if the algorithm encounters the same block multiple times, it will produce the exact same encrypted block. If an enemy were eavesdropping on the communications, they could simply build a "codebook" of all of the possible encrypted values. After a sufficient number of blocks were gathered, cryptanalytic techniques could be used to decipher some of the blocks and break the encryption scheme.

This vulnerability makes it impractical to use ECB mode on all but the shortest transmissions. In everyday use, ECB is used only for the exchange of small amounts of data, such as keys and parameters used to initiate other DES modes, as well as the cells in a database.

Cipher Block Chaining (CBC) Mode
In Cipher Block Chaining (CBC) mode, each block of unencrypted text is XORed with the block of ciphertext immediately preceding it before it is encrypted using the DES algorithm. The decryption process simply decrypts the ciphertext and reverses the XOR operation. CBC implements an initialization vector (IV) and XORs it with the first block of the message, producing a unique output every time the operation is performed. The IV must be sent to the recipient, perhaps by tacking it onto the front of the completed ciphertext in plain form or by protecting it with ECB mode encryption using the same key used for the message. One important consideration when using CBC mode is that errors propagate: if one block is corrupted during transmission, it becomes impossible to decrypt that block and the next block as well.

Cipher Feedback (CFB) Mode
Cipher Feedback (CFB) mode is the streaming cipher version of CBC.
In other words, CFB operates against data produced in real time. However, instead of breaking a message into blocks, it uses memory buffers of the same block size. As a buffer becomes full, it is encrypted and then sent to the recipient(s). The system then waits for the next buffer to fill as new data is generated, and that buffer is in turn encrypted and transmitted. Other than the change from preexisting data to real-time data, CFB operates in the same fashion as CBC: it uses an IV and it uses chaining.

CBC and CFB are best suited for authentication encryption.

Output Feedback (OFB) Mode
In Output Feedback (OFB) mode, DES operates in almost the same fashion as it does in CFB mode. However, instead of XORing the plaintext with an encrypted version of the preceding block of ciphertext, DES XORs the plaintext with a seed value. For the first encrypted block, an initialization vector is used to create the seed value. Future seed values are derived by running the DES algorithm on the preceding seed value. The major advantages of OFB mode are that there is no chaining function and transmission errors do not propagate to affect the decryption of future blocks.

Triple DES (3DES)
As mentioned in previous sections, the Data Encryption Standard's 56-bit key is no longer considered adequate in the face of modern cryptanalytic techniques and supercomputing power. However, an adapted version of DES, Triple DES (3DES), uses the same algorithm to produce a more secure encryption scheme.

There are four versions of 3DES. The first simply encrypts the plaintext three times, using three different keys: K1, K2, and K3. It is known as DES-EEE3 mode (the Es indicate that there are three encryption operations, whereas the numeral 3 indicates that three different keys are used).
DES-EEE3 can be expressed using the following notation, where E(K,P) represents the \nencryption of plaintext P with key K:\nE(K1,E(K2,E(K3,P)))\nDES-EEE3 has an effective key length of 168 bits.\nThe second variant (DES-EDE3) also uses three keys but replaces the second encryption \noperation with a decryption operation:\nE(K1,D(K2,E(K3,P)))\nThe third version of 3DES (DES-EEE2) uses only two keys, K1 and K2, as follows:\nE(K1,E(K2,E(K1,P)))\nThe fourth variant of 3DES (DES-EDE2) also uses two keys but uses a decryption operation \nin the middle:\nE(K1,D(K2,E(K1,P)))\nBoth the third and fourth variants have an effective key length of 112 bits.\n" }, { "page_number": 364, "text": "Symmetric Cryptography\n319\nTechnically, there is a fifth variant of 3DES, DES-EDE1, which uses only one cryp-\ntographic key. However, it results in the exact same algorithm (and strength) as \nstandard DES and is only provided for backward compatibility purposes.\nThese four variants of 3DES were developed over the years because several cryptologists put \nforth theories that one variant was more secure than the others. However, the current belief is \nthat all modes are equally secure.\nTake some time to understand the variants of 3DES. Sit down with a pencil and \npaper and be sure you understand the way each variant uses two or three keys \nto achieve stronger encryption.\nThis discussion begs an obvious question—what happened to Double DES (2DES)? You’ll \nread in Chapter 10 that Double DES was tried but quickly abandoned when it was proven that \nan attack existed that rendered 2DES no more secure than standard DES.\nInternational Data Encryption Algorithm (IDEA)\nThe International Data Encryption Algorithm (IDEA) block cipher was developed in response \nto complaints about the insufficient key length of the DES algorithm. Like DES, IDEA operates \non 64-bit blocks of plain-/ciphertext. However, it begins its operation with a 128-bit key. 
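As a quick sanity check on the 3DES variant layouts above, the following toy sketch substitutes byte-wise XOR for DES (an illustrative assumption, chosen because XOR is trivially invertible) and shows how the E/D compositions line up, including why DES-EDE1 collapses to single DES.

```python
# Toy check of the 3DES key layouts. Byte-wise XOR with a one-byte key
# stands in for DES so that E and D are trivially invertible; this is an
# illustration of the E/D composition only, not real cryptography.

def E(key: int, data: bytes) -> bytes:
    return bytes(b ^ key for b in data)

D = E  # for XOR, decryption is the same operation as encryption

P = b"ATTACK AT DAWN"
K1, K2, K3 = 0x11, 0x22, 0x33

des_eee3 = E(K1, E(K2, E(K3, P)))   # three keys, three encryptions
des_ede2 = E(K1, D(K2, E(K1, P)))   # two keys, E-D-E layout

# The "fifth variant," DES-EDE1, uses one key: the inner D(K1, E(K1, P))
# cancels out, leaving plain single encryption with K1.
des_ede1 = E(K1, D(K1, E(K1, P)))
print(des_ede1 == E(K1, P))  # True: EDE with one key collapses to DES
```

Undoing DES-EEE3 means applying the decryption operations in reverse key order, D(K3, D(K2, D(K1, C))), which the toy functions also confirm.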
This \nkey is then broken up in a series of operations into 52 16-bit subkeys. The subkeys then act on \nthe input text using a combination of XOR and modulus operations to produce the encrypted/\ndecrypted version of the input message. IDEA is capable of operating in the same four modes \nutilized by DES: ECB, CBC, CFB, and OFB.\nAll of this material on key length, block size, and the number of rounds of encryp-\ntion may seem dreadfully boring; however, it’s very important material, so be \nsure to brush up on it while preparing for the exam.\nThe IDEA algorithm itself is patented by its Swiss developers. However, they have granted \nan unlimited license to anyone who wants to use IDEA for noncommercial purposes. IDEA pro-\nvides the cryptographic functionality in Phil Zimmermann’s popular Pretty Good Privacy (PGP) \nsecure e-mail package. Chapter 10 covers PGP in further detail.\nBlowfish\nBruce Schneier’s Blowfish block cipher is another alternative to DES and IDEA. Like its prede-\ncessors, Blowfish operates on 64-bit blocks of text. However, it extends IDEA’s key strength \neven further by allowing the use of variable-length keys ranging from a relatively insecure 32 \nbits to an extremely strong 448 bits. Obviously, the longer keys will result in a corresponding \n" }, { "page_number": 365, "text": "320\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\nincrease in encryption/decryption time. However, time trials have established Blowfish as a \nmuch faster algorithm than both IDEA and DES. Also, Mr. Schneier released Blowfish for pub-\nlic use with no license required. Blowfish encryption is built into a number of commercial soft-\nware products and operating systems. There are also a number of Blowfish libraries available \nfor software developers.\nSkipjack\nThe Skipjack algorithm was approved for use by the U.S. government in Federal Information \nProcessing Standard (FIPS) 185, the Escrowed Encryption Standard (EES). 
Like many block \nciphers, Skipjack operates on 64-bit blocks of text. It uses an 80-bit key and supports the same \nfour modes of operation supported by DES. Skipjack was quickly embraced by the U.S. gov-\nernment and provides the cryptographic routines supporting the Clipper and Capstone high-\nspeed encryption chips designed for mainstream commercial use.\nHowever, Skipjack has an added twist—it supports the escrow of encryption keys. Two gov-\nernment agencies, the National Institute of Standards and Technology (NIST) and the Depart-\nment of the Treasury, each holds a portion of the information required to reconstruct a Skipjack \nkey. When law enforcement authorities obtain legal authorization, they contact the two agencies, \nobtain the pieces of the key, and are able to decrypt communications between the affected parties.\nSkipjack and the Clipper chip have not been embraced by the cryptographic community at \nlarge because of its mistrust of the escrow procedures in place within the U.S. government. In \nfact, it’s unlikely that any key escrow arrangement will succeed given the proliferation of inex-\npensive, powerful encryption technology on the Internet and the fact that Skipjack’s 80-bit key \nis relatively insecure.\nAdvanced Encryption Standard (AES)\nIn October 2000, the National Institute of Standards and Technology (NIST) announced that \nthe Rijndael block cipher (pronounced “rhine-doll”) had been chosen as the replacement for \nDES. In December of that same year, the secretary of commerce approved FIPS 197, which man-\ndated the use of AES/Rijndael for the encryption of all sensitive but unclassified data by the U.S. \ngovernment.\nRivest Cipher 5 (RC5)\nRivest Cipher 5, or RC5, is a symmetric algorithm patented by Rivest, Shamir, and Adleman \n(RSA) Data Security, the people who developed the RSA asymmetric algorithm. 
RC5 is a block \ncipher of variable block sizes (32, 64, or 128 bits) that uses key sizes between 0 (zero) and \n2048 bits.\n" }, { "page_number": 366, "text": "Symmetric Cryptography\n321\nThe Rijndael cipher allows the use of three key strengths: 128 bits, 192 bits, and 256 bits. \nThe original specification for AES called for the processing of 128-bit blocks, but Rijndael \nexceeded this specification, allowing cryptographers to use a block size equal to the key length. \nThe number of encryption rounds depends upon the key length chosen:\n\u0002\n128-bit keys require 9 rounds of encryption.\n\u0002\n192-bit keys require 11 rounds of encryption.\n\u0002\n256-bit keys require 13 rounds of encryption.\nBy the way, two of the other AES finalists were MARS and SERPENT.\nThe Rijndael algorithm uses three layers of transformations to encrypt/decrypt blocks of \nmessage text:\n\u0002\nLinear Mix Transform\n\u0002\nNonlinear Transform\n\u0002\nKey Addition Transform\nThe total number of round key bits needed is equal to the following:\nBlock length * (number of rounds + 1)\nFor example, with a block length of 128 bits and 13 rounds of encryption, 1,792 round key \nbits are needed.\nThe operational details of these layers are beyond the scope of this book. Interested readers \ncan obtain a complete copy of the 45-page Rijndael algorithm description at the Rijndael web-\nsite: www.rijndael.com.\nAES is just one of the many symmetric encryption algorithms you need to be familiar with. \nTable 9.2 lists several common and well-known symmetric encryption algorithms along with \ntheir block size and key size.\nTwofish\nThe Twofish algorithm developed by Bruce Schneier (also the creator of Blowfish) was another \none of the AES finalists. Like Rijndael, Twofish is a block cipher. It operates on 128-bit blocks \nof data and is capable of using cryptographic keys up to 256 bits in length.\nTwofish utilizes two techniques not found in other algorithms. 
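Reading the round-key formula as block length times (number of rounds + 1), which is the reading that reproduces the 1,792-bit example given earlier, a two-line check might look like this (the helper name is my own, illustrative choice):

```python
# Round-key-bit calculation for the Rijndael example in the text,
# reading the formula as block_length * (rounds + 1), which is the
# reading that reproduces the 1,792-bit figure given for 13 rounds.

def round_key_bits(block_length: int, rounds: int) -> int:
    return block_length * (rounds + 1)

print(round_key_bits(128, 13))  # 1792, matching the example
```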
Prewhitening involves XORing \nthe plaintext with a separate subkey before the 1st round of encryption. Postwhitening uses a \nsimilar operation after the 16th round of encryption.\n" }, { "page_number": 367, "text": "322\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\nKey Distribution\nAs previously mentioned, one of the major problems underlying symmetric encryption algo-\nrithms is the secure distribution of the secret keys required to operate the algorithms. In the fol-\nlowing sections, we’ll examine the three main methods used to exchange secret keys securely: \noffline distribution, public key encryption, and the Diffie-Hellman key exchange algorithm.\nOffline Distribution\nThe most technically simple method involves the physical exchange of key material. One party \nprovides the other party with a sheet of paper or piece of storage media containing the secret \nkey. In many hardware encryption devices, this key material comes in the form of an electronic \ndevice that resembles an actual key that is inserted into the encryption device. If participants rec-\nognize each other’s voice, they might use the (tedious) process of reading keying material over \nthe telephone. However, each one of these methods has its own inherent flaws. If keying mate-\nrial is sent through the mail, it might be intercepted. Telephones can be wiretapped. Papers con-\ntaining keys might be inadvertently thrown in the trash or lost.\nT A B L E 9.2 Symmetric Memorization Chart\nName | Block Size | Key Size\nData Encryption Standard (DES) | 64 | 56\nTriple DES (3DES) | 64 | 168\nAdvanced Encryption Standard (AES), Rijndael | variable | 128, 192, 256\nTwofish | 128 | 1–256\nBlowfish (often used in SSH) | 64 | 1–448\nIDEA (used in PGP) | 64 | 128\nRivest Cipher 5 (RC5), based on RSA | 32, 64, 128 | 0–2048\nRivest Cipher 4 (RC4), based on RSA | streaming | 128\nRivest Cipher 2 (RC2), based on RSA | 64 | 128\nSkipjack | 64 | 80\n" }, { "page_number": 368, "text": "Symmetric Cryptography\n323\nPublic Key Encryption\nMany communicators want to obtain the speed benefits of secret key encryption without the has-\nsles of key distribution. For this reason, many people use public key encryption to set up an initial \ncommunications link. Once the link is successfully established and the parties are satisfied as to each \nother’s identity, they exchange a secret key over the secure public key link. They then switch com-\nmunications from the public key algorithm to the secret key algorithm and enjoy the increased pro-\ncessing speed. In general, secret key encryption is 1,000 times faster than public key encryption.\nDiffie-Hellman\nIn some cases, neither public key encryption nor offline distribution is sufficient. Two parties \nmight need to communicate with each other but they have no physical means to exchange key \nmaterial and there is no public key infrastructure in place to facilitate the exchange of secret \nkeys. In situations like this, key exchange algorithms like the Diffie-Hellman algorithm prove \nto be extremely useful mechanisms.\nThe Diffie-Hellman algorithm represented a major advance in the state of cryp-\ntographic science when it was released in 1976. 
It’s still in use today.\nThe Diffie-Hellman algorithm works as follows:\n1.\nThe communicating parties (we’ll call them Richard and Sue) agree on two large numbers: \np (which is a prime number) and g (which is an integer) such that 1 < g < p.\n2.\nRichard chooses a random large integer r and performs the following calculation:\nR = g^r mod p\n3.\nSue chooses a random large integer s and performs the following calculation:\nS = g^s mod p\n4.\nRichard sends R to Sue and Sue sends S to Richard.\n5.\nRichard then performs the following calculation:\nK = S^r mod p\n6.\nSue then performs the following calculation:\nK = R^s mod p\nAt this point, Richard and Sue both have the same value, K, and can use this for secret key \ncommunication between the two parties.\nSecure RPC (SRPC) employs Diffie-Hellman for key exchange.\n" }, { "page_number": 369, "text": "324\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\nKey Escrow\nCryptography is a powerful tool. Like most tools, it can be used for a number of beneficent pur-\nposes, but it can also be used with malicious intent. To gain a handle on the explosive growth \nof cryptographic technologies, governments around the world have floated ideas to implement \na key escrow system. These systems allow the government, under limited circumstances such as \na court order, to obtain the cryptographic key used for a particular communication from a cen-\ntral storage facility.\nThere are two major approaches to key escrow that have been proposed over the past decade:\n\u0002\nIn the Fair Cryptosystems escrow approach, the secret keys used in a communication are \ndivided into two or more pieces, each of which is given to an independent third party. Each \nof these pieces is useless on its own but may be recombined to obtain the secret key. 
When \nthe government obtains legal authority to access a particular key, it provides evidence of the \ncourt order to each of the third parties and then reassembles the secret key.\n\u0002\nThe Escrowed Encryption Standard takes a different approach by providing the govern-\nment with a technological means to decrypt ciphertext. This standard is the basis behind the \nSkipjack algorithm discussed earlier in this chapter.\nIt’s highly unlikely that government regulators will ever overcome the legal and privacy hur-\ndles necessary to implement key escrow on a widespread basis. The technology is certainly avail-\nable, but the general public will likely never accept the potential government intrusiveness it \nfacilitates.\nSummary\nCryptographers and cryptanalysts are in a never-ending race to develop more secure cryptosys-\ntems and advanced cryptanalytic techniques designed to circumvent those systems. Cryptogra-\nphy dates back as early as Caesar and has been an ongoing study for many years. In this chapter, \nyou learned some of the fundamental concepts underlying the field of cryptography, gained a \nbasic understanding of the terminology used by cryptographers, and looked at some historical \ncodes and ciphers used in the early days of cryptography. This chapter also examined the sim-\nilarities and differences between symmetric key cryptography (where communicating parties \nuse the same key) and asymmetric key cryptography (where each communicator has a pair of \npublic and private keys).\nWe wrapped up the chapter by analyzing some of the symmetric algorithms currently avail-\nable and their strengths and weaknesses as well as some solutions to the key exchange dilemma \nthat plagues secret key cryptographers. The next chapter expands this discussion to cover con-\ntemporary public key cryptographic algorithms. 
Additionally, some of the common cryptana-\nlytic techniques used to defeat both types of cryptosystems will be explored.\n" }, { "page_number": 370, "text": "Exam Essentials\n325\nExam Essentials\nUnderstand the role confidentiality plays in cryptosystems.\nConfidentiality is one of the \nmajor goals of cryptography. It ensures that messages remain protected from disclosure to \nunauthorized individuals and allows encrypted messages to be transmitted freely across an open \nnetwork. Confidentiality can be assured by both symmetric and asymmetric cryptosystems.\nUnderstand the role integrity plays in cryptosystems.\nIntegrity provides the recipient of a \nmessage with the assurance that the message was not altered (intentionally or unintentionally) \nbetween the time it was created by the sender and the time it was received by the recipient. Integ-\nrity can be assured by both symmetric and asymmetric cryptosystems.\nUnderstand the importance of providing nonrepudiation capability in cryptosystems.\nNon-\nrepudiation provides undeniable proof that the sender of a message actually authored it. It pre-\nvents the sender from subsequently denying that they sent the original message. Nonrepudiation \nis only possible with asymmetric cryptosystems.\nKnow how cryptosystems can be used to achieve authentication goals.\nAuthentication pro-\nvides assurances as to the identity of a user. One possible scheme that uses authentication is the \nchallenge-response protocol, in which the remote user is asked to encrypt a message using a key \nknown only to the communicating parties. Authentication can be achieved with both symmetric \nand asymmetric cryptosystems.\nBe familiar with the basic terminology of cryptography.\nWhen a sender wants to transmit a \nprivate message to a recipient, the sender takes the plaintext (unencrypted) message and \nencrypts it using an algorithm and a key. This produces a ciphertext message that is transmitted \nto the recipient. 
The recipient then uses a similar algorithm and key to decrypt the ciphertext \nand re-create the original plaintext message for viewing.\nBe able to explain how the binary system works and know the basic logical and mathematical \nfunctions used in cryptographic applications.\nBinary mathematics uses only the numbers 0 \nand 1 to represent false and true states, respectively. You use logical operations such as AND, \nOR, NOT, and XOR on these values to perform computational functions. The modulo function \nreturns the remainder of integer division and is critical in implementing several cryptographic \nalgorithms. Public key cryptography relies upon the use of one-way functions that are difficult \nto reverse.\nUnderstand the difference between a code and a cipher and explain the basic types of ciphers.\nCodes are cryptographic systems of symbols that operate on words or phrases and are some-\ntimes secret but don’t always provide confidentiality. Ciphers, however, are always meant to \nhide the true meaning of a message. Know how the following types of ciphers work: transpo-\nsition ciphers, substitution ciphers (including one-time pads), stream ciphers, and block ciphers.\nKnow the requirements for successful use of a one-time pad.\nFor a one-time pad to be suc-\ncessful, the key must be generated randomly without any known pattern. The key must be at \nleast as long as the message to be encrypted. The pads must be protected against physical dis-\nclosure and each pad must be used only one time and then discarded.\n" }, { "page_number": 371, "text": "326\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\nUnderstand what an initialization vector (IV) is.\nAn initialization vector (IV) is a random bit \nstring (a nonce) that is the same length as the block size that is XORed with the message. 
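A minimal sketch of the one-time pad rules above (random key, at least as long as the message, used only once), with XOR serving as both the encryption and decryption operation; the function and variable names are illustrative only:

```python
# A minimal one-time pad sketch: the key is random, as long as the
# message, and used only once. XOR implements both encryption and
# decryption. Illustration only; protecting and distributing the pad
# is the hard part in practice.
import os

def xor(data: bytes, pad: bytes) -> bytes:
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))    # random key, same length as message

ciphertext = xor(message, pad)
recovered = xor(ciphertext, pad)  # XOR with the same pad reverses it
print(recovered == message)       # True
```

Reusing the pad for a second message is what breaks the scheme, as the VENONA project demonstrated against Soviet one-time pads.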
IVs are \nused to create a unique ciphertext every time the same message is encrypted with the same key.\nUnderstand the concept of zero knowledge proof.\nZero knowledge proof is a communica-\ntion concept. A specific type of information is exchanged but no real data is transferred, as with \ndigital signatures and digital certificates.\nUnderstand split knowledge.\nSplit knowledge means that the information or privilege \nrequired to perform an operation is divided among multiple users. This ensures that no single \nperson has sufficient privileges to compromise the security of the environment. M of N Control \nis an example of split knowledge.\nUnderstand work function or work factor.\nWork function or work factor is a way to measure \nthe strength of a cryptography system by measuring the effort in terms of cost and/or time to \ndecrypt messages. Usually the time and effort required to perform a complete brute force attack \nagainst an encryption system is what a work function rating represents. The security and protec-\ntion offered by a cryptosystem is directly proportional to the value of its work function/factor.\nUnderstand the importance of key security.\nCryptographic keys provide the necessary ele-\nment of secrecy to a cryptosystem. Modern cryptosystems utilize keys that are at least 128 bits \nlong to provide adequate security. It’s generally agreed that the 56-bit key of the Data Encryp-\ntion Standard (DES) is no longer long enough to provide security.\nKnow the differences between symmetric and asymmetric cryptosystems.\nSymmetric key \ncryptosystems (or secret key cryptosystems) rely upon the use of a shared secret key. They are \nmuch faster than asymmetric algorithms but they lack support for scalability, easy key distri-\nbution, and nonrepudiation. 
Asymmetric cryptosystems use public-private key pairs for com-\nmunication between parties but operate much more slowly than symmetric algorithms.\nBe able to explain the basic operational modes of the Data Encryption Standard (DES) and \nTriple DES (3DES).\nThe Data Encryption Standard operates in four modes: Electronic Code-\nbook (ECB) mode, Cipher Block Chaining (CBC) mode, Cipher Feedback (CFB) mode, and \nOutput Feedback (OFB) mode. ECB mode is considered the least secure and is used only for \nshort messages. 3DES uses three iterations of DES with two or three different keys to increase \nthe effective key strength to 112 bits.\nKnow the Advanced Encryption Standard (AES) and the Rijndael algorithm.\nThe Advanced \nEncryption Standard (AES) utilizes the Rijndael algorithm and is the new U.S. government stan-\ndard for the secure exchange of sensitive but unclassified data. AES uses key lengths and block \nsizes of 128, 192, and 256 bits to achieve a much higher level of security than that provided by \nthe older DES algorithm.\n" }, { "page_number": 372, "text": "Written Lab\n327\nWritten Lab\nAnswer the following questions about cryptography and private key algorithms.\n1.\nWhat is the major hurdle preventing the widespread adoption of one-time pad cryptosys-\ntems to ensure data confidentiality?\n2.\nEncrypt the message “I will pass the CISSP exam and become certified next month” using \ncolumnar transposition with the keyword SECURE.\n3.\nDecrypt the message “F R Q J U D W X O D W L R Q V B R X J R W L W” using the Caesar \nROT3 substitution cipher.\n" }, { "page_number": 373, "text": "328\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\nReview Questions\n1.\nWhich one of the following is not a goal of cryptographic systems?\nA. Nonrepudiation\nB. Confidentiality\nC. Availability\nD. Integrity\n2.\nJohn recently received an electronic mail message from Bill. 
What cryptographic goal would \nneed to be met to convince John that Bill was actually the sender of the message?\nA. Nonrepudiation\nB. Confidentiality\nC. Availability\nD. Integrity\n3.\nWhat is the length of the cryptographic key used in the Data Encryption Standard (DES) \ncryptosystem?\nA. 56 bits\nB. 128 bits\nC. 192 bits\nD. 256 bits\n4.\nWhat type of cipher relies upon changing the location of characters within a message to achieve \nconfidentiality?\nA. Stream cipher\nB. Transposition cipher\nC. Block cipher\nD. Substitution cipher\n5.\nWhich one of the following is not a possible key length for the Advanced Encryption Standard \nRijndael cipher?\nA. 56 bits\nB. 128 bits\nC. 192 bits\nD. 256 bits\n" }, { "page_number": 374, "text": "Review Questions\n329\n6.\nWhich one of the following is a cryptographic goal that cannot be achieved by a secret key \ncryptosystem?\nA. Nonrepudiation\nB. Confidentiality\nC. Availability\nD. Integrity\n7.\nWhen correctly implemented, what is the only cryptosystem known to be unbreakable?\nA. Transposition cipher\nB. Substitution cipher\nC. Advanced Encryption Standard\nD. One-time pad\n8.\nWhat is the output value of the mathematical function 16 mod 3?\nA. 0\nB. 1\nC. 3\nD. 5\n9.\nIn the 1940s, a team of cryptanalysts from the United States successfully broke a Soviet code \nbased upon a one-time pad in a project known as VENONA. What rule did the Soviets break \nthat caused this failure?\nA. Key values must be random.\nB. Key values must be the same length as the message.\nC. Key values must be used only once.\nD. Key values must be protected from physical disclosure.\n10. Which one of the following cipher types operates on large pieces of a message rather than indi-\nvidual characters or bits of a message?\nA. Stream cipher\nB. Caesar cipher\nC. Block cipher\nD. ROT3 cipher\n11. What is the minimum number of cryptographic keys required for secure two-way communica-\ntions in symmetric key cryptography?\nA. One\nB. Two\nC. 
Three\nD. Four\n" }, { "page_number": 375, "text": "330\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\n12. What is the minimum number of cryptographic keys required for secure two-way communica-\ntions in asymmetric key cryptography?\nA. One\nB. Two\nC. Three\nD. Four\n13. Which one of the following Data Encryption Standard (DES) operating modes can be used for \nlarge messages with the assurance that an error early in the encryption/decryption process won’t \nspoil results throughout the communication?\nA. Cipher Block Chaining (CBC)\nB. Electronic Codebook (ECB)\nC. Cipher Feedback (CFB)\nD. Output Feedback (OFB)\n14. What encryption algorithm is used by the Clipper chip, which supports the Escrowed Encryp-\ntion Standard sponsored by the U.S. government?\nA. Data Encryption Standard (DES)\nB. Advanced Encryption Standard (AES)\nC. Skipjack\nD. IDEA\n15. What is the minimum number of cryptographic keys required to achieve a higher level of security \nthan DES with the Triple DES algorithm?\nA. 1\nB. 2\nC. 3\nD. 4\n16. What approach to key escrow divides the secret key into several pieces that are distributed to \nindependent third parties?\nA. Fair Cryptosystems\nB. Key Escrow Standard\nC. Escrowed Encryption Standard\nD. Fair Escrow\n17.\nWhat kind of attack makes the Caesar cipher virtually unusable?\nA. Meet-in-the-middle attack\nB. Escrow attack\nC. Frequency attack\nD. Transposition attack\n" }, { "page_number": 376, "text": "Review Questions\n331\n18. What type of cryptosystem commonly makes use of a passage from a well-known book for the \nencryption key?\nA. Vernam cipher\nB. Running key cipher\nC. Skipjack cipher\nD. Twofish cipher\n19. Which AES finalist makes use of prewhitening and postwhitening techniques?\nA. Rijndael\nB. Twofish\nC. Blowfish\nD. Skipjack\n20. Matthew and Richard wish to communicate using symmetric cryptography but do not have a \nprearranged secret key. What algorithm might they use to resolve this situation?\nA. 
DES\nB. AES\nC. Diffie-Hellman\nD. Skipjack\n" }, { "page_number": 377, "text": "332\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\nAnswers to Review Questions\n1.\nC. The four goals of cryptographic systems are confidentiality, integrity, authentication, and \nnonrepudiation.\n2.\nA. Nonrepudiation prevents the sender of a message from later denying that they sent it.\n3.\nA. DES uses a 56-bit key. This is considered one of the major weaknesses of this cryptosystem.\n4.\nB. Transposition ciphers use a variety of techniques to reorder the characters within a message.\n5.\nA. The Rijndael cipher allows users to select a key length of 128, 192, or 256 bits, depending \nupon the specific security requirements of the application.\n6.\nA. Nonrepudiation requires the use of a public key cryptosystem to prevent users from falsely \ndenying that they originated a message.\n7.\nD. Assuming that it is used properly, the one-time pad is the only known cryptosystem that is \nnot vulnerable to attacks.\n8.\nB. Option B is correct because 16 divided by 3 equals 5, with a remainder value of 1.\n9.\nA. The cryptanalysts from the United States discovered a pattern in the method the Soviets used \nto generate their one-time pads. After this pattern was discovered, much of the code was even-\ntually broken.\n10. C. Block ciphers operate on message “chunks” rather than on individual characters or bits. The \nother ciphers mentioned are all types of stream ciphers that operate on individual bits or char-\nacters of a message.\n11. A. Symmetric key cryptography uses a shared secret key. All communicating parties utilize the \nsame key for communication in any direction.\n12. D. In asymmetric (public key) cryptography, each communicating party must have a pair of \npublic and private keys. Therefore, two-way communication between parties requires a total of \nfour cryptographic keys (a public and private key for each user).\n13. D. 
Cipher Block Chaining and Cipher Feedback modes will carry errors throughout the entire \nencryption/decryption process. Electronic Codebook (ECB) operation is not suitable for large \namounts of data. Output Feedback (OFB) mode does not allow early errors to interfere with \nfuture encryption/decryption.\n14. C. The Skipjack algorithm implemented the key escrow standard supported by the U.S. government.\n15. B. To achieve added security over DES, 3DES must use at least two cryptographic keys.\n16. A. The Fair Cryptosystems approach would have independent third parties each store a portion of \nthe secret key and then provide them to the government upon presentation of a valid court order.\n17.\nC. The Caesar cipher (and other simple substitution ciphers) are vulnerable to frequency attacks \nthat analyze the rate at which specific letters appear in the ciphertext.\n" }, { "page_number": 378, "text": "Answers to Review Questions\n333\n18. B. Running key (or “book”) ciphers often use a passage from a commonly available book as the \nencryption key.\n19. B. The Twofish algorithm, developed by Bruce Schneier, uses prewhitening and postwhitening.\n20. C. 
The Diffie-Hellman algorithm allows for the secure exchange of symmetric keys over an inse-\ncure medium.\n" }, { "page_number": 379, "text": "334\nChapter 9\n\u0002 Cryptography and Private Key Algorithms\nAnswers to Written Lab\nFollowing are answers to the questions in this chapter’s written lab:\n1.\nThe major obstacle to the widespread adoption of one-time pad cryptosystems is the diffi-\nculty in creating and distributing the very lengthy keys that the algorithm depends on.\n2.\nThe first step in encrypting this message requires the assignment of numeric column values \nto the letters of the secret keyword:\nS E C U R E\n5 2 1 6 4 3\nNext, the letters of the message are written in order underneath the letters of the keyword:\nS E C U R E\n5 2 1 6 4 3\nI W I L L P\nA S S T H E\nC I S S P E\nX A M A N D\nB E C O M E\nC E R T I F\nI E D N E X\nT M O N T H\nFinally, the sender enciphers the message by reading down each column; the order in which \nthe columns are read correspond to the numbers assigned in the first step. This produces the \nfollowing ciphertext:\nI S S M C R D O W S I A E E E M P E E D E F X H L H P N\n M I E T I A C X B C I T L T S A O T N N\n3.\nThis message is decrypted by using the following function:\nP = (C - 3) mod 26\nC: F R Q J U D W X O D W L R Q V B R X J R W L W\nP: C O N G R A T U L A T I O N S Y O U G O T I T\nAnd the hidden message is “Congratulations You Got It.” Congratulations, you got it!\n" }, { "page_number": 380, "text": "Chapter\n10\nPKI and \nCryptographic \nApplications\nTHE CISSP EXAM TOPICS COVERED IN THIS \nCHAPTER INCLUDE:\n\u0001 Cryptographic Concepts, Methodologies, and Practices\n\u0001 Public Key Algorithms\n\u0001 Public Key Infrastructure\n\u0001 System Architecture for Implementing Cryptographic \nFunctions\n\u0001 Methods of Attack\n" }, { "page_number": 381, "text": "In Chapter 9, we introduced basic cryptography concepts and \nexplored a variety of private key cryptosystems. 
These symmetric \ncryptosystems offer fast, secure communication but introduce the \nsubstantial challenge of key exchange between previously unrelated parties. This chapter \nexplores the world of asymmetric (or public key) cryptography and the public key infrastructure \n(PKI) that supports worldwide secure communication between parties that don’t necessarily \nknow each other prior to the communication. We’ll also explore several practical applications \nof cryptography: securing electronic mail, web communications, electronic commerce, and net-\nworking. This chapter concludes with an examination of a variety of attacks malicious individ-\nuals might use to compromise weak cryptosystems.\nAsymmetric Cryptography\nThe section “Modern Cryptography” in Chapter 9 introduced the basic principles behind both \nprivate (symmetric) and public (asymmetric) key cryptography. You learned that symmetric key \ncryptosystems require both communicating parties to have the same shared secret key, creating \nthe problem of secure key distribution. You also learned that asymmetric cryptosystems avoid \nthis hurdle by using pairs of public and private keys to facilitate secure communication without \nthe overhead of complex key distribution systems. The security of these systems relies upon the \ndifficulty of reversing a one-way function.\nThe terms asymmetric cryptography and public key cryptography are often \n(acceptably) used interchangeably. However, when you get down to brass \ntacks, they can be different systems. Without getting too technical or straying \noutside the bounds of this book, suffice it to say that some asymmetric cryp-\ntography systems are not public-key based. 
Thinking asymmetric cryptogra-\nphy and public key cryptography are similar is fine for day-to-day use, but if \nyou formally study mathematics or cryptography, you’ll soon learn otherwise.\nIn the following sections, we’ll explore the concepts of public key cryptography in greater \ndetail and look at three of the more common public key cryptosystems in use today: RSA, El \nGamal, and the Elliptic Curve Cryptosystem.\n" }, { "page_number": 382, "text": "Asymmetric Cryptography\n337\nPublic and Private Keys\nRecall from Chapter 9 that public key cryptosystems rely on pairs of keys assigned to each user \nof the cryptosystem. Every user maintains both a public key and a private key. As the names \nimply, public key cryptosystem users make their public keys freely available to anyone with \nwhom they want to communicate. The mere possession of the public key by third parties does \nnot introduce any weaknesses into the cryptosystem. The private key, on the other hand, is \nreserved for the sole use of the individual. It is never shared with any other cryptosystem user.\nNormal communication between public key cryptosystem users is quite straightforward. The \ngeneral process is shown in Figure 10.1.\nF I G U R E\n1 0 . 1\nAsymmetric key cryptography\nNotice that the process does not require the sharing of private keys. The sender encrypts the \nplaintext message (P) with the recipient’s public key to create the ciphertext message (C). When \nthe recipient opens the ciphertext message, they decrypt it using their private key to re-create the \noriginal plaintext message. Once the sender encrypts the message with the recipient’s public key, \nno user (including the sender) can decrypt that message without knowledge of the recipient’s \nprivate key (the second half of the public-private key pair used to generate the message). 
This \nis the beauty of public key cryptography—public keys can be freely shared using unsecured \ncommunications and then used to create secure communications channels between users previ-\nously unknown to each other.\nYou also learned in the previous chapter that public key cryptography entails a higher degree \nof computational complexity. Keys used within public key systems must be longer than those \nused in private key systems to produce cryptosystems of equivalent strengths.\nRSA\nThe most famous public key cryptosystem is named after its creators. In 1977, Ronald Rivest, \nAdi Shamir, and Leonard Adleman proposed the RSA public key algorithm that remains a \nworldwide standard today. They patented their algorithm and formed a commercial venture \nknown as RSA Security to develop mainstream implementations of their security technology. \n[Figure 10.1 shows the sender passing plaintext P through the encryption algorithm using the receiver’s public key to produce ciphertext C, and the receiver passing C through the decryption algorithm using the receiver’s private key to recover P.]\n" }, { "page_number": 383, "text": "338\nChapter 10\n\u0002 PKI and Cryptographic Applications\nToday, the RSA algorithm forms the security backbone of a large number of well-known secu-\nrity infrastructures produced by companies like Microsoft, Nokia, and Cisco.\nThe RSA algorithm depends upon the computational difficulty inherent in factoring the \nproduct of two large prime numbers. Each user of the cryptosystem generates a pair of public and private keys using \nthe algorithm described in the following steps:\n1.\nChoose two large prime numbers (approximately 200 digits each), labeled p and q.\n2.\nCompute the product of those two numbers, n = p * q.\n3.\nSelect a number, e, that satisfies the following two requirements:\na.\ne is less than n.\nb.\ne and (p – 1)(q – 1) are relatively prime—that is, the two numbers have no common fac-\ntors other than 1.\n4.\nFind a number, d, such that (ed – 1) mod (p – 1)(q – 1) = 0.\n5.\nDistribute e and n as the public key to all cryptosystem users. 
Keep d secret as the private key.\nIf Alice wants to send an encrypted message to Bob, she generates the ciphertext (C) from the \nplaintext (P) using the following formula (where e is Bob’s public key and n is the product of p \nand q created during the key generation process):\nC = P^e mod n\nWhen Bob receives the message, he performs the following calculation to retrieve the plain-\ntext message:\nP = C^d mod n\nEl Gamal\nIn Chapter 9, you learned how the Diffie-Hellman algorithm uses large integers and modular \narithmetic to facilitate the secure exchange of secret keys over insecure communications chan-\nnels. In 1985, Dr. T. El Gamal published an article describing how the mathematical principles \nbehind the Diffie-Hellman key exchange algorithm could be extended to support an entire pub-\nlic key cryptosystem used for the encryption and decryption of messages.\nMerkle-Hellman Knapsack\nAnother early asymmetric algorithm, the Merkle-Hellman Knapsack algorithm, was developed \nthe year after RSA was publicized. Unlike RSA, it is based not upon the difficulty of factoring but \nupon the difficulty of the subset sum problem, relying on a component of set theory known as superincreasing \nsets rather than on large prime numbers. Merkle-Hellman was proven ineffective when it was \nbroken in 1984.\n" }, { "page_number": 384, "text": "One of the major advantages of El Gamal over the RSA algorithm is that it was released into \nthe public domain. Dr. El Gamal did not obtain a patent on his extension of Diffie-Hellman and \nit is freely available for use, unlike the commercialized patented RSA technology.\nHowever, El Gamal also has a major disadvantage—the algorithm doubles the length of any \nmessage it encrypts. 
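The doubling is visible in a toy El Gamal round trip. This is only a sketch with tiny, arbitrarily chosen parameters (p = 467, g = 2, and the key values below are illustrative, nothing like real key sizes):

```python
# Toy El Gamal: each plaintext block becomes TWO values (c1, c2),
# which is why the ciphertext is twice the length of the plaintext.
p, g = 467, 2          # public prime modulus and generator (illustrative)
x = 127                # recipient's private key
y = pow(g, x, p)       # recipient's public key: y = g^x mod p

k = 213                # sender's per-message random value
m = 100                # plaintext block
c1 = pow(g, k, p)                # first half of the ciphertext
c2 = (m * pow(y, k, p)) % p      # second half of the ciphertext

# Decryption: m = c2 / (c1^x) mod p, using a modular inverse.
recovered = (c2 * pow(pow(c1, x, p), -1, p)) % p
assert recovered == m
print(c1, c2)
```

Note that `pow(value, -1, p)` (modular inverse) requires Python 3.8 or later.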
This presents a major hardship when encrypting long messages or data that \nwill be transmitted over a narrow bandwidth communications circuit.\nElliptic Curve\nAlso in 1985, two mathematicians, Neal Koblitz from the University of Washington and Victor \nMiller from International Business Machines (IBM), independently proposed the application of \nelliptic curve cryptography theory to develop secure cryptographic systems.\nImportance of Key Length\nThe length of the cryptographic key is perhaps the most important security parameter that can \nbe set at the discretion of the security administrator. It’s important to understand the capabil-\nities of your encryption algorithm and choose a key length that provides an appropriate level \nof protection. This judgment can be made by weighing the difficulty of defeating a given key \nlength (measured in the amount of processing time required to defeat the cryptosystem) \nagainst the importance of the data.\nGenerally speaking, the more critical your data, the stronger the key you use to protect it \nshould be. Timeliness of the data is also an important consideration. You must take into \naccount the rapid growth of computing power—the famous Moore’s Law states that comput-\ning power doubles approximately every 18 months. If it takes current computers one year of \nprocessing time to break your code, it will take only three months if the attempt is made with \ncontemporary technology three years down the road. If you expect that your data will still be \nsensitive at that time, you should choose a much longer cryptographic key that will remain \nsecure well into the future.\nThe strengths of various key lengths also vary greatly according to the cryptosystem you’re \nusing. 
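The Moore's Law arithmetic above is easy to check. Assuming, as stated, that computing power doubles every 18 months, a quick sketch:

```python
# Estimate how long today's brute-force attack will take on future hardware,
# assuming computing power doubles every 18 months (Moore's Law).
def future_break_time(months_now: float, years_ahead: float) -> float:
    doublings = years_ahead * 12 / 18
    return months_now / 2 ** doublings

# A code that takes one year to break today takes three months in three years:
print(future_break_time(12, 3))  # 3.0
```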
According to a white paper published by Certicom, a provider of wireless security solu-\ntions, the key lengths shown in the following table for three asymmetric cryptosystems all pro-\nvide equal protection:\nCryptosystem\nKey Length\nRSA\n1,088 bits\nDSA\n1,024 bits\nElliptic curve\n160 bits\n" }, { "page_number": 385, "text": "The mathematical concepts behind elliptic curve cryptography are quite com-\nplex and well beyond the scope of this book. However, you should be generally \nfamiliar with the elliptic curve algorithm and its potential applications when \npreparing for the CISSP exam. If you are interested in learning the detailed \nmathematics behind elliptic curve cryptosystems, an excellent tutorial exists at \nwww.certicom.com/research/online.html.\nAny elliptic curve can be defined by the following equation:\ny^2 = x^3 + ax + b\nIn this equation, x, y, a, and b are all real numbers. Each elliptic curve has a corresponding \nelliptic curve group made up of the points on the elliptic curve along with the point O, located \nat infinity. Two points within the same elliptic curve group (P and Q) can be added together \nwith an elliptic curve addition algorithm. This operation is expressed, quite simply, as follows:\nP + Q\nThis problem can be extended to involve multiplication by assuming that Q is a multiple of \nP, meaning that\nQ = xP\nComputer scientists and mathematicians believe that it is extremely hard to find x, even if P \nand Q are already known. This difficult problem, known as the elliptic curve discrete logarithm \nproblem, forms the basis of elliptic curve cryptography. It is widely believed that this problem \nis harder to solve than both the prime factorization problem that the RSA cryptosystem is based \nupon and the standard discrete logarithm problem utilized by Diffie-Hellman and El Gamal. 
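The curve equation above is easy to test numerically. A minimal point-membership check, with curve parameters chosen arbitrarily for illustration:

```python
# Test whether a point (x, y) satisfies y^2 = x^3 + ax + b.
def on_curve(x: int, y: int, a: int, b: int) -> bool:
    return y ** 2 == x ** 3 + a * x + b

# Example curve y^2 = x^3 - 7x + 10 (a = -7, b = 10), picked for illustration.
print(on_curve(1, 2, -7, 10))   # True:  2^2 == 1 - 7 + 10
print(on_curve(1, 3, -7, 10))   # False: 3^2 != 4
```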
\nThis is illustrated by the data shown in the table in the sidebar “Importance of Key Length,” \nwhich noted that a 1,024-bit RSA key is cryptographically equivalent to a 160-bit elliptic curve \ncryptosystem key.\nHash Functions\nLater in this chapter, you’ll learn how cryptosystems implement digital signatures to provide \nproof that a message originated from a particular user of the cryptosystem and to ensure that \nthe message was not modified while in transit between the two parties. Before you can com-\npletely understand that concept, we must first explain the concept of hash functions. This sec-\ntion explores the basics of hash functions and looks at several common hash functions used in \nmodern digital signature algorithms.\nHash functions have a very simple purpose—they take a potentially long message and gen-\nerate a unique output value derived from the content of the message. This value is commonly \nreferred to as the message digest. Message digests can be generated by the sender of a message \n" }, { "page_number": 386, "text": "Hash Functions\n341\nand transmitted to the recipient along with the full message for two reasons. First, the recipient \ncan use the same hash function to recompute the message digest from the full message. They can \nthen compare the computed message digest to the transmitted one to ensure that the message \nsent by the originator is the same one received by the recipient. If the message digests do not \nmatch, it indicates that the message was somehow modified while in transit. Second, the mes-\nsage digest can be used to implement a digital signature algorithm. This concept is covered in \n“Digital Signatures” later in this chapter.\nThe term message digest can be used interchangeably with a wide variety of \nother synonyms, including hash, hash value, hash total, CRC, fingerprint, \nchecksum, and digital ID.\nIn most cases, a message digest is 128 bits or larger. 
However, a single-bit value can be used \nto perform the function of parity, a low-level, 1-bit checksum used to provide a \nsingle individual point of verification. In most cases, the longer the message digest, the more reli-\nable its verification of integrity.\nAccording to RSA Security, there are five basic requirements for a cryptographic hash function:\n\u0002\nThe input can be of any length.\n\u0002\nThe output has a fixed length.\n\u0002\nThe hash function is relatively easy to compute for any input.\n\u0002\nThe hash function is one-way (meaning that it is extremely hard to determine the input \nwhen provided with the output). One-way functions and their usefulness in cryptography \nare described in Chapter 9.\n\u0002\nThe hash function is collision free (meaning that it is extremely hard to find two messages \nthat produce the same hash value).\nIn the following sections, we’ll look at four common hashing algorithms: SHA, MD2, MD4, \nand MD5. HMAC is also discussed later in this chapter.\nThere are numerous hashing algorithms not addressed in this exam. In addi-\ntion to SHA, MDx, and HMAC, you should also recognize HAVAL. HAVAL (HAsh \nof VAriable Length) is a modification of MD5. HAVAL uses 1,024-bit blocks and \nproduces hash values of 128, 160, 192, 224, and 256 bits.\nSHA\nThe Secure Hash Algorithm (SHA) and its successor, SHA-1, are government standard hash \nfunctions developed by the National Institute of Standards and Technology (NIST) and are \nspecified in an official government publication—the Secure Hash Standard (SHS), also known \nas Federal Information Processing Standard (FIPS) 180.\n" }, { "page_number": 387, "text": "SHA-1 takes an input of virtually any length (in reality, there is an upper bound of approx-\nimately 2,097,152 terabytes on the algorithm) and produces a 160-bit message digest. 
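The fixed-length property is easy to observe with Python's standard hashlib module; inputs of very different sizes all hash to the same 160-bit (20-byte) digest length:

```python
import hashlib

# SHA-1 maps inputs of any length to a fixed 160-bit (20-byte) digest.
for message in (b"", b"short", b"x" * 1_000_000):
    digest = hashlib.sha1(message).digest()
    assert len(digest) == 20   # 160 bits

print(hashlib.sha1(b"abc").hexdigest())
# a9993e364706816aba3e25717850c26c9cd0d89d (the FIPS 180 "abc" test vector)
```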
Due to \nthe mathematical structure of the hashing algorithm, this provides 80 bits of protection against \ncollision attacks. The SHA-1 algorithm processes a message in 512-bit blocks. Therefore, if the \nmessage length is not a multiple of 512, the SHA algorithm pads the message with additional \ndata until the length reaches the next highest multiple of 512.\nAlthough SHA-1 is the current official standard for federal government applications, it is not \nquite strong enough. It was designed to work with the old Data Encryption Standard (DES) and \nits follow-on, Triple DES (3DES). The new Advanced Encryption Standard (described in the \npreceding chapter) supports key lengths of up to 256 bits. Therefore, the government is cur-\nrently evaluating three new hash functions to replace SHA-1 in the near future:\n\u0002\nSHA-256 produces a 256-bit message digest and provides 128 bits of protection against \ncollision attacks.\n\u0002\nSHA-512 produces a 512-bit message digest and provides 256 bits of protection against \ncollision attacks.\n\u0002\nSHA-384 uses a truncated version of the SHA-512 hash to produce a 384-bit digest that \nsupports 192 bits of protection against collision attacks.\nAlthough it might seem trivial, take the time to memorize the size of the message \ndigests produced by each one of the hash algorithms described in this chapter.\nMD2\nThe MD2 (Message Digest 2) hash algorithm was developed by Ronald Rivest (the same Rivest \nof Rivest, Shamir, and Adleman fame) in 1989 to provide a secure hash function for 8-bit pro-\ncessors. MD2 pads the message so that its length is a multiple of 16 bytes. It then computes a \n16-byte checksum and appends it to the end of the message. A 128-bit message digest is then \ngenerated by using the entire original message along with the appended checksum.\nCryptanalytic attacks exist against improper implementations of the MD2 algorithm. 
Spe-\ncifically, Nathalie Rogier and Pascal Chauvaud discovered that if the checksum is not appended \nto the message before digest computation, collisions may occur.\nMD4\nThe next year, in 1990, Rivest enhanced his message digest algorithm to support 32-bit proces-\nsors and increase the level of security. This enhanced algorithm is known as MD4. It first pads \nthe message to ensure that the message length is 64 bits smaller than a multiple of 512 bits. For \nexample, a 16-bit message would be padded with 432 additional bits of data to make it 448 bits, \nwhich is 64 bits smaller than a 512-bit message.\nThe MD4 algorithm then processes 512-bit blocks of the message in three rounds of com-\nputation. The final output is a 128-bit message digest.\n" }, { "page_number": 388, "text": "Hash Functions\n343\nThe MD4 algorithm is no longer accepted as a suitable hashing function.\nSeveral mathematicians have published papers documenting flaws in the full version of MD4 \nas well as improperly implemented versions of MD4. In particular, Hans Dobbertin published \na paper in 1996 outlining how a modern PC could be used to find collisions for MD4 message \ndigests in less than one minute. For this reason, MD4 is no longer considered to be a secure \nhashing algorithm and its use should be avoided if at all possible.\nMD5\nIn 1991, Rivest released the next version of his message digest algorithm, which he called MD5. \nIt also processes 512-bit blocks of the message, but it uses four distinct rounds of computation \nto produce a digest of the same length as the MD2 and MD4 algorithms (128 bits). MD5 has \nthe same padding requirements as MD4—the message length must be 64 bits less than a mul-\ntiple of 512 bits.\nMD5 implements additional security features that reduce the speed of message digest pro-\nduction significantly. 
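MD5's 128-bit digest is just as easy to compute with hashlib. A sketch of the common download-integrity pattern; since no real file is involved here, the "published" digest is simply computed in place for illustration:

```python
import hashlib

def md5_matches(data: bytes, published_hex: str) -> bool:
    """True if data's MD5 digest matches a vendor-published hex digest."""
    return hashlib.md5(data).hexdigest() == published_hex.lower()

patch = b"example patch contents"           # stand-in for downloaded bytes
published = hashlib.md5(patch).hexdigest()  # 32 hex chars = 128 bits
assert len(published) == 32

print(md5_matches(patch, published))                 # True
print(md5_matches(patch + b"tampered", published))   # False: digests differ
```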
Cryptanalysts have not yet proven that the full MD5 algorithm is vulner-\nable to collisions, but many experts suspect that such a proof may not be far away. However, \nMD5 is the strongest of Rivest’s algorithms and remains in use today. MD5 is commonly seen \nin use in relation to file downloads, such as updates and patches, so the recipient can verify the \nintegrity of a file after downloading and before installing or applying it to any system.\nTable 10.1 lists well-known hashing algorithms and their resultant hash value lengths in bits. \nEarmark this page for memorization.\nT A B L E\n1 0 . 1\nHash Algorithm Memorization Chart\nName\nHash Value Length\nSecure Hash Algorithm (SHA-1)\n160\nMessage Digest 5 (MD5)\n128\nMessage Digest 4 (MD4)\n128\nMessage Digest 2 (MD2)\n128\nHMAC (Hashed Message Authentication Code)\nvariable\nHAVAL (Hash of Variable Length)—an MD5 variant\n128, 160, 192, 224, and 256 bits\n" }, { "page_number": 389, "text": "Digital Signatures\nOnce you have chosen a cryptographically sound hashing algorithm, you can use it to imple-\nment a digital signature system. Digital signature infrastructures have two distinct goals:\n\u0002\nDigitally signed messages assure the recipient that the message truly came from the claimed \nsender and enforce nonrepudiation (that is, they preclude the sender from later claiming \nthat the message is a forgery).\n\u0002\nDigitally signed messages assure the recipient that the message was not altered while in \ntransit between the sender and recipient. This protects against both malicious modification \n(a third party wanting to alter the meaning of the message) and unintentional modification \n(due to faults in the communications process, such as electrical interference).\nDigital signature algorithms rely upon a combination of the two major concepts already cov-\nered in this chapter—public key cryptography and hashing functions. 
If Alice wants to digitally \nsign a message she’s sending to Bob, she performs the following actions:\n1.\nAlice generates a message digest of the original plaintext message using one of the crypto-\ngraphically sound hashing algorithms, such as SHA-1, MD2, or MD5.\n2.\nAlice then encrypts only the message digest using her private key.\n3.\nAlice appends the signed message digest to the plaintext message.\n4.\nAlice transmits the appended message to Bob.\nDigital signatures are used for more than just messages. Software vendors \noften use digital signature technology to authenticate code distributions that \nyou download from the Internet, such as applets and software patches.\nWhen Bob receives the digitally signed message, he reverses the procedure, as follows:\n1.\nBob decrypts the message digest using Alice’s public key.\n2.\nBob uses the same hashing function to create a message digest of the full plaintext message \nreceived from Alice.\n3.\nBob then compares the decrypted message digest he received from Alice with the message \ndigest he computed himself. If the two digests match, he can be assured that the message he \nreceived was sent by Alice. If they do not match, either the message was not sent by Alice \nor the message was modified while in transit.\nNote that the digital signature process does not provide any privacy in and of itself. It only \nensures that the cryptographic goals of integrity and nonrepudiation are met. However, if Alice \nwanted to ensure the privacy of her message to Bob, she would add an additional step to the \nmessage creation process. After appending the signed message digest to the plaintext message, \nAlice could encrypt the entire message with Bob’s public key. 
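Alice's and Bob's steps can be sketched with a hash function plus toy RSA-style numbers. The key values below are tiny and illustrative only; real signature schemes also pad the digest rather than hashing and exponentiating directly:

```python
import hashlib

# Toy RSA-style key pair for Alice: public (n, e), private d.
# Tiny illustrative values -- real keys run to hundreds of digits.
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    # Alice hashes the message, then encrypts the digest with her PRIVATE key.
    digest = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Bob decrypts the signature with Alice's PUBLIC key and compares digests.
    digest = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"We meet at dawn."
sig = sign(msg)
print(verify(msg, sig))                       # True
print(verify(b"We meet at noon.", sig))       # almost surely False: digests differ
```

Note that no privacy is provided here: the message itself travels in the clear, exactly as the text describes.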
When Bob received the message, \nhe would decrypt it with his own private key before following the steps just outlined.\n" }, { "page_number": 390, "text": "Digital Signatures\n345\nHMAC\nThe Hashed Message Authentication Code (HMAC) algorithm implements a partial digital sig-\nnature—it guarantees the integrity of a message during transmission, but it does not provide for \nnonrepudiation.\nHMAC can be combined with any standard message digest generation algorithm, such as \nMD5 or SHA-1. It can be combined with these algorithms by using a shared secret key. There-\nfore, only communicating parties who know the key can generate or verify the digital signature. \nIf the recipient decrypts the message digest but cannot successfully compare it to a message \ndigest generated from the plaintext message, the message was altered in transit.\nBecause HMAC relies on a shared secret key, it does not provide any nonrepudiation func-\ntionality (as previously mentioned). However, it operates in a more efficient manner than the \ndigital signature standard described in the following section and may be suitable for applica-\ntions in which symmetric key cryptography is appropriate. In short, it represents a halfway \npoint between unencrypted use of a message digest algorithm and computationally expensive \ndigital signature algorithms based upon public key cryptography.\nDigital Signature Standard\nThe National Institute of Standards and Technology specifies the digital signature algorithms \nacceptable for federal government use in Federal Information Processing Standard (FIPS) 186-2, \nalso known as the Digital Signature Standard (DSS). This document specifies that all federally \nWhich Key Should I Use?\nIf you’re new to public key cryptography, selection of the correct key for various applications \ncan be quite confusing. Encryption, decryption, message signing, and signature verification all \nuse the same algorithm with different key inputs. 
Here are a few simple rules to help keep these \nconcepts straight in your mind when preparing for the CISSP exam:\n\u0002\nIf you want to encrypt a message, use the recipient’s public key.\n\u0002\nIf you want to decrypt a message sent to you, use your private key.\n\u0002\nIf you want to digitally sign a message you are sending to someone else, use your private key.\n\u0002\nIf you want to verify the signature on a message sent by someone else, use the sender’s \npublic key.\nThese four rules are the core principles of public key cryptography and digital signatures. If you \nunderstand each of them, you’re off to a great start!\n" }, { "page_number": 391, "text": "approved digital signature algorithms must use the SHA-1 hashing function (recall from our dis-\ncussion of hash functions that this specification is currently under review and will likely be \nrevised to support longer message digests).\nDSS also specifies the encryption algorithms that can be used to support a digital signature \ninfrastructure. There are three currently approved standard encryption algorithms:\n\u0002\nThe Digital Signature Algorithm (DSA) as specified in FIPS 186-2\n\u0002\nThe Rivest, Shamir, Adleman (RSA) algorithm as specified in ANSI X9.31\n\u0002\nThe Elliptic Curve DSA (ECDSA) as specified in ANSI X9.62\nTwo other digital signature algorithms you should recognize, at least by name, \nare Schnorr’s signature algorithm and Nyberg-Rueppel’s signature algorithm. \nAlso, DES and SHA appear from time to time as algorithms employed in digital \nsignature systems.\nPublic Key Infrastructure\nThe major strength of public key encryption is its ability to facilitate communication between \nparties previously unknown to each other. This is made possible by the public key infrastructure \n(PKI) hierarchy of trust relationships. 
In the following sections, you’ll learn the basic compo-\nnents of the public key infrastructure and the cryptographic concepts that make global secure \ncommunications possible. You’ll learn the composition of a digital certificate, the role of cer-\ntificate authorities, and the process used to generate and destroy certificates.\nCertificates\nDigital certificates provide communicating parties with the assurance that they are communi-\ncating with people who truly are who they claim to be. Digital certificates are essentially \nendorsed copies of an individual’s public key. This prevents malicious individuals from distrib-\nuting false public keys on behalf of another party and then convincing third parties that they are \ncommunicating with someone else.\nDigital certificates contain specific identifying information, and their construction is gov-\nerned by an international standard—X.509. Certificates that conform to X.509 contain the fol-\nlowing data:\n\u0002\nVersion of X.509 to which the certificate conforms\n\u0002\nSerial number (from the certificate creator)\n\u0002\nSignature algorithm identifier (specifies the technique used by the certificate authority to \ndigitally sign the contents of the certificate)\n\u0002\nIssuer name (identification of the certificate authority that issued the certificate)\n" }, { "page_number": 392, "text": "Public Key Infrastructure\n347\n\u0002\nValidity period (specifies the dates and times—a starting date and time and an ending date \nand time—during which the certificate is valid)\n\u0002\nSubject’s name (contains the distinguished name, or DN, of the entity that owns the public \nkey contained in the certificate)\n\u0002\nSubject’s public key (the meat of the certificate—the actual public key the certificate owner \nused to set up secure communications)\nThe current version of X.509 (version 3) supports certificate extensions—customized vari-\nables containing data inserted into the certificate by the certificate 
authority to support tracking \nof certificates or various applications.\nIf you’re interested in building your own X.509 certificates or just want to \nexplore the inner workings of the public key infrastructure, you can purchase \nthe complete official X.509 standard from the International Telecommunica-\ntions Union. It’s part of the Open Systems Interconnection (OSI) series of com-\nmunication standards and can be purchased electronically on the ITU website \nat www.itu.int.\nX.509 is an ITU recommendation rather than an officially ratified Internet standard, and implementations can vary from \nvendor to vendor. However, both Microsoft and Netscape have adopted X.509 as their de facto \nstandard for Secure Sockets Layer (SSL) communication between their web clients and servers. \nSSL is covered in greater detail in the section “Applied Cryptography” later in this chapter.\nCertificate Authorities\nCertificate authorities (CAs) are the glue that binds the public key infrastructure together. These \nneutral organizations offer notarization services for digital certificates. In order to obtain a dig-\nital certificate from a reputable CA, you must appear in front of one of their agents in person \nand present appropriate identifying documents. The following list includes the major CAs:\n\u0002\nVeriSign\n\u0002\nThawte Consulting\n\u0002\nSocietà per i Servizi Bancari-SSB S.p.A.\n\u0002\nInternet Publishing Services\n\u0002\nCertisign Certification Digital Ltda\n\u0002\nBelSign\nThere’s nothing preventing any organization from simply setting up shop as a CA. However, \nthe certificates issued by a CA are only as good as the trust placed in the organization that issued \nthem. This is an important item to consider when receiving a digital certificate from a third \nparty. 
If you don’t recognize and trust the name of the CA that issued the certificate, you \nshouldn’t place any trust in the certificate at all.\n" }, { "page_number": 393, "text": "348\nChapter 10\n\u0002 PKI and Cryptographic Applications\nRegistration authorities (RAs) assist CAs with the burden of verifying users’ identities prior \nto issuing digital certificates. They do not directly issue certificates themselves, but they play an \nimportant role in the certification process, allowing CAs to outsource some of their workload. \nBasically, you can think of an RA as a read-only CA. The RA’s primary work task is to distrib-\nute the CRL to any clients that request it.\nYou may have heard of Certificate Path Validation (CPV) in your studies of cer-\ntificate authorities. CPV means that each certificate in a certificate path from \noriginal start or root of trust down to the server or client in question is valid and \nlegitimate. CPV can be important if you need to verify that every link between \n“trusted” endpoints remains current, valid, and trustworthy. This issue arises \nfrom time to time when intermediary systems’ certificates expire or are \nreplaced; this can break the chain of trust or the verification path. By forcing a \nreverification of all stages of trust, you can reestablish all trust links and prove \nthat the assumed trust remains assured.\nCertificate Generation and Destruction\nThe technical concepts behind the public key infrastructure are relatively simple. In the follow-\ning sections, we’ll look at the processes used by certificate authorities to create, validate, and \nrevoke client certificates.\nEnrollment\nWhen you want to obtain a digital certificate, you must first prove your identity to the certificate \nauthority (CA) in some manner; this process is called enrollment. 
As mentioned in the previous sec-\ntion, this often involves physically appearing before an agent of the certification authority with \nappropriate identification documents. Some certificate authorities provide other means of verifica-\ntion, including the use of credit report data and identity verification by trusted community leaders.\nOnce you’ve satisfied the certificate authority regarding your identity, you provide them with \nyour public key. The CA next creates an X.509 digital certificate containing your identifying \ninformation and a copy of your public key. The CA then digitally signs the certificate using the \nCA’s private key and provides you with a copy of your signed digital certificate. You may then \nsafely distribute this certificate to anyone with whom you want to communicate securely.\nVerification\nWhen you receive a digital certificate from someone with whom you want to communicate, you \nverify the certificate by checking the CA’s digital signature using the CA’s public key. Next, you \nmust check and ensure that the certificate was not published on a certificate revocation list \n(CRL). At this point, you may assume that the public key listed in the certificate is authentic, \nprovided that it satisfies the following requirements:\n\u0002\nThe digital signature of the CA is authentic.\n\u0002\nYou trust the CA.\n" }, { "page_number": 394, "text": "Public Key Infrastructure\n349\n\u0002\nThe certificate is not listed on a CRL.\n\u0002\nThe certificate actually contains the data you are trusting.\nThe last point is a subtle but extremely important item. Before you trust an identifying piece of \ninformation about someone, be sure that it is actually contained within the certificate. If a certif-\nicate contains the e-mail address (billjones@foo.com) but not the individual’s name, you can \nonly be certain that the public key contained therein is associated with that e-mail address. 
The CA \nis not making any assertions about the actual identity of the billjones@foo.com e-mail account. \nHowever, if the certificate contains the name Bill Jones along with an address and telephone num-\nber, the CA is also vouching for that information as well.\nDigital certificate verification algorithms are built in to a number of popular web browsing \nand e-mail clients, so you won’t often need to get involved in the particulars of the process. \nHowever, it’s important to have a solid understanding of the technical details taking place \nbehind the scenes to make appropriate security judgments for your organization.\nRevocation\nOccasionally, a certificate authority needs to revoke a certificate. This might occur for one of \nthe following reasons:\n\u0002\nThe certificate was compromised (e.g., the certificate owner accidentally gave away the \nprivate key).\n\u0002\nThe certificate was erroneously issued (e.g., the CA mistakenly issued a certificate without \nproper verification).\n\u0002\nThe details of the certificate changed (e.g., the subject’s name changed).\n\u0002\nThe security association changed (e.g., the subject is no longer employed by the organiza-\ntion sponsoring the certificate).\nRevocation request grace period is the maximum response time within which \na CA will perform any requested revocation. This is defined in the Certificate \nPractice Statement (CPS). The CPS states the practices a CA employs when \nissuing or managing certificates.\nThere are two techniques used to verify the authenticity of certificates and identify revoked \ncertificates:\nCertificate revocation lists\nCertificate revocation lists (CRLs) are maintained by the various \ncertification authorities and contain the serial numbers of certificates that have been issued by \na CA and have been revoked, along with the date and time the revocation went into effect. 
The major disadvantage to certificate revocation lists is that they must be downloaded and cross-referenced periodically, introducing a period of latency between the time a certificate is revoked and the time end users are notified of the revocation. However, CRLs remain the most common method of checking certificate status in use today.

Online Certificate Status Protocol (OCSP)
This protocol eliminates the latency inherent in the use of certificate revocation lists by providing a means for real-time certificate verification. When a client receives a certificate, it sends an OCSP request to the CA's OCSP server. The server then responds with a status of valid, invalid, or unknown.

Key Management

When working within the public key infrastructure, it's important that you comply with several best practice requirements to maintain the security of your communications.

First, choose your encryption system wisely. As you learned earlier, "security through obscurity" is not an appropriate approach. Choose an encryption system with an algorithm in the public domain that has been thoroughly vetted by industry experts. Be wary of systems that use a "black box" approach and maintain that the secrecy of their algorithm is critical to the integrity of the cryptosystem.

You must also select your keys in an appropriate manner. Use a key length that balances your security requirements with performance considerations. Also, ensure that your key is truly random. Any patterns within the key increase the likelihood that an attacker will be able to break your encryption and degrade the security of your cryptosystem.

When using public key encryption, keep your secret key secret! Do not, under any circumstances, allow anyone else to gain access to your private key.
Remember, allowing someone access even once permanently compromises all communications that take place (past, present, or future) using that key and allows the third party to successfully impersonate you.

Retire keys when they've served a useful life. Many organizations have mandatory key rotation requirements to protect against undetected key compromise. If you don't have a formal policy that you must follow, select an appropriate interval based upon the frequency with which you use your key. You might want to change your key pair every few months, if practical.

Back up your key! If you lose the file containing your secret key due to data corruption, disaster, or other circumstances, you'll certainly want to have a backup available. You may wish to either create your own backup or use a key escrow service that maintains the backup for you. In either case, ensure that the backup is handled in a secure manner. After all, it's just as important as your primary key file!

Applied Cryptography

Up to this point, you've learned a great deal about the foundations of cryptography, the inner workings of various cryptographic algorithms, and the use of the public key infrastructure to distribute identity credentials using digital certificates. You should now feel comfortable with the basics of cryptography and prepared to move on to higher-level applications of this technology to solve everyday communications problems. In the following sections, we'll examine the use of cryptography to secure electronic mail, web communications services, electronic commerce, and networking.

Electronic Mail

We have mentioned several times that security should be cost effective. When it comes to electronic mail, simplicity is the most cost-effective option, but sometimes cryptography functions provide specific security services that you can't avoid using.
With cost effectiveness in mind, here are some simple rules about encrypting e-mail:

- If you need confidentiality when sending an e-mail message, then encrypt the message.
- If your message must maintain integrity, then you must hash the message.
- If your message needs authentication and integrity, then you should digitally sign the message.
- If your message requires confidentiality, integrity, authentication, and nonrepudiation, then you should encrypt and digitally sign the message.

It is always the responsibility of the sender to ensure that proper mechanisms are in place to maintain the security (i.e., confidentiality, integrity, authenticity, and nonrepudiation) and privacy of a message or transmission.

One of the most demanded applications of cryptography is the encryption and signing of electronic mail messages. Until recently, encrypted e-mail required complex, awkward software that demanded manual intervention and complicated key exchange procedures. An increased emphasis on security in recent years resulted in the implementation of strong encryption technology in mainstream electronic mail packages. Next, we'll look at some of the secure electronic mail standards in widespread use today.

Pretty Good Privacy

Phil Zimmerman's Pretty Good Privacy (PGP) secure e-mail system appeared on the computer security scene in 1991. It is based upon the "web of trust" concept, where you must become trusted by one or more PGP users to begin using the system. You then accept their judgment regarding the validity of additional users and, by extension, trust a multilevel "web" of users descending from your initial trust judgments. PGP initially encountered a number of hurdles to widespread use. The most difficult obstruction was the U.S.
government export regulations, which treated encryption technology as munitions and prohibited the distribution of strong encryption technology outside of the United States. Fortunately, this restriction has since been repealed and PGP may be freely distributed to most countries.

PGP is available in two versions. The commercial version uses RSA for key exchange, IDEA for encryption/decryption, and MD5 for message digest production. The freeware version uses Diffie-Hellman key exchange, the Carlisle Adams/Stafford Tavares (CAST) 128-bit encryption/decryption algorithm, and the SHA-1 hashing function.

Privacy Enhanced Mail

The Privacy Enhanced Mail (PEM) standard addresses implementation guidelines for secure electronic mail in a variety of Internet Request for Comments (RFC) documents. RFC 1421 outlines an architecture that provides the following services:

- Disclosure protection
- Originator authenticity
- Message integrity
- Nonrepudiation (if asymmetric cryptography is used)

However, the same RFC also notes that PEM is not intended to provide the following services:

- Access control
- Traffic flow confidentiality
- Address list accuracy
- Routing control
- Assurance of message receipt and nondeniability of receipt
- Automatic association of acknowledgments with the messages to which they refer
- Replay protection

Security administrators who desire any of the services just listed should implement additional controls over and above those provided by a PEM-compliant electronic mail system.
An important distinction between PEM and PGP is that PEM uses a CA-managed hierarchy of digital certificates whereas PGP relies upon the "web of trust" between system users.

MOSS

Another Request for Comments document, RFC 1848, specifies the MIME Object Security Services (MOSS), yet another standard for secure electronic mail, designed to supersede Privacy Enhanced Mail. Like PGP, MOSS does not require the use of digital certificates and provides easy associations between certificates and e-mail addresses. It also allows the secure exchange of attachments to e-mail messages. However, MOSS does not provide any interoperability with PGP or PEM.

S/MIME

The Secure Multipurpose Internet Mail Extensions (S/MIME) protocol has emerged as a likely standard for future encrypted electronic mail efforts. S/MIME utilizes the RSA encryption algorithm and has received the backing of major industry players, including RSA Security. S/MIME has already been incorporated in a large number of commercial products, including these:

- Microsoft Outlook and Outlook Express
- Netscape Communicator
- Lotus Notes
- VeriSign Digital ID
- Eudora WorldSecure

S/MIME relies upon the use of X.509 certificates for the exchange of cryptographic keys. The public keys contained in these certificates are used for digital signatures and for the exchange of symmetric keys used for longer communications sessions. RSA is the only public key cryptographic protocol supported by S/MIME.
The protocol supports the following symmetric encryption algorithms:

- DES
- 3DES
- RC2

The strong industry support for the S/MIME standard makes it likely that S/MIME will be widely adopted and approved as an Internet standard for secure electronic mail by the Internet Engineering Task Force (IETF) in the near future.

Web

Although secure electronic mail is still in its early days, secure web browsing has achieved widespread acceptance in recent years. This is mainly due to the strong movement toward electronic commerce and the desire of both e-commerce vendors and consumers to securely exchange financial information (such as credit card information) over the Web. We'll look at the two technologies that are responsible for the small lock icon at the bottom of web browsers: Secure Sockets Layer (SSL) and Secure HTTP (S-HTTP).

Secure Sockets Layer

Secure Sockets Layer (SSL) was developed by Netscape to provide client/server encryption for web traffic. SSL operates above the TCP/IP protocol in the network stack. Hypertext Transfer Protocol over Secure Sockets Layer (HTTPS) uses port 443 to negotiate encrypted communications sessions between web servers and browser clients. Although SSL originated as a standard for Netscape browsers, Microsoft also adopted it as a security standard for its popular Internet Explorer browser. The incorporation of SSL into both of these products made it the de facto Internet standard.

SSL relies upon the exchange of server digital certificates to negotiate RSA encryption/decryption parameters between the browser and the web server. SSL's goal is to create secure communications channels that remain open for an entire web browsing session.

SSL forms the basis for a new security standard, the Transport Layer Security (TLS) protocol, specified in RFC 2246. TLS is expected to supersede SSL as it gains in popularity.
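The certificate-based negotiation described above can be poked at with Python's standard ssl module. The sketch below simply inspects the library's default verification settings (those of modern CPython, which implements TLS rather than the original SSL): verifying the server's certificate is on by default for clients, while demanding client certificates is something a server must opt in to.

```python
import ssl

# Client-side context (for connecting to HTTPS servers on port 443):
# verifying the server's certificate chain is mandatory by default.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
print(client_ctx.verify_mode == ssl.CERT_REQUIRED)   # True: server auth required

# Server-side context: authenticating clients is optional and off by default.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
print(server_ctx.verify_mode == ssl.CERT_NONE)       # True: client certs not demanded
# server_ctx.verify_mode = ssl.CERT_REQUIRED         # opt in to two-way auth
```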
SSL and TLS both support server authentication (mandatory) and client authentication (optional).

Be certain to know the differences between HTTP over SSL (HTTPS) and Secure HTTP (S-HTTP).

Secure HTTP

Secure HTTP (S-HTTP) is the second major protocol used to provide security on the World Wide Web. S-HTTP is not nearly as popular as SSL, but it has two major differences:

- S-HTTP secures individual messages between a client and server rather than creating a secure communications channel as SSL does.
- S-HTTP supports two-way authentication between a client and a server rather than the server-only authentication supported by SSL.

Steganography

Steganography is the art of using cryptographic techniques to embed secret messages within another message. Steganographic algorithms work by making alterations to the least significant bits of the many bits that make up image files. The changes are so minor that there is no appreciable effect on the viewed image. This technique allows communicating parties to hide messages in plain sight, such as embedding a secret message within an illustration on an otherwise innocent web page.

Steganographers often embed their secret messages within images or WAV files. These files are often so large that the secret message would easily be missed by even the most observant inspector.

E-Commerce

As mentioned in the previous section, the rapid growth of electronic commerce led to the widespread adoption of SSL and HTTPS as standards for the secure exchange of information through web browsers. Recently, industry experts have recognized the added security necessary for electronic transactions.
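Returning briefly to the least-significant-bit technique described under "Steganography" above, here is a minimal sketch. It operates on a bytearray standing in for raw pixel bytes; the function names are invented for the example, and real tools parse actual image formats such as BMP or PNG.

```python
# Toy least-significant-bit (LSB) steganography over raw "pixel" bytes.
def embed(pixels: bytearray, message: bytes) -> bytearray:
    out = bytearray(pixels)
    bits = "".join(f"{byte:08b}" for byte in message)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | int(bit)    # overwrite only the lowest bit
    return out

def extract(pixels: bytearray, length: int) -> bytes:
    bits = "".join(str(p & 1) for p in pixels[:length * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

cover = bytearray(range(200, 248))             # 48 "pixels": room for 6 bytes
stego = embed(cover, b"hidden")
print(extract(stego, 6))                       # b'hidden'
print(max(abs(a - b) for a, b in zip(cover, stego)))  # 1: imperceptible change
```

Each carrier byte changes by at most one, which is why the alteration has no appreciable effect on the viewed image.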
In the next section, we'll explore the Secure Electronic Transaction (SET) protocol designed to add this enhanced security.

Secure Electronic Transactions

The Secure Electronic Transaction (SET) standard was originally developed jointly by Visa and MasterCard, the two largest providers of credit cards in the United States, as a means for securing e-commerce transactions. When they outlined the business case for SET, the two vendors identified the following seven requirements:

- Provide confidentiality of payment information and enable confidentiality of order information transmitted along with the payment information.
- Ensure the integrity of all transmitted data.
- Provide authentication that a cardholder is a legitimate user of a branded payment card account.
- Provide authentication that a merchant can accept branded payment card transactions through its relationship with an acquiring financial institution.
- Ensure the use of the best security practices and system design techniques to protect all legitimate parties in an electronic commerce transaction.
- Create a protocol that neither depends on transport security mechanisms nor prevents their use.
- Facilitate and encourage interoperability among software and network providers.

Material on SET is disappearing from the Internet since the original site, www.setco.org, is no longer active. For more information on SET, try visiting www.ectag.org.

SET utilizes a combination of RSA public key cryptography and DES private key cryptography in conjunction with digital certificates to secure electronic transactions. The original SET standard was published in 1997.

Networking

The final application of cryptography we'll explore in this chapter is the use of cryptographic algorithms to provide secure networking services.
In the following sections, we'll take a brief look at two methods used to secure communications circuits, as well as IPSec and the ISAKMP protocol. We'll also look at some of the security issues surrounding wireless networking.

Circuit Encryption

Security administrators use two types of encryption techniques to protect data traveling over networks: link encryption and end-to-end encryption.

Link encryption protects entire communications circuits by creating a secure tunnel between two points using either a hardware or a software solution that encrypts all traffic entering one end of the tunnel and decrypts all traffic entering the other end of the tunnel. For example, a company with two offices connected via a data circuit might use link encryption to protect against attackers monitoring at a point in between the two offices.

End-to-end encryption protects communications between two parties (e.g., a client and a server) and is performed independently of link encryption. An example of end-to-end encryption would be the use of Privacy Enhanced Mail to pass a message between a sender and a receiver. This protects against an intruder who might be monitoring traffic on the secure side of an encrypted link or traffic sent over an unencrypted link.

The critical difference between link and end-to-end encryption is that in link encryption, all the data, including the header, trailer, address, and routing data, is encrypted. Therefore, each packet has to be decrypted at each hop so it can be properly routed to the next hop and then reencrypted before it can be sent along its way, which slows the routing. End-to-end encryption does not encrypt the header, trailer, address, and routing data, so it moves faster from point to point but is more susceptible to sniffers and eavesdroppers.
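The difference is easy to see in a sketch. Here encrypt() is a named stand-in for any real cipher, and the frame layouts are simplified for illustration:

```python
def encrypt(data: str) -> str:
    return f"ENC[{data}]"        # placeholder for a real cipher

packet = {"header": "src=10.0.0.1 dst=10.0.0.2", "payload": "secret report"}

# Link encryption: header and payload are both encrypted on the circuit,
# so each hop must decrypt the frame to read the routing data, then
# re-encrypt it before forwarding.
link_frame = encrypt(packet["header"] + "|" + packet["payload"])

# End-to-end encryption: only the payload is protected; the header stays
# readable, so intermediate hops route without decrypting (faster, but the
# routing data is exposed to sniffers).
e2e_frame = {"header": packet["header"], "payload": encrypt(packet["payload"])}

print(link_frame)
print(e2e_frame["header"], e2e_frame["payload"])
```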
When encryption happens at the higher OSI layers, it is usually end-to-end encryption, and if encryption is done at the lower layers of the OSI model, it is usually link encryption.

Secure Shell (SSH) is a good example of an end-to-end encryption technique. This suite of programs provides encrypted alternatives to common Internet applications like FTP, Telnet, and rlogin. There are actually two versions of SSH. SSH1 (which is now considered insecure) supports the DES, 3DES, IDEA, and Blowfish algorithms. SSH2 drops support for DES and IDEA but adds support for several other algorithms.

MONDEX

The MONDEX payment system, owned by MasterCard International, uses cryptographic technology to allow electronic commerce users to store value on smart chips in proprietary payment cards. The value can then be instantly transferred to a vendor at the point of purchase.

IPSec

There are various security architectures in use today, each one designed to address security issues in different environments. One such architecture that supports secure communications is the Internet Protocol Security (IPSec) standard. IPSec is a standard architecture set forth by the Internet Engineering Task Force (IETF) for setting up a secure channel to exchange information between two entities. The two entities could be two systems, two routers, two gateways, or any combination of entities. Although generally used to connect two networks, IPSec can be used to connect individual computers, such as a server and a workstation or a pair of workstations (sender and receiver, perhaps).
IPSec does not dictate all implementation details but is an open, modular framework that allows many manufacturers and software developers to develop IPSec solutions that work well with products from other vendors.

IPSec uses public key cryptography to provide encryption, access control, nonrepudiation, and message authentication, all using IP protocols. The primary use of IPSec is for virtual private networks (VPNs), so IPSec operates in either transport or tunnel mode. Tunnel mode is most often used when you set up VPNs between network gateways. In tunnel mode, the message and the original IP header are encrypted. Then a new IP header that addresses the destination's gateway is added. In contrast, in transport mode, only the message is encrypted, not the IP header.

The IP Security (IPSec) protocol provides a complete infrastructure for secured network communications. IPSec has gained widespread acceptance and is now offered in a number of commercial operating systems out of the box. IPSec relies upon security associations, and there are four main components:

- The Authentication Header (AH) provides assurances of message integrity and nonrepudiation. AH also provides authentication and access control and prevents replay attacks.
- The Encapsulating Security Payload (ESP) provides confidentiality and integrity of packet contents. It provides encryption and limited authentication and prevents replay attacks.

ESP also provides some limited authentication, but not to the degree of the AH. Though ESP is sometimes used without AH, it's rare to see AH used without ESP.

- The IP Payload Compression (IPcomp) protocol allows IPSec users to achieve enhanced performance by compressing packets prior to the encryption operation.
- The Internet Key Exchange (IKE) protocol provides for the secure exchange of cryptographic keys between IPSec participants.
IKE establishes a shared security policy between communication partners and authenticates and/or produces keys for key-dependent services. All communication partners (e.g., router/firewall/host) must be identified before traffic is sent. This is accomplished through manual pre-shared keys or by a CA-controlled key distribution service (ISAKMP).

OAKLEY is a key establishment protocol that was proposed for IPSec but was superseded by IKE. OAKLEY is based on the Diffie-Hellman algorithm and designed to be a compatible component of ISAKMP.

IPSec provides for two discrete modes of operation. When IPSec is used in transport mode, only the packet payload is encrypted. This mode is designed for peer-to-peer communication. When it's used in tunnel mode, the entire packet, including the header, is encrypted. This mode is designed for gateway-to-gateway communication.

IPSec is an extremely important concept in modern computer security. Be certain that you're familiar with the four component protocols and the two modes of IPSec operation.

At runtime, you set up an IPSec session by creating a security association (SA). The SA represents the communication session and records any configuration and status information about the connection. The SA represents a simplex connection. If you want a two-way channel, you need two SAs, one for each direction. Also, if you want to support a bidirectional channel using both AH and ESP, you will need to set up four SAs. Some of IPSec's greatest strengths come from being able to filter or manage communications on a per-SA basis so that clients or gateways between which security associations exist can be rigorously managed in terms of what kinds of protocols or services can use an IPSec connection.
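Because SAs are simplex and protocol-specific, the SA arithmetic above reduces to directions times protocols. The tiny sketch below (function name invented for illustration) restates that rule:

```python
# Security associations (SAs) are simplex, one per protocol per direction,
# so the number required is simply directions * protocols.
def required_sas(bidirectional: bool, protocols: tuple) -> int:
    directions = 2 if bidirectional else 1
    return directions * len(protocols)

print(required_sas(False, ("ESP",)))       # 1: one-way channel, ESP only
print(required_sas(True, ("ESP",)))        # 2: a two-way channel needs two SAs
print(required_sas(True, ("AH", "ESP")))   # 4: two-way channel with AH and ESP
```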
Also, without a valid security association defined, pairs of users or gateways cannot establish IPSec links.

Further details of the IPSec algorithm are provided in Chapter 3, "ISO Model, Network Security, and Protocols."

ISAKMP

The Internet Security Association and Key Management Protocol (ISAKMP) provides background security support services for IPSec by negotiating, establishing, modifying, and deleting security associations. As you learned in the previous section, IPSec relies upon a system of security associations (SAs). These SAs are managed through the use of ISAKMP. There are four basic requirements for ISAKMP, as set forth in Internet RFC 2408:

- Authenticate communicating peers.
- Create and manage security associations.
- Provide key generation mechanisms.
- Protect against threats (e.g., replay and denial of service attacks).

Wireless Networking

The widespread rapid adoption of wireless networks poses a tremendous security risk. Many traditional networks do not implement encryption for routine communications between hosts on the local network and rely upon the assumption that it would be too difficult for an attacker to gain physical access to the network wire inside a secure location to eavesdrop on the network. However, wireless networks transmit data through the air, leaving them extremely vulnerable to interception.

The security community responded with the introduction of Wired Equivalent Privacy (WEP), which provides 40-, 64-, and 128-bit encryption options to protect communications within the wireless LAN. WEP is described in IEEE 802.11 as an optional component of the wireless networking standard.
Unfortunately, there are several vulnerabilities in this protocol that make it a less than desirable choice for many security administrators.

Remember that WEP is not an end-to-end security solution. It encrypts traffic only between a mobile computer and the nearest wireless access point. Once the traffic hits the wired network, it's in the clear again.

Another commonly used wireless security standard, IEEE 802.1x, provides a flexible framework for authentication and key management in wireless networks. It greatly reduces the burden inherent in changing WEP encryption keys manually and supports a number of diverse authentication techniques.

Wireless Application Protocol (WAP)

Unlike WEP, Wireless Application Protocol (WAP) is not used for 802.11 wireless networking. Instead, WAP is used by portable devices like cell phones and PDAs to support Internet connectivity via your telco or carrier provider. WAP is not a single protocol, but rather a suite of protocols:

- Wireless Markup Language (WML) and Script
- Wireless Application Environment (WAE)
- Wireless Transaction Protocol (WTP)
- Wireless Transport Layer Security Protocol (WTLS; provides three classes of security)
- Wireless Datagram Protocol (WDP)

Wireless Transport Layer Security Protocol (WTLS) provides the authentication mechanism for WAP. It is a wireless version of TLS, which is a derivative of SSL v3.0.
WTLS provides for three types of authentication:

- Class 1 (Anonymous authentication)
- Class 2 (Server authentication)
- Class 3 (Two-way client and server authentication)

The biggest problem with WAP is known as the "gap in WAP." This means that WAP is used to protect data from the handheld device to the receiving station at the telco, but once on the telco's servers, data returns to its pre-WAP state (i.e., decrypted into plain text) before being reencoded or reencrypted into SSL for secured transmission from the telco's servers to the ultimate Internet-based destination. This temporary state of insecurity grants the telco (and other potential eavesdroppers) the ability to gain direct access to your data.

Cryptographic Attacks

As with any security mechanism, malicious individuals have found a number of attacks to defeat cryptosystems. It's important that you, as a security administrator, understand the threats posed by various cryptographic attacks to minimize the risks posed to your systems:

Analytic attack
This is an algebraic manipulation that attempts to reduce the complexity of the algorithm. Analytic attacks focus on the logic of the algorithm itself.

Implementation attack
This is a type of attack that exploits weaknesses in the implementation of a cryptography system. It focuses on exploiting the software code, not just errors and flaws but the methodology employed to program the encryption system.

Statistical attack
A statistical attack exploits statistical weaknesses in a cryptosystem, such as the inability to produce random numbers or floating-point errors. Statistical attacks attempt to find a vulnerability in the hardware or operating system hosting the cryptography application.

Brute force
Brute force attacks are quite straightforward. Such an attack attempts every possible valid combination for a key or password.
They involve using massive amounts of processing power to methodically guess the key used to secure cryptographic communications. For a non-flawed protocol, the average amount of time required to discover the key through a brute force attack grows exponentially with the length of the key. A brute force attack will always be successful given enough time. However, enough time is relative to the length of the key. For example, a computer that could brute force a DES 56-bit key in 1 second would take 149 trillion years to brute force an AES 128-bit key. Every additional bit of key length doubles the time to perform a brute force attack because the number of potential keys is doubled.

Known plaintext
In the known plaintext attack, the attacker has a copy of the encrypted message along with the plaintext message used to generate the ciphertext (the copy). This knowledge greatly assists the attacker in breaking weaker codes. For example, imagine the ease with which you could break the Caesar cipher described in Chapter 9 if you had both a plaintext and a ciphertext copy of the same message.

Chosen ciphertext
In a chosen ciphertext attack, the attacker has the ability to decrypt chosen portions of the ciphertext message and use the decrypted portion of the message to discover the key.

Chosen plaintext
In a chosen plaintext attack, the attacker has the ability to encrypt plaintext messages of their choosing and can then analyze the ciphertext output of the encryption algorithm.

Meet-in-the-middle
Attackers might use a meet-in-the-middle attack to defeat encryption algorithms that use two rounds of encryption. This attack is the reason that Double DES (2DES) was quickly discarded as a viable enhancement to the DES encryption in favor of Triple DES (3DES). In the meet-in-the-middle attack, the attacker uses a known plaintext message.
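The key-length figures quoted under "Brute force" above check out numerically. The sketch below (function and constant names invented for illustration) assumes the same baseline as the example, a machine that cracks a 56-bit key in one second:

```python
# Each additional key bit doubles the keyspace, so a machine that brute
# forces a 56-bit DES key in 1 second needs 2**(128 - 56) seconds for a
# 128-bit AES key.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def brute_force_years(key_bits: int, baseline_bits: int = 56,
                      baseline_seconds: float = 1.0) -> float:
    return baseline_seconds * 2 ** (key_bits - baseline_bits) / SECONDS_PER_YEAR

print(f"{brute_force_years(128):.1e} years")   # ~1.5e+14: about 149 trillion years
```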
The plaintext is then encrypted using every possible key (k1), while the equivalent ciphertext is decrypted using all possible keys (k2). When a match is found, the corresponding pair (k1, k2) represents both portions of the double encryption. This type of attack generally takes only double the time necessary to break a single round of encryption (roughly 2 * 2^n operations rather than the anticipated 2^n * 2^n = 2^(2n)), offering minimal added protection.

Man-in-the-middle
In the man-in-the-middle attack, a malicious individual sits between two communicating parties and intercepts all communications (including the setup of the cryptographic session). The attacker responds to the originator's initialization requests and sets up a secure session with the originator. The attacker then establishes a second secure session with the intended recipient using a different key and posing as the originator. The attacker can then "sit in the middle" of the communication and read all traffic as it passes between the two parties.

Be careful not to confuse the meet-in-the-middle attack with the man-in-the-middle attack. They sound very similar!

Birthday
The birthday attack (also known as a collision attack or reverse hash matching; see our discussion of brute force and dictionary attacks in Chapter 2) seeks to find flaws in the one-to-one nature of hashing functions. In this attack, the malicious individual seeks to substitute in a digitally signed communication a different message that produces the same message digest, thereby maintaining the validity of the original digital signature.

Replay
The replay attack is used against cryptographic algorithms that don't incorporate temporal protections.
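The meet-in-the-middle search described above can be demonstrated end to end on a toy 8-bit cipher. Everything below is illustrative: the keyed substitution stands in for a real block cipher such as DES, and two known plaintext/ciphertext pairs are used to weed out false matches.

```python
import random
from collections import defaultdict

def enc(key: int, block: int) -> int:
    # Keyed 8-bit substitution table: a stand-in for a real block cipher.
    return random.Random(key).sample(range(256), 256)[block]

def dec(key: int, block: int) -> int:
    return random.Random(key).sample(range(256), 256).index(block)

k1, k2 = 0x3A, 0xC5                          # secret double-encryption keys
plains = (0x42, 0x9D)                        # known plaintext blocks
ciphers = tuple(enc(k2, enc(k1, p)) for p in plains)

# Forward half: encrypt the known plaintexts under every candidate k1.
forward = defaultdict(list)
for k in range(256):
    forward[tuple(enc(k, p) for p in plains)].append(k)

# Backward half: decrypt the ciphertexts under every candidate k2 and meet
# in the middle. Total work is about 2 * 2**8 operations instead of the
# 2**16 a naive search over all (k1, k2) pairs would need.
matches = [(kf, k) for k in range(256)
           for kf in forward[tuple(dec(k, c) for c in ciphers)]]
print((k1, k2) in matches)   # True: the real key pair is among the candidates
```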
In this attack, the malicious individual intercepts an encrypted message between two parties (often a request for authentication) and then later "replays" the captured message to open a new session. This attack can be defeated by incorporating a time stamp and expiration period into each message.

Summary

Public key encryption provides an extremely flexible infrastructure, facilitating simple, secure communication between parties that do not necessarily know each other prior to initiating the communication. It also provides the framework for the digital signing of messages to ensure nonrepudiation and message integrity. This chapter explored public key encryption, which is made possible by the public key infrastructure (PKI) hierarchy of trust relationships. We also described some popular encryption techniques, such as link encryption and end-to-end encryption. Finally, we introduced you to the public key infrastructure, which uses certificate authorities (CAs) to generate digital certificates containing the public keys of system users and digital signatures, which rely upon a combination of public key cryptography and hashing functions.

We also looked at some of the common applications of cryptographic technology in solving everyday problems. You learned how cryptography can be used to secure electronic mail (using PGP, PEM, MOSS, and S/MIME), web communications (using SSL and S-HTTP), electronic commerce (using steganography and SET), and both peer-to-peer and gateway-to-gateway networking (using IPSec and ISAKMP) as well as wireless communications (using WEP).

Finally, we looked at some of the more common attacks used by malicious individuals attempting to interfere with or intercept encrypted communications between two parties.
Such attacks include birthday, cryptanalytic, replay, brute force, known plaintext, chosen plaintext, chosen ciphertext, meet-in-the-middle, and man-in-the-middle attacks. It's important for you to understand these attacks in order to provide adequate security against them.

Exam Essentials

Understand the key types used in asymmetric cryptography.  Public keys are freely shared among communicating parties, whereas private keys are kept secret. To encrypt a message, use the recipient's public key. To decrypt a message, use your own private key. To sign a message, use your own private key. To validate a signature, use the sender's public key.

Be familiar with the three major public key cryptosystems.  RSA is the most famous public key cryptosystem; it was developed by Rivest, Shamir, and Adleman in 1977. It depends upon the difficulty of factoring the product of prime numbers. El Gamal is an extension of the Diffie-Hellman key exchange algorithm that depends upon modular arithmetic. The elliptic curve algorithm depends upon the elliptic curve discrete logarithm problem and provides more security than other algorithms when both are used with keys of the same length.

Know the fundamental requirements of a hash function.  Good hash functions have five requirements. They must allow input of any length, provide fixed-length output, make it relatively easy to compute the hash function for any input, provide one-way functionality, and be collision free.

Be familiar with the four major hashing algorithms.  The Secure Hash Algorithm (SHA) and its successor SHA-1 make up the government standard message digest function. SHA-1 produces a 160-bit message digest. MD2 is a hash function that is designed for 8-bit processors and provides a 16-byte (128-bit) hash.
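The fixed-length-output requirement described above is easy to demonstrate. Here is a minimal sketch using Python's standard hashlib module (illustrative only; SHA-1 and MD5 appear because the text discusses them, not because they remain recommended choices today):

```python
import hashlib

# Illustration: the digest length is fixed by the algorithm,
# no matter how long the input message is.
for algo in ("sha1", "md5"):
    for message in (b"a", b"x" * 1_000_000):
        bits = len(hashlib.new(algo, message).digest()) * 8
        print(f"{algo}: {len(message)}-byte input -> {bits}-bit digest")
```

Whatever the input size, SHA-1 emits 160 bits and MD5 emits 128 bits, matching the digest sizes cited in this section.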
MD4 and MD5 both produce a 128-bit hash, but MD4 has proven vulnerabilities and is no longer accepted.

Understand how digital signatures are generated and verified.  To digitally sign a message, first use a hashing function to generate a message digest. Then encrypt the digest with your private key. To verify the digital signature on a message, decrypt the signature with the sender's public key and then compare the message digest to one you generate yourself. If they match, the message is authentic.

Know the components of the Digital Signature Standard (DSS).  The Digital Signature Standard uses the SHA-1 message digest function along with one of three encryption algorithms: the Digital Signature Algorithm (DSA), the Rivest, Shamir, Adleman (RSA) algorithm, or the Elliptic Curve DSA (ECDSA) algorithm.

Understand the public key infrastructure (PKI).  In the public key infrastructure, certificate authorities (CAs) generate digital certificates containing the public keys of system users. Users then distribute these certificates to people with whom they wish to communicate. Certificate recipients verify a certificate using the CA's public key.

Know the common applications of cryptography to secure electronic mail.  The emerging standard for encrypted messages is the S/MIME protocol. Other popular e-mail security protocols include Phil Zimmermann's Pretty Good Privacy (PGP), Privacy Enhanced Mail (PEM), and MIME Object Security Services (MOSS).

Know the common applications of cryptography to secure web activity.  The de facto standard for secure web traffic is the use of HTTP over Secure Sockets Layer (SSL), otherwise known as HTTPS. Secure HTTP (S-HTTP) also plays an important role in protecting individual messages.
Most web browsers support both standards.

Know the common applications of cryptography to secure electronic commerce.  The Secure Electronic Transaction (SET) protocol was developed jointly by Visa and MasterCard to provide end-to-end security for electronic commerce transactions.

Know the common applications of cryptography to secure networking.  The IPSec protocol standard provides a common framework for encrypting network traffic and is built into a number of common operating systems. In IPSec transport mode, packet contents are encrypted for peer-to-peer communication. In tunnel mode, the entire packet, including header information, is encrypted for gateway-to-gateway communications.

Describe IPSec.  IPSec is a security architecture framework that supports secure communication over IP. IPSec establishes a secure channel in either transport mode or tunnel mode. It can be used to establish direct communication between computers or to set up a VPN between networks. IPSec uses two protocols: Authentication Header (AH) and Encapsulating Security Payload (ESP).

Explain common cryptographic attacks.  Brute force attacks systematically try every possible cryptographic key until the correct one is found. Known plaintext, chosen ciphertext, and chosen plaintext attacks require the attacker to have some extra information in addition to the ciphertext. The meet-in-the-middle attack exploits protocols that use two rounds of encryption. The man-in-the-middle attack fools both parties into communicating with the attacker instead of directly with each other. The birthday attack is an attempt to find collisions in hash functions. The replay attack is an attempt to reuse authentication requests.

Review Questions

1. In the RSA public key cryptosystem, which one of the following numbers will always be largest?
A. e
B. n
C. p
D.
q

2. Which cryptographic algorithm forms the basis of the El Gamal cryptosystem?
A. RSA
B. Diffie-Hellman
C. 3DES
D. IDEA

3. If Richard wants to send an encrypted message to Sue using a public key cryptosystem, which key does he use to encrypt the message?
A. Richard's public key
B. Richard's private key
C. Sue's public key
D. Sue's private key

4. If a 2,048-bit plaintext message was encrypted with the El Gamal public key cryptosystem, how long would the resulting ciphertext message be?
A. 1,024 bits
B. 2,048 bits
C. 4,096 bits
D. 8,192 bits

5. Acme Widgets currently uses a 1,024-bit RSA encryption standard companywide. The company plans to convert from RSA to an elliptic curve cryptosystem. If it wishes to maintain the same cryptographic strength, what ECC key length should it use?
A. 160 bits
B. 512 bits
C. 1,024 bits
D. 2,048 bits

6. John would like to produce a message digest of a 2,048-byte message he plans to send to Mary. If he uses the SHA-1 hashing algorithm, what size will the message digest for this particular message be?
A. 160 bits
B. 512 bits
C. 1,024 bits
D. 2,048 bits

7. Which one of the following message digest algorithms is considered flawed and should no longer be used?
A. SHA-1
B. MD2
C. MD4
D. MD5

8. Which one of the following message digest algorithms is the current U.S. government standard in use by secure federal information processing systems?
A. SHA-1
B. MD2
C. MD4
D. MD5

9. Richard received an encrypted message sent to him from Sue. Which key should he use to decrypt the message?
A. Richard's public key
B. Richard's private key
C. Sue's public key
D. Sue's private key

10. Richard would like to digitally sign a message he's sending to Sue so that Sue can be sure the message came from him without modification while in transit.
Which key should he use to encrypt the message digest?
A. Richard's public key
B. Richard's private key
C. Sue's public key
D. Sue's private key

11. Which one of the following algorithms is not supported by the Digital Signature Standard?
A. Digital Signature Algorithm
B. RSA
C. El Gamal DSA
D. Elliptic Curve DSA

12. Which International Telecommunications Union (ITU) standard governs the creation and endorsement of digital certificates for secure electronic communication?
A. X.500
B. X.509
C. X.900
D. X.905

13. What cryptosystem provides the encryption/decryption technology for the commercial version of Phil Zimmermann's Pretty Good Privacy secure e-mail system?
A. DES/3DES
B. IDEA
C. ECC
D. El Gamal

14. What TCP/IP communications port is utilized by Secure Sockets Layer traffic?
A. 80
B. 220
C. 443
D. 559

15. What type of cryptographic attack rendered Double DES (2DES) no more effective than standard DES encryption?
A. Birthday
B. Chosen ciphertext
C. Meet-in-the-middle
D. Man-in-the-middle

16. Which of the following security systems was created to support the use of stored-value payment cards?
A. SET
B. IPSec
C. MONDEX
D. PGP

17. Which of the following links would be protected by WEP encryption?
A. Firewall to firewall
B. Router to firewall
C. Client to wireless access point
D. Wireless access point to router

18. What is the major disadvantage of using certificate revocation lists?
A. Key management
B. Latency
C. Record keeping
D. Vulnerability to brute force attacks

19. Which one of the following encryption algorithms is now considered insecure?
A. El Gamal
B. RSA
C. Skipjack
D. Merkle-Hellman Knapsack

20. What does IPSec define?
A. All possible security classifications for a specific configuration
B.
A framework for setting up a secure communication channel
C. The valid transition states in the Biba model
D. TCSEC security categories

Answers to Review Questions

1. B. The number n is generated as the product of the two large prime numbers p and q. Therefore, n must always be greater than both p and q. Furthermore, it is an algorithm constraint that e must be chosen such that e is smaller than n. Therefore, in RSA cryptography n is always the largest of the four variables shown in the options to this question.

2. B. The El Gamal cryptosystem extends the functionality of the Diffie-Hellman key exchange protocol to support the encryption and decryption of messages.

3. C. Richard must encrypt the message using Sue's public key so that Sue can decrypt it using her private key. If he encrypted the message with his own public key, the recipient would need to know Richard's private key to decrypt the message. If he encrypted it with his own private key, any user could decrypt the message using Richard's freely available public key. Richard could not encrypt the message using Sue's private key because he does not have access to it. If he did, any user could decrypt it using Sue's freely available public key.

4. C. The major disadvantage of the El Gamal cryptosystem is that it doubles the length of any message it encrypts. Therefore, a 2,048-bit plaintext message would yield a 4,096-bit ciphertext message when El Gamal is used for the encryption process.

5. A. The elliptic curve cryptosystem requires significantly shorter keys to achieve encryption of the same strength as encryption achieved with the RSA encryption algorithm. A 1,024-bit RSA key is cryptographically equivalent to a 160-bit elliptic curve cryptosystem key.

6. A. The SHA-1 hashing algorithm always produces a 160-bit message digest, regardless of the size of the input message.
In fact, this fixed-length output is a requirement of any secure hashing algorithm.

7. C. The MD4 algorithm has documented flaws that produce collisions, rendering it useless as a hashing function for secure cryptographic applications.

8. A. SHA-1 is the current U.S. government standard, as defined in the Secure Hashing Standard (SHS), also known as Federal Information Processing Standard (FIPS) 180. Several newer algorithms (such as SHA-256, SHA-384, and SHA-512) are being considered to replace SHA-1 and provide hash strengths compatible with the stronger Advanced Encryption Standard.

9. B. Sue would have encrypted the message using Richard's public key. Therefore, Richard needs to use the complementary key in the key pair, his private key, to decrypt the message.

10. B. Richard should encrypt the message digest with his own private key. When Sue receives the message, she will decrypt the digest with Richard's public key and then compute the digest herself. If the two digests match, she can be assured that the message truly originated from Richard.

11. C. The Digital Signature Standard allows federal government use of the Digital Signature Algorithm, RSA, or the Elliptic Curve DSA in conjunction with the SHA-1 hashing function to produce secure digital signatures.

12. B. X.509 governs digital certificates and the public key infrastructure (PKI). It defines the appropriate content for a digital certificate and the processes used by certificate authorities to generate and revoke certificates.

13. B. Pretty Good Privacy uses a "web of trust" system of digital signature verification. The encryption technology is based upon the IDEA private key cryptosystem.

14. C. Secure Sockets Layer utilizes TCP port 443 for encrypted client/server communications.

15. C.
The meet-in-the-middle attack demonstrated that it took relatively the same amount of computation power to defeat 2DES as it does to defeat standard DES. This led to the adoption of Triple DES (3DES) as a standard for government communication.

16. C. The MONDEX payment system, owned by MasterCard International, provides the cryptographic technology necessary to support stored-value payment cards.

17. C. The Wired Equivalent Privacy protocol encrypts traffic passing between a mobile client and the wireless access point. It does not provide end-to-end encryption.

18. B. Certificate revocation lists (CRLs) introduce an inherent latency to the certificate expiration process due to the time lag between CRL distributions.

19. D. The Merkle-Hellman Knapsack algorithm, which is based upon the difficulty of the subset sum problem and uses superincreasing sets as its trapdoor, has been broken by cryptanalysts.

20. B. IPSec is a security protocol that defines a framework for setting up a secure channel to exchange information between two entities.

Chapter 11
Principles of Computer Design

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
• Principles of Common Computer and Network Organizations, Architectures, and Designs

In previous chapters of this book, we've taken a look at basic security principles and the protective mechanisms put in place to prevent violation of them. We've also examined some of the specific types of attacks used by malicious individuals seeking to circumvent those protective mechanisms. Until this point, when discussing preventative measures we have focused on policy measures and the software that runs on a system. However, security professionals must also pay careful attention to the system itself and ensure that their higher-level protective controls are not built upon a shaky foundation.
After all, the most secure firewall configuration in the world won't do a bit of good if the computer it runs on has a fundamental security flaw that allows malicious individuals to simply bypass the firewall completely.

In this chapter, we'll take a look at those underlying security concerns by conducting a brief survey of a field known as computer architecture: the physical design of computers from various components. We'll examine each of the major physical components of a computing system—hardware and firmware—looking at each from a security perspective. Obviously, the detailed analysis of a system's hardware components is not always a luxury available to you due to resource and time constraints. However, all security professionals should have at least a basic understanding of these concepts in case they encounter a security incident that reaches down to the system design level.

The federal government takes an active interest in the design and specification of the computer systems used to process classified national security information. Government security agencies have designed elaborate controls, such as the TEMPEST program used to protect against unwanted electromagnetic emanations and the Orange Book security levels that define acceptable parameters for secure systems.

This chapter also introduces two key concepts: security models and security modes, both of which tie into computer architectures and system designs. A security model defines basic approaches to security that sit at the core of any security policy implementation. Security models address such basic questions as: What basic entities or operations need security? What is a security principal? What is an access control list?
Security models covered in this chapter include the state machine, Bell-LaPadula, Biba, Clark-Wilson, information flow, noninterference, Take-Grant, access control matrix, and Brewer and Nash models.

Security modes represent ways in which systems can operate, depending on various elements such as the sensitivity or security classification of the data involved, the clearance level of the user involved, and the type of data operations requested. A security mode describes the conditions under which a system runs. Four such modes are recognized: dedicated security, system high security, compartmented security, and multilevel security modes, all covered in detail in this chapter.

The next chapter, "Principles of Security Models," examines how security models and security modes condition system behavior and capabilities and explores security controls and the criteria used to evaluate compliance with them.

Computer Architecture

Computer architecture is an engineering discipline concerned with the design and construction of computing systems at a logical level. Many college-level computer engineering and computer science programs find it difficult to cover all the basic principles of computer architecture in a single semester, so this material is often divided into two one-semester courses for undergraduates. Computer architecture courses delve into the design of central processing unit (CPU) components, memory devices, device communications, and similar topics at the bit level, defining processing paths for individual logic devices that make simple "0 or 1" decisions. Most security professionals do not need that level of knowledge, which is well beyond the scope of this book.
However, if you will be involved in the security aspects of the design of computing systems at this level, you would be well advised to conduct a more thorough study of this field.

The more complex a system, the less assurance it provides. More complexity means more areas for vulnerabilities exist and more areas must be secured against threats. More vulnerabilities and more threats mean that the security provided by the system is less trustworthy.

Hardware

Any computing professional is familiar with the concept of hardware. As in the construction industry, hardware is the physical "stuff" that makes up a computer. The term hardware encompasses any tangible part of a computer that you can actually reach out and touch, from the keyboard and monitor to its CPU(s), storage media, and memory chips. Take careful note that although the physical portion of a storage device (such as a hard disk or SIMM) may be considered hardware, the contents of those devices—the collections of 0s and 1s that make up the software and data stored within them—may not. After all, you can't reach inside the computer and pull out a handful of bits and bytes!

Processor

The central processing unit (CPU), generally called the processor, is the computer's nerve center—it is the chip, or chips in a multiprocessor system, that governs all major operations and either directly performs or coordinates the complex symphony of calculations that allows a computer to perform its intended tasks. Surprisingly, the CPU is actually capable of performing only a limited set of computational and logical operations, despite the complexity of the tasks it allows the computer to perform. It is the responsibility of the operating system and compilers to translate high-level programming languages used to design software into simple machine-level instructions that a CPU understands.
This limited range of functionality is intentional—it allows a CPU to perform computational and logical operations at blazing speeds, often measured in units known as MIPS (million instructions per second). To give you an idea of the magnitude of the progress in computing technology over the years, consider this: The original Intel 8086 processor introduced in 1978 operated at a rate of 0.33 MIPS (that's 330,000 instructions per second). A reasonably current 3.2GHz Pentium 4 processor introduced in 2003 operates at a blazing speed of 3,200 MIPS, or 3,200,000,000 instructions per second, almost 10,000 times as fast!

Execution Types

As computer processing power increased, users demanded more advanced features to enable these systems to process information at greater rates and to manage multiple functions simultaneously. Computer engineers devised several methods to meet these demands.

At first blush, the terms multitasking, multiprocessing, multiprogramming, and multithreading may seem nearly identical. However, they describe very different ways of approaching the "doing two things at once" problem. We strongly advise that you take the time to review the distinctions between these terms until you feel comfortable with them.

MULTITASKING

In computing, multitasking means handling two or more tasks simultaneously. In reality, most systems do not truly multitask; they rely upon the operating system to simulate multitasking by carefully structuring the sequence of commands sent to the CPU for execution.
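This time-slicing idea can be sketched as a toy round-robin scheduler (a hypothetical illustration, not from the book; the task names and the two-unit quantum are invented for the example):

```python
from collections import deque

def run_round_robin(tasks, quantum=2):
    """Simulate one CPU interleaving several tasks in fixed time slices.

    tasks: dict mapping task name -> units of work remaining.
    Returns the order in which time slices were executed.
    """
    queue = deque(tasks.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        timeline.append((name, slice_))               # task runs for one slice
        if remaining > slice_:
            queue.append((name, remaining - slice_))  # preempted and requeued
    return timeline

print(run_round_robin({"editor": 3, "browser": 5}))
# [('editor', 2), ('browser', 2), ('editor', 1), ('browser', 2), ('browser', 1)]
```

Each task makes steady progress, yet only one ever runs at a time; this is the simulation the operating system performs at a much finer grain.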
After all, when your processor is humming along at 3,200 MIPS, it's hard to tell that it's switching between tasks rather than actually working on two tasks at once.

MULTIPROCESSING

In a multiprocessing environment, a multiprocessor computing system (that is, one with more than one CPU) harnesses the power of more than one processor to complete the execution of a single application. For example, a database server might run on a system that contains three processors. If the database application receives a number of separate queries simultaneously, it might send each query to a separate processor for execution.

Two types of multiprocessing are most common in modern systems with multiple CPUs. The scenario just described, where a single computer contains more than one processor controlled by a single operating system, is called symmetric multiprocessing (SMP). In SMP, processors share not only a common operating system, but also a common data bus and memory resources. In this type of arrangement, systems may use a large number of processors. Fortunately, this type of computing power is more than sufficient to drive most systems.

Some computationally intensive operations, such as those that support the research of scientists and mathematicians, require more processing power than a single operating system can deliver. Such operations may be best served by a technology known as massively parallel processing (MPP). MPP systems house hundreds or even thousands of processors, each of which has its own operating system and memory/bus resources. When the software that coordinates the entire system's activities and schedules them for processing encounters a computationally intensive task, it assigns responsibility for the task to a single processor. This processor in turn breaks the task up into manageable parts and distributes them to other processors for execution.
Those processors return their results to the coordinating processor, where they are assembled and returned to the requesting application. MPP systems are extremely powerful (not to mention extremely expensive!) and are the focus of a good deal of computing research.

Both types of multiprocessing provide unique advantages and are suitable for different types of situations. SMP systems are adept at processing simple operations at extremely high rates, whereas MPP systems are uniquely suited for processing very large, complex, computationally intensive tasks that lend themselves to decomposition and distribution into a number of subordinate parts.

MULTIPROGRAMMING

Multiprogramming is similar to multitasking. It involves the pseudo-simultaneous execution of two tasks on a single processor coordinated by the operating system as a way to increase operational efficiency. Multiprogramming is considered a relatively obsolete technology and is rarely found in use today except in legacy systems. There are two main differences between multiprogramming and multitasking:

• Multiprogramming usually takes place on large-scale systems, such as mainframes, whereas multitasking takes place on PC operating systems, such as Windows and Linux.

• Multitasking is normally coordinated by the operating system, whereas multiprogramming requires specially written software that coordinates its own activities and execution through the operating system.

MULTITHREADING

Multithreading permits multiple concurrent tasks to be performed within a single process. Unlike multitasking, where multiple tasks occupy multiple processes, multithreading permits multiple tasks to operate within a single process. Multithreading is often used in applications where frequent context switching between multiple active processes consumes excessive overhead and reduces efficiency.
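As a concrete sketch of threads sharing one process (illustrative only; the thread count and iteration count are invented for the example), note that all threads read and write the same variable directly, so a lock is needed to keep it consistent:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:           # every thread shares this one variable,
            counter += 1     # so updates must be serialized

# Four threads of execution within a single process.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Contrast this with four separate processes, which would each need their own copy of the counter and an explicit inter-process channel to combine results.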
In multithreading, switching between threads incurs far less overhead and is therefore more efficient. In modern Windows implementations, for example, the overhead involved in switching from one thread to another within a single process is on the order of 40 to 50 instructions, with no substantial memory transfers needed. Switching from one process to another, by contrast, involves 1,000 instructions or more and requires substantial memory transfers as well.

A good example of multithreading occurs when multiple documents are opened at the same time in a word processing program. In that situation, you do not actually run multiple instances of the word processor—this would place far too great a demand on the system. Instead, each document is treated as a single thread within a single word processor process, and the software chooses which thread it works on at any given moment.

Symmetric multiprocessing systems actually make use of threading at the operating system level. As in the word processing example just described, the operating system also contains a number of threads that control the tasks assigned to it. In a single-processor system, the OS sends one thread at a time to the processor for execution. SMP systems send one thread to each available processor for simultaneous execution.

Processing Types

Many high-security systems control the processing of information assigned to various security levels, such as the classification levels of unclassified, confidential, secret, and top secret that the U.S. government assigns to information related to national defense. Computers must be designed so that they do not—ideally, so that they cannot—inadvertently disclose information to unauthorized recipients.

Computer architects and security policy administrators have attacked this problem at the processor level in two different ways.
One is through a policy mechanism, whereas the other is through a hardware solution. The next two sections explore each of those options.

SINGLE STATE

Single state systems require the use of policy mechanisms to manage information at different levels. In this type of arrangement, security administrators approve a processor and system to handle only one security level at a time. For example, a system might be labeled to handle only secret information. All users of that system must then be approved to handle information at the secret level. This shifts the burden of protecting the information being processed on a system away from the hardware and operating system and onto the administrators who control access to the system.

MULTISTATE

Multistate systems are capable of implementing a much higher level of security. These systems are certified to handle multiple security levels simultaneously by using specialized security mechanisms such as those described in the next section, "Protection Mechanisms." These mechanisms are designed to prevent information from crossing between security levels. One user might be using a multistate system to process secret information while another user is processing top secret information at the same time. Technical mechanisms prevent information from crossing between the two users and thereby crossing between security levels.

In actual practice, multistate systems are relatively uncommon owing to the expense of implementing the necessary technical mechanisms. This expense is sometimes justified, however, when dealing with a very expensive resource, such as a massively parallel system, where the cost of obtaining multiple systems far exceeds the cost of implementing the additional security controls necessary to enable multistate operation on a single such system.

Protection Mechanisms

If a computer isn't running, it's an inert lump of plastic, silicon, and metal doing nothing.
When a computer is running, it operates a runtime environment that represents the combination of the operating system and whatever applications may be active. When running, the computer also has the capability to access files and other data as the user's security permissions allow. Within that runtime environment it's necessary to integrate security information and controls to protect the integrity of the operating system itself, to manage which users are allowed to access specific data items, to authorize or deny operations requested against such data, and so forth. The ways in which running computers implement and handle security at runtime may be broadly described as a collection of protection mechanisms. In the following sections, we describe various protection mechanisms that include protection rings, operational states, and security modes.

Because the ways in which computers implement and use protection mechanisms are so important to maintaining and controlling security, you should understand how all three mechanisms covered here—rings, operational states, and security modes—are defined and how they behave. Don't be surprised to see exam questions about specifics in all three areas, because this is such important stuff!

PROTECTION RINGS

The ring protection scheme is an oldie but a goodie: it dates all the way back to work on the Multics operating system. This experimental operating system was designed and built between 1963 and 1969 with the collaboration of Bell Laboratories, MIT, and General Electric.
Though it did see commercial use in implementations from Honeywell, Multics has left two enduring legacies in the computing world: one, it inspired the creation of a simpler, less intricate operating system called Unix (a play on the word multics), and two, it introduced the idea of protection rings to operating system design.

From a security standpoint, protection rings organize code and components in an operating system (as well as applications, utilities, or other code that runs under the operating system's control) into concentric rings, as shown in Figure 11.1. The deeper inside the circle you go, the higher the privilege level associated with the code that occupies a specific ring. Though the original Multics implementation allowed up to seven rings (numbered 0 through 6), most modern operating systems use a four-ring model (numbered 0 through 3).

As the innermost ring, 0 has the highest level of privilege and can basically access any resource, file, or memory location. The part of an operating system that always remains resident in memory (so that it can run on demand at any time) is called the kernel. It occupies ring 0 and can preempt code running at any other ring. The remaining parts of the operating system—those that come and go as various tasks are requested, operations performed, processes switched, and so forth—occupy ring 1. Ring 2 is also somewhat privileged in that it's where I/O drivers and system utilities reside; these are able to access peripheral devices, special files, and so forth that applications and other programs cannot themselves access directly. Those applications and programs occupy the outermost ring, ring 3.

The essence of the ring model lies in priority, privilege, and memory segmentation. Any process that wishes to execute must get in line (a pending process queue). The process associated with the lowest ring number always runs before processes associated with higher-numbered rings. Processes in lower-numbered rings can access more resources and interact with the operating system more directly than those in higher-numbered rings. Those processes that run in higher-numbered rings must generally ask a handler or a driver in a lower-numbered ring for services they need; this is sometimes called a mediated-access model. In its strictest implementation, each ring has its own associated memory segment. Thus, any request from a process in a higher-numbered ring for an address in a lower-numbered ring must call on a helper process in the ring associated with that address. In practice, many modern operating systems break memory into only two segments: one for system-level access (rings 0 through 2) and one for user-level programs and applications (ring 3).

Chapter 11: Principles of Computer Design

From a security standpoint, the ring model enables an operating system to protect and insulate itself from users and applications. It also permits the enforcement of strict boundaries between highly privileged operating system components (like the kernel) and less-privileged parts of the operating system (like other parts of the operating system, plus drivers and utilities). Within this model, direct access to specific resources is possible only within certain rings; likewise, certain operations (such as process switching, termination, and scheduling) are allowed only within certain rings as well.

FIGURE 11.1 In the commonly used four-ring model, protection rings segregate the operating system into kernel, components, and drivers in rings 0–2, while applications and programs run at ring 3.

The ring that a process occupies, therefore, determines its access level to system resources (and determines what kinds of resources it must request from processes in lower-numbered, more-privileged rings).
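As a toy illustration (ours, not from the book), the mediated-access model can be sketched in Python: a process may touch resources in its own ring or any outer (higher-numbered) ring directly, while requests aimed at inner rings must go through a privileged handler that checks the caller first. The names `can_access_directly` and `system_call` are invented for this sketch.

```python
# Toy model of the four-ring mediated-access scheme (illustrative only).

def can_access_directly(process_ring: int, target_ring: int) -> bool:
    """A process may directly access its own ring or any outer (higher-numbered) ring."""
    return target_ring >= process_ring

def system_call(process_ring: int, target_ring: int, authorized: bool) -> str:
    """Requests aimed at inner rings are mediated by a handler in the inner ring."""
    if can_access_directly(process_ring, target_ring):
        return "direct access"
    # The called (inner) ring checks the caller's credentials before servicing it.
    return "serviced by inner-ring handler" if authorized else "denied"

# A ring 1 process can reach rings 1-3 directly, but not ring 0.
assert can_access_directly(1, 2) and not can_access_directly(1, 0)
print(system_call(3, 0, authorized=True))  # user code asking the kernel for service
```

The key design point mirrored here is that the decision to honor a cross-ring request belongs to the inner (more privileged) ring, never to the caller.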
Processes may access objects directly only if they reside within their own ring or within some ring outside their current boundaries (in numerical terms, for example, this means a process at ring 1 can access its own resources directly, plus any associated with rings 2 and 3, but it can't access any resources associated only with ring 0). The mechanism whereby mediated access occurs—that is, the driver or handler request mentioned in a previous paragraph—is usually known as a system call and usually involves invocation of a specific system or programming interface designed to pass the request to an inner ring for service. Before any such request can be honored, however, the called ring must check to make sure that the calling process has the right credentials and authorization to access the data and to perform the operation(s) involved in satisfying the request.

[Figure 11.1 labels] Ring 0: OS kernel/memory (resident components). Ring 1: other OS components. Ring 2: drivers, protocols, etc. Ring 3: user-level programs and applications. Rings 0–2 run in supervisory or privileged mode; ring 3 runs in user mode.

PROCESS STATES

Also known as operating states, process states are various forms of execution in which a process may run. Where the operating system is concerned, it can be in one of two modes at any given moment: operating in a privileged, all-access mode known as supervisor state or operating in what's called the problem state associated with user mode, where privileges are low and all access requests must be checked against credentials for authorization before they are granted or denied.
The latter is called the problem state not because problems are guaranteed to occur, but because the unprivileged nature of user access means that problems can occur and the system must take appropriate measures to protect security, integrity, and confidentiality.

Processes line up for execution in an operating system in a processing queue, where they will be scheduled to run as a processor becomes available. Many operating systems allow processes to consume processor time only in fixed increments or chunks. When a new process is created, it enters the processing queue for the first time; should a process consume its entire chunk of processing time (called a time slice) without completing, it returns to the processing queue for another time slice the next time its turn comes around. Also, the process scheduler usually selects the highest-priority process for execution, so reaching the front of the line doesn't always guarantee access to the CPU (because a process may be preempted at the last instant by another process with higher priority).

Depending on whether a process is running or not, it can operate in one of several states:

Ready
In the ready state, a process is ready to resume or begin processing as soon as it is scheduled for execution. If the CPU is available when the process reaches this state, it will transition directly into the running state; otherwise, it sits in the ready state until its turn comes up. This means the process has all the memory and other resources it needs to begin executing immediately.

Waiting
Waiting can also be understood as "waiting for a resource"—that is, the process is ready for continued execution but is waiting for a device or access request (an interrupt of some kind) to be serviced before it can continue processing (for example, a database application that asks to read records from a file must wait for that file to be located and opened and for the right set of records to be found).

Running
The running process executes on the CPU and keeps going until it finishes, its time slice expires, or it blocks for some reason (usually because it has generated an interrupt for access to a device or the network and is waiting for that interrupt to be serviced). If the time slice ends and the process isn't completed, it returns to the ready state (and queue); if the process blocks while waiting for a resource to become available, it goes into the waiting state (and queue).

The running state is also often called the problem state. However, don't associate problem with error. Instead, think of the problem state as you would think of a math problem being solved to obtain the answer.

Supervisory
The supervisory state is used when the process must perform an action that requires greater than normal privileges, including modifying system configuration, installing device drivers, or modifying security settings.

Stopped
When a process finishes or must be terminated (because an error occurs, a required resource is not available, or a resource request can't be met), it goes into a stopped state. At this point, the operating system can recover all memory and other resources allocated to the process and reuse them for other processes as needed.

Figure 11.2 shows a diagram of how these various states relate to one another.
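As an illustrative sketch (ours, not from the book), the legal transitions among these states can be captured as a small table; any transition absent from the table is rejected. The function and variable names are invented for the example.

```python
# Toy model of process-state transitions (illustrative only).
TRANSITIONS = {
    "new":     {"ready"},                        # new processes enter the ready queue
    "ready":   {"running"},                      # scheduled onto the CPU
    "running": {"ready", "waiting", "stopped"},  # time slice ends, blocks, or finishes
    "waiting": {"ready"},                        # unblocked once its request is serviced
    "stopped": set(),                            # resources reclaimed; no way back
}

def transition(state: str, new_state: str) -> str:
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

# A process that blocks once, runs again, and then finishes:
state = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "stopped"):
    state = transition(state, nxt)
print(state)
```

Note that a waiting process cannot jump straight onto the CPU in this model; it must pass back through the ready queue first, just as the scheduler description requires.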
New processes always transition into the ready state. From there, ready processes always transition into the running state. While running, a process can transition into the stopped state if it completes or is terminated, return to the ready state for another time slice, or transition to the waiting state until its pending resource request is met. When the operating system decides which process to run next, it checks the waiting queue and the ready queue and takes the highest-priority job that's ready to run (so that only waiting jobs whose pending requests have been serviced, or are ready to service, are eligible in this consideration). A special part of the kernel, called the program executive or the process scheduler, is always around (waiting in memory) so that when a process state transition must occur, it can step in and handle the mechanics involved.

FIGURE 11.2 The process scheduler. New processes enter the ready state; a ready process moves to running if the CPU is available; a running process moves to stopped when it finishes or terminates, back to ready when it needs another time slice, or to waiting when it blocks for I/O or resources; a waiting process returns to ready when it is unblocked.

In Figure 11.2, the process scheduler manages the processes awaiting execution in the ready and waiting states and decides what happens to running processes when they transition into another state (ready, waiting, or stopped).

SECURITY MODES

The U.S. government has designated four approved security modes for systems that process classified information. These are described in the following sections. In Chapter 5, "Security Management Concepts and Principles," we reviewed the classification system used by the federal government and the concepts of security clearances and access approval. The only new term in this context is need-to-know, which refers to an access authorization scheme in which a subject's right to access an object takes into consideration not just a privilege level, but also the relevance of the data involved to the role the subject plays (or the job they perform). Need-to-know indicates that the subject requires access to the object to perform their job properly, or to fill some specific role. Those with no need-to-know may not access the object, no matter what level of privilege they hold. If you need a refresher on those concepts, please review them before proceeding.

Three specific elements must exist before the security modes themselves can be deployed:

- A hierarchical MAC environment
- Total physical control over which subjects can access the computer console
- Total physical control over which subjects can enter into the same room as the computer console

You will rarely, if ever, encounter the following modes outside of the world of government agencies and contractors. However, you may discover this terminology in other contexts, so you'd be well advised to commit the terms to memory.

DEDICATED MODE

Dedicated mode systems are essentially equivalent to the single state system described in the section "Processing Types" earlier in this chapter. There are three requirements for users of dedicated systems:

- Each user must have a security clearance that permits access to all information processed by the system.
- Each user must have access approval for all information processed by the system.
- Each user must have a valid need-to-know for all information processed by the system.

In the definitions of each of these modes, we use the phrase "all information processed by the system" for brevity.
The official definition is more comprehensive and uses the phrase "all information processed, stored, transferred, or accessed."

SYSTEM HIGH MODE

System high mode systems have slightly different requirements that must be met by users:

- Each user must have a valid security clearance that permits access to all information processed by the system.
- Each user must have access approval for all information processed by the system.
- Each user must have a valid need-to-know for some information processed by the system.

Note that the major difference between the dedicated mode and the system high mode is that not all users necessarily have a need-to-know for all information processed on a system high mode computing device.

COMPARTMENTED MODE

Compartmented mode systems weaken these requirements one step further:

- Each user must have a valid security clearance that permits access to all information processed by the system.
- Each user must have access approval for all information they will have access to on the system.
- Each user must have a valid need-to-know for all information they will have access to on the system.

Notice that the major difference between compartmented mode systems and system high mode systems is that users of a compartmented mode system do not necessarily have access approval for all of the information on the system. However, as with system high and dedicated systems, all users of the system must still have appropriate security clearances. In a special implementation of this mode called compartmented mode workstations (CMWs), users with the necessary clearances can process multiple compartments of data at the same time.

CMWs require that two forms of security labels be placed on objects: sensitivity levels and information labels. Sensitivity levels describe the levels at which objects must be protected. These are common among all four of the modes. Information labels prevent data overclassification and associate additional information with the objects, which assists in proper and accurate data labeling not related to access control.

MULTILEVEL MODE

The government's definition of multilevel mode systems pretty much parallels the technical definition given in the previous section. However, for consistency, we'll express it in terms of clearance, access approval, and need-to-know:

- Some users do not have a valid security clearance for all information processed by the system. Thus, access is controlled by whether the subject's clearance level dominates the object's sensitivity label.
- Each user must have access approval for all information they will have access to on the system.
- Each user must have a valid need-to-know for all information they will have access to on the system.

As you look through the requirements for the various modes of operation approved by the federal government, you'll notice that the administrative requirements for controlling the types of users that access a system decrease as we move from dedicated systems down to multilevel systems. However, this does not decrease the importance of limiting individual access so that users may obtain only information that they are legitimately entitled to access.
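As an illustrative sketch (ours, not from the book), a multilevel-mode access decision can be expressed in Python: the subject's clearance must dominate the object's sensitivity label, and the subject must also hold access approval and a valid need-to-know. The level ordering and function names here are assumptions made for the example.

```python
# Toy multilevel (MAC) access check (illustrative only).
# Assumed hierarchical levels, lowest to highest:
LEVELS = ["unclassified", "confidential", "secret", "top secret"]

def dominates(clearance: str, sensitivity: str) -> bool:
    """A clearance dominates any sensitivity label at or below its own level."""
    return LEVELS.index(clearance) >= LEVELS.index(sensitivity)

def may_access(clearance: str, sensitivity: str,
               approved: bool, need_to_know: bool) -> bool:
    # All three conditions from the multilevel mode definition must hold.
    return dominates(clearance, sensitivity) and approved and need_to_know

# A secret-cleared user with approval and need-to-know can read confidential data,
# but clearance alone is never enough.
assert may_access("secret", "confidential", approved=True, need_to_know=True)
assert not may_access("secret", "top secret", approved=True, need_to_know=True)
assert not may_access("top secret", "secret", approved=True, need_to_know=False)
```

The last assertion captures the point made above: even the highest clearance grants nothing without need-to-know.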
As discussed in the previous section, it's simply a matter of shifting the burden of enforcing these requirements from administrative personnel—who physically limit access to a computer—to the hardware and software—which control what information can be accessed by each user of a multiuser system.

Multilevel security mode can also be called the controlled security mode.

Table 11.1 summarizes and compares these four security modes according to security clearances required, need-to-know, and the ability to process data from multiple clearance levels (abbreviated PDMCL). When comparing all four security modes, it is generally understood that the multilevel mode is exposed to the highest level of risk.

Operating Modes

Modern processors and operating systems are designed to support multiuser environments in which individual computer users might not be granted access to all components of a system or all of the information stored on it. For that reason, the processor itself supports two modes of operation, user mode and privileged mode. These two modes are discussed in the following sections.

USER

User mode is the basic mode used by the CPU when executing user applications. In this mode, the CPU allows the execution of only a portion of its full instruction set. This is designed to protect users from accidentally damaging the system through the execution of poorly designed code or the unintentional misuse of that code.
It also protects the system and its data from a malicious user who might try to execute instructions designed to circumvent the security measures put in place by the operating system or who might mistakenly perform actions that could result in unauthorized access or damage to the system or valuable information assets.

Often processes within user mode are executed within a controlled environment called a virtual machine (VM) or a virtual subsystem machine. A virtual machine is a simulated environment created by the OS to provide a safe and efficient place for programs to execute. Each VM is isolated from all other VMs, and each VM has its own assigned memory address space that can be used by the hosted application. It is the responsibility of the elements in privileged mode (a.k.a. kernel mode) to create and support the VMs and prevent the processes in one VM from interfering with the processes in other VMs.

PRIVILEGED

CPUs also support privileged mode, which is designed to give the operating system access to the full range of instructions supported by the CPU. This mode goes by a number of names, and the exact terminology varies according to the CPU manufacturer. Some of the more common monikers are included in the following list:

- Privileged mode
- Supervisory mode
- System mode
- Kernel mode

TABLE 11.1 Comparing Security Modes

Mode            Clearance   Need-to-Know   PDMCL
Dedicated       Same        None           None
System high     Same        Yes            None
Compartmented   Same        Yes            Yes
Multilevel      Different   Yes            Yes

Clearance is Same if all users must have the same security clearances, Different otherwise. Need-to-know is None if it does not apply or is not used, or if it is used but all users have a need to know all data present on the system; it is Yes if access is limited by need-to-know restrictions. PDMCL is Yes if and when CMW implementations are used; otherwise, PDMCL is None.

No matter which term you use, the basic concept remains the same—this mode grants a wide range of permissions to the process executing on the CPU. For this reason, well-designed operating systems do not let any user applications execute in privileged mode. Only those processes that are components of the operating system itself are allowed to execute in this mode, for both security and system integrity purposes.

Don't confuse processor modes with any type of user access permissions. The fact that the high-level processor mode is sometimes called privileged or supervisory mode has no relationship to the role of a user. All user applications, including those of system administrators, run in user mode. When system administrators use system tools to make configuration changes to the system, those tools also run in user mode. When a user application needs to perform a privileged action, it passes that request to the operating system using a system call, which evaluates it and either rejects the request or approves it and executes it using a privileged-mode process outside the user's control.

Memory

The second major hardware component of a system is memory, the storage bank for information that the computer needs to keep readily available.
There are many different kinds of memory, each suitable for different purposes, and we'll take a look at each in the sections that follow.

Read-Only Memory (ROM)

Read-only memory (ROM) works like the name implies—it's memory the PC can read but can't change (no writing allowed). The contents of a standard ROM chip are burned in at the factory, and the end user simply cannot alter them. ROM chips often contain "bootstrap" information that computers use to start up prior to loading an operating system from disk. This includes the familiar power-on self-test (POST) series of diagnostics that run each time you boot a PC.

ROM's primary advantage is that it can't be modified. There is no chance that user or administrator error will accidentally wipe out or modify the contents of such a chip. This attribute makes ROM extremely desirable for orchestrating a computer's innermost workings. There is a type of ROM that may be altered by administrators to some extent. It is known as programmable read-only memory (PROM) and comes in several subtypes, described in the following sections.

PROGRAMMABLE READ-ONLY MEMORY (PROM)

A basic programmable read-only memory (PROM) chip is very similar to a ROM chip in functionality, but with one exception. During the manufacturing process, a PROM chip's contents aren't "burned in" at the factory as with standard ROM chips. Instead, a PROM incorporates special functionality that allows an end user to burn in the chip's contents later on. However, the burning process has a similar outcome—once data is written to a PROM chip, no further changes are possible. After it's burned in, a PROM chip essentially functions like a ROM chip.

PROM chips provide software developers with an opportunity to store information permanently on a high-speed, customized memory chip.
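The write-once behavior just described can be modeled in a short sketch (our own illustration, not from the book): the hypothetical `Prom` class accepts exactly one burn and rejects any later change, after which it behaves like plain ROM.

```python
# Toy model of a PROM's write-once behavior (illustrative only).
class Prom:
    def __init__(self, size: int):
        self._cells = [0] * size   # blank chip from the factory
        self._burned = False

    def burn(self, data: list) -> None:
        """An end user may program the chip exactly once."""
        if self._burned:
            raise PermissionError("PROM already burned; contents are permanent")
        self._cells[: len(data)] = data
        self._burned = True

    def read(self, addr: int) -> int:
        return self._cells[addr]

chip = Prom(4)
chip.burn([0xDE, 0xAD])
print(chip.read(0))      # reads keep working...
# chip.burn([0x00])      # ...but a second burn would raise PermissionError
```

An EPROM could be modeled the same way with one extra `erase` method (UV exposure) that resets the burned flag; for EEPROM the erase would need no physical removal.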
PROMs are commonly used for hardware applications where some custom functionality is necessary but seldom changes once programmed.

ERASABLE PROGRAMMABLE READ-ONLY MEMORY (EPROM)

Combine the relatively high cost of PROM chips and software developers' inevitable desire to tinker with their code once it's written and you've got the rationale that led to the development of erasable PROM (EPROM). These chips have a small window that, when illuminated with a special ultraviolet light, causes the contents of the chip to be erased. After this process is complete, end users can burn new information into the EPROM as if it had never been programmed before.

ELECTRONICALLY ERASABLE PROGRAMMABLE READ-ONLY MEMORY (EEPROM)

Although it's better than no erase function at all, EPROM erasure is pretty cumbersome. It requires physical removal of the chip from the computer and exposure to a special kind of ultraviolet light. A more flexible, friendly alternative is electronically erasable PROM (EEPROM), which uses electric voltages delivered to the pins of the chip to force erasure. EEPROMs can be erased without removing them from the computer, which makes them much more attractive than standard PROM or EPROM chips.

One well-known type of EEPROM is the CompactFlash card often used in modern computers, PDAs, MP3 players, and digital cameras to store files, data, music, and images. These cards can be erased without removing them from the devices that use them, but they retain information even when the device is not powered on.

Random Access Memory (RAM)

Random access memory (RAM) is readable and writable memory that contains information a computer uses during processing. RAM retains its contents only when power is continuously supplied to it. Unlike with ROM, when a computer is powered off, all data stored in RAM disappears. For this reason, RAM is useful only for temporary storage. Critical data should never be stored solely in RAM; a backup copy should always be kept on another storage device to prevent its disappearance in the event of a sudden loss of electrical power.

REAL MEMORY

Real memory (also known as main memory or primary memory) is typically the largest RAM storage resource available to a computer. It is normally composed of a number of dynamic RAM chips and, therefore, must be refreshed by the CPU on a periodic basis (see the sidebar "Dynamic vs. Static RAM" for more information on this subject).

CACHE RAM

Computer systems contain a number of caches that improve performance by taking data from slower devices and temporarily storing it in faster devices when repeated use is likely; this is called cache RAM. The processor normally contains an onboard cache of extremely fast memory used to hold data on which it will operate. This on-chip, or level 1, cache is often backed up by a static RAM cache on a separate chip, called a level 2 cache, that holds data from the computer's main bank of real memory. Likewise, real memory often contains a cache of information stored on magnetic media. This chain continues down through the memory/storage hierarchy to enable computers to improve performance by keeping data that's likely to be used next closer at hand (be it for CPU instructions, data fetches, file access, or what have you).

Many peripherals also include onboard caches to reduce the storage burden they place on the CPU and operating system. For example, many higher-end printers include large RAM caches so that the operating system can quickly spool an entire job to the printer.
After that, the processor can forget about the print job; it won't be forced to wait for the printer to actually produce the requested output, spoon-feeding it chunks of data one at a time. The printer can preprocess information from its onboard cache, thereby freeing the CPU and operating system to work on other tasks.

Registers

The CPU also includes a limited amount of onboard memory, known as registers, that provides it with directly accessible memory locations that the brain of the CPU, the arithmetic-logical unit (ALU), uses when performing calculations or processing instructions. In fact, any data that the ALU is to manipulate must be loaded into a register unless it is directly supplied as part of the instruction. The main advantage of this type of memory is that it is part of the ALU itself and, therefore, operates in lockstep with the CPU at typical CPU speeds.

Memory Addressing

When utilizing memory resources, the processor must have some means of referring to various locations in memory. The solution to this problem is known as addressing, and there are several different addressing schemes used in various circumstances. We'll look at five of the more common addressing schemes.

Dynamic vs. Static RAM

There are two main types of RAM: dynamic RAM and static RAM. Most computers contain a combination of both types and use them for different purposes.

To store data, dynamic RAM uses a series of capacitors, tiny electrical devices that hold a charge. These capacitors either hold a charge (representing a 1 bit in memory) or do not hold a charge (representing a 0 bit). However, because capacitors naturally lose their charges over time, the CPU must spend time refreshing the contents of dynamic RAM to ensure that 1 bits don't unintentionally change to 0 bits, thereby altering memory contents.

Static RAM uses more sophisticated technology—a logical device known as a flip-flop, which to all intents and purposes is simply an on/off switch that must be moved from one position to another to change a 0 to a 1 or vice versa. More important, static memory maintains its contents unaltered so long as power is supplied and imposes no CPU overhead for periodic refresh operations.

That said, dynamic RAM is cheaper than static RAM because capacitors are cheaper than flip-flops. However, static RAM runs much faster than dynamic RAM. This creates a trade-off for system designers, who combine static and dynamic RAM modules to strike the right balance of cost versus performance.

REGISTER ADDRESSING

As you learned in the previous section, registers are small memory locations directly in the CPU. When the CPU needs information from one of its registers to complete an operation, it uses a register address (e.g., "register 1") to access its contents.

IMMEDIATE ADDRESSING

Immediate addressing is not technically a memory addressing scheme per se, but rather a way of referring to data that is supplied to the CPU as part of an instruction. For example, the CPU might process the command "Add 2 to the value in register 1." This command uses two addressing schemes. The first is immediate addressing—the CPU is being told to add the value 2 and does not need to retrieve that value from a memory location; it's supplied as part of the command. The second is register addressing—it's instructed to retrieve the value from register 1.

DIRECT ADDRESSING

In direct addressing, the CPU is provided with the actual address of the memory location to access.
The address must be located on the same memory page as the instruction being executed.

INDIRECT ADDRESSING

Indirect addressing uses a scheme similar to direct addressing. However, the memory address supplied to the CPU as part of the instruction doesn't contain the actual value that the CPU is to use as an operand. Instead, the memory address contains another memory address (perhaps located on a different page). The CPU reads the indirect address to learn the address where the desired data resides and then retrieves the actual operand from that address.

BASE+OFFSET ADDRESSING

Base+offset addressing uses a value stored in one of the CPU's registers as the base location from which to begin counting. The CPU then adds the offset supplied with the instruction to that base address and retrieves the operand from that computed memory location.

Secondary Memory

Secondary memory is a term commonly used to refer to magnetic/optical media or other storage devices that contain data not immediately available to the CPU. For the CPU to access data in secondary memory, the data must first be read by the operating system and stored in real memory. However, secondary memory is much less expensive than primary memory and can be used to store massive amounts of information. In this context, hard disks, floppy drives, and optical media like CD-ROMs or DVDs can all function as secondary memory.

VIRTUAL MEMORY

Virtual memory is a special type of secondary memory that the operating system manages to make look and act just like real memory. The most common type of virtual memory is the pagefile that most operating systems manage as part of their memory management functions. This specially formatted file contains data previously stored in memory but not recently used.
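Stepping back to the five addressing schemes covered above, they can be contrasted in a toy sketch (ours, not from the book) against a small simulated memory; all names here are invented for the illustration.

```python
# Toy model contrasting five memory addressing schemes (illustrative only).
memory = [0] * 16
memory[5] = 42      # a data value
memory[9] = 5       # a pointer: address 9 holds the address of the data
registers = {"r1": 7, "base": 3}

def immediate(value):            # operand supplied inside the instruction itself
    return value

def register(name):              # operand read from a CPU register
    return registers[name]

def direct(addr):                # instruction carries the operand's address
    return memory[addr]

def indirect(addr):              # instruction carries the address of an address
    return memory[memory[addr]]

def base_offset(reg, offset):    # base register plus an offset from the instruction
    return memory[registers[reg] + offset]

# "Add 2 to the value in register 1" mixes immediate and register addressing:
result = register("r1") + immediate(2)
# Three different routes to the same operand at address 5:
assert direct(5) == indirect(9) == base_offset("base", 2) == 42
```

The final assertion shows why indirect and base+offset addressing exist: the instruction never has to name address 5 itself, only a pointer or a register-relative offset that leads there.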
When the operating system needs to access addresses stored in the pagefile, it checks to see if the page is memory-resident (in which case it can access it immediately) or if it's been swapped to disk, in which case it reads the data from disk back into real memory (this process is called paging). Using virtual memory is an inexpensive way to make a computer operate as if it had more real memory than is physically installed. Its major drawback is that the paging operations that occur when data is exchanged between primary and secondary memory are relatively slow (memory functions in microseconds, disk systems in milliseconds; usually, this means three orders of magnitude difference!) and consume significant computer overhead, slowing down the entire system.

Memory Security Issues

Memory stores and processes your data—some of which may be extremely sensitive. It's essential that you understand the various types of memory and know how they store and retain data. Any memory devices that may retain data should be purged before they are allowed to leave your organization for any reason. This is especially true for secondary memory and ROM/PROM/EPROM/EEPROM devices designed to retain data even after the power is turned off.

However, memory data retention issues are not limited to those types of memory designed to retain data. Remember that static and dynamic RAM chips store data through the use of capacitors and flip-flops (see the sidebar "Dynamic vs. Static RAM"). It is technically possible that those electrical components could retain some of their charge for a limited period of time after power is turned off. A technically sophisticated individual could theoretically take electrical measurements of those components and retrieve small portions of the data stored on such devices.
However, this requires a good deal of technical expertise and is not a likely threat unless you have entire governments as your adversary.

The greatest security threat posed by RAM chips is a simple one. They are highly pilferable and are quite often stolen. After all, who checks to see how much memory is in their computer at the start of each day? Someone could easily remove a single memory module from each of a large number of systems and walk out the door with a small bag containing valuable chips. Today, this threat is diminishing as the price of memory chips continues to fall ($70 for 512MB of DDR400 dynamic RAM as we write).

One of the most important security issues surrounding memory is controlling who may access data stored in memory while a computer is in use. This is primarily the responsibility of the operating system and is the main memory security issue underlying the various processing modes described in previous sections in this chapter. In the section "Security Protection Mechanisms" later in this chapter, you'll learn how the principle of process isolation can be used to ensure that processes don't have access to read or write to memory spaces not allocated to them. If you're operating in a multilevel security environment, it's especially important to ensure that adequate protections are in place to prevent the unwanted leakage of memory contents between security levels, through either direct memory access or covert channels (a full discussion of covert channels appears in Chapter 12).

Storage

Data storage devices make up the third class of computer system components we'll discuss. These devices are used to store information that may be used by a computer any time after it's written. We'll first examine a few common terms that relate to storage devices and then look at some of the security issues related to data storage.

Primary vs.
Secondary

The concepts of primary and secondary storage can be somewhat confusing, especially when compared to primary and secondary memory. There's an easy way to keep it straight—they're the same thing! Primary memory, also known as primary storage, is the RAM that a computer uses to keep necessary information readily available to the CPU while the computer is running. Secondary memory (or secondary storage) includes all the familiar long-term storage devices that you use every day. Secondary storage consists of magnetic and optical media such as hard drives, floppy disks, magnetic tapes, compact discs (CDs), digital video disks (DVDs), flash memory cards, and the like.

Volatile vs. Nonvolatile

You're already familiar with the concept of volatility from our discussion of memory, although you may not have heard it described using that term before. The volatility of a storage device is simply a measure of how likely it is to lose its data when power is turned off. Devices designed to retain their data (such as magnetic media) are classified as nonvolatile, whereas devices such as static or dynamic RAM modules, which are designed to lose their data, are classified as volatile. Recall from the discussion in the previous section that sophisticated technology may sometimes be able to extract data from volatile memory after power is removed, so the lines between the two may sometimes be blurry.

Random vs. Sequential

Storage devices may be accessed in one of two fashions. Random access storage devices allow an operating system to read (and sometimes write) immediately from any point within the device by using some type of addressing system. Almost all primary storage devices are random access devices. You can use a memory address to access information stored at any point within a RAM chip without reading the data that is physically stored before it. Most secondary storage devices are also random access.
For example, hard drives use a movable head system that allows you to move directly to any point on the disk without spinning past all of the data stored on previous tracks; likewise, CD-ROM and DVD devices use an optical scanner that can position itself anywhere on the platter surface as well.

Sequential storage devices, on the other hand, do not provide this flexibility. They require that you read (or speed past) all of the data physically stored prior to the desired location. A common example of a sequential storage device is a magnetic tape drive. To provide access to data stored in the middle of a tape, the tape drive must physically scan through the entire tape (even if it's not necessarily processing the data that it passes in fast-forward mode) until it reaches the desired point.

Obviously, sequential storage devices operate much slower than random access storage devices. However, here again you're faced with a cost/benefit decision. Many sequential storage devices can hold massive amounts of data on relatively inexpensive media. This property makes tape drives uniquely suited for backup tasks associated with a disaster recovery/business continuity plan (see Chapters 15 and 16 for more on Business Continuity Planning and Disaster Recovery Planning). In a backup situation, you often have extremely large amounts of data that need to be stored and you infrequently need to access that stored information. The situation just begs for a sequential storage device!

Storage Media Security

We discussed the security problems that surround primary storage devices in the previous section. There are three main concerns when it comes to the security of secondary storage devices; all of them mirror concerns raised for primary storage devices:

• Data may remain on secondary storage devices even after it has been erased.
This condition is known as data remanence. Most technically savvy computer users know that utilities are available that can retrieve files from a disk even after they have been deleted. It's also technically possible to retrieve data from a disk that has been reformatted. If you truly want to remove data from a secondary storage device, you must use a specialized utility designed to destroy all traces of data on the device or damage or destroy it beyond possible repair.

• Secondary storage devices are also prone to theft. Economic loss is not the major factor (after all, how much does a floppy disk cost?), but the loss of confidential information poses great risks. If someone copies your trade secrets onto a floppy disk and walks out the door with it, it's worth a lot more than the cost of the disk itself.

• Access to data stored on secondary storage devices is one of the most critical issues facing computer security professionals. For hard disks, data can often be protected through a combination of operating system access controls. Floppy disks and other removable media pose a greater challenge, so securing them often requires encryption technologies.

Input and Output Devices

Input and output devices are often seen as basic, primitive peripherals and usually don't receive much attention until they stop working properly. However, even these basic devices can present security risks to a system. Security professionals should be aware of these risks and ensure that appropriate controls are in place to mitigate them. The next four sections examine some of the risks posed by specific input and output devices.

Monitors

Monitors seem fairly innocuous. After all, they simply display the data presented by the operating system. When you turn them off, the data disappears from the screen and can't be recovered.
However, a technology known as TEMPEST can compromise the security of data displayed on a monitor.

TEMPEST truly is an extremely interesting technology. If you'd like to learn more, there are a number of very good Web resources on TEMPEST protection and exploitation. A good starting point is the article "The Computer Spyware Uncle Sam Won't Let You Buy" posted on InfoWar.com at http://www.hackemate.com.ar/ezines/swat/swat26/Swt26-00.txt.

TEMPEST is a technology that allows the electronic emanations that every monitor produces (known as Van Eck radiation) to be read from a distance and even from another location. The technology is also used to protect against such activity. Various demonstrations have shown that you can easily read the screens of monitors inside an office building using gear housed in a van parked outside on the street. Unfortunately, the protective controls required to prevent Van Eck radiation (lots and lots of copper!) are expensive to implement and cumbersome to use.

Printers

Printers also may represent a security risk, albeit a simpler one. Depending upon the physical security controls used at your organization, it may be much easier to walk out with sensitive information in printed form than to walk out with a floppy disk or other magnetic media. Also, if printers are shared, users may forget to retrieve their sensitive printouts, leaving them vulnerable to prying eyes. These are all issues that are best addressed by an organization's security policy.

Keyboards/Mice

Keyboards, mice, and similar input devices are not immune from security vulnerabilities either. All of these devices are vulnerable to TEMPEST monitoring. Also, keyboards are vulnerable to less-sophisticated bugging. A simple device can be placed inside a keyboard to intercept all of the keystrokes that take place and transmit them to a remote receiver using a radio signal.
This has the same effect as TEMPEST monitoring but can be done with much less-expensive gear.

Modems

Nowadays, modems are extremely cheap and most computer systems ship from manufacturers with a high-speed modem installed as part of the basic configuration. This is one of the greatest woes of a security administrator. Modems allow users to create uncontrolled access points into your network. In the worst case, if improperly configured, they can create extremely serious security vulnerabilities that allow an outsider to bypass all of your perimeter protection mechanisms and directly access your network resources. At best, they create an alternate egress channel that insiders can use to funnel data outside of your organization.

You should seriously consider an outright ban on modems in your organization's security policy unless they are truly needed for business reasons. In those cases, security officials should know the physical and logical locations of all modems on the network, ensure that they are correctly configured, and make certain that appropriate protective measures are in place to prevent their illegitimate use.

Input/Output Structures

Certain computer activities related to general input/output (I/O) operations, rather than individual devices, also have security implications. Some familiarity with manual input/output device configuration is required to integrate legacy peripheral devices (those that do not autoconfigure or support Plug and Play, or PnP, setup) in modern PCs as well. Three types of operations that require manual configuration on legacy devices are involved here:

Memory-mapped I/O
For many kinds of devices, memory-mapped I/O is a technique used to manage input/output.
That is, a part of the address space that the CPU manages is set aside to provide access to some kind of device through a series of mapped memory addresses or locations. Thus, by reading mapped memory locations, you're actually reading the input from the corresponding device (which is automatically copied to those memory locations at the system level when the device signals that input is available). Likewise, by writing to those mapped memory locations, you're actually sending output to that device (automatically handled by copying from those memory locations to the device at the system level when the CPU signals that the output is available). From a configuration standpoint, it's important to make sure that only one device maps into a specific memory address range and that the address range is used for no other purpose than to handle device I/O. From a security standpoint, access to mapped memory locations should be mediated by the operating system and subject to proper authorization and access controls.

Interrupt (IRQ)
Interrupt (IRQ) is an abbreviation for Interrupt ReQuest line, a technique for assigning specific signal lines to specific devices through a special interrupt controller. When a device wishes to supply input to the CPU, it sends a signal on its assigned IRQ (which usually falls in a range of 0–15 on older PCs with two cascaded 8-line interrupt controllers and 0–23 on newer ones with three cascaded 8-line interrupt controllers). Where newer PnP-compatible devices may actually share a single interrupt (IRQ number), older legacy devices must generally have exclusive use of a unique IRQ number (a well-known pathology called interrupt conflict occurs when two or more devices are assigned the same IRQ number and is best recognized by an inability to access all affected devices).
From a configuration standpoint, finding unused IRQ numbers that will work with legacy devices can be a sometimes trying exercise. From a security standpoint, only the operating system should be able to mediate access to IRQs at a sufficiently high level of privilege to prevent tampering or accidental misconfiguration.

Direct Memory Access (DMA)
Direct Memory Access (DMA) works as a channel with two signal lines, where one line is a DMA request (DRQ) line and the other is a DMA acknowledgment (DACK) line. Devices that can exchange data directly with real memory (RAM) without requiring assistance from the CPU use DMA to manage such access. Using its DRQ line, a device signals the CPU that it wants to make direct access (which may be read or write, or some combination of the two) to another device, usually real memory. The CPU authorizes access and then allows the access to proceed independently while blocking other access to the memory locations involved. When the access is complete, the device uses the DACK line to signal that the CPU may once again permit access to previously blocked memory locations. This is faster than requiring the CPU to mediate such access and permits the CPU to move on to other tasks while the memory access is underway. DMA is used most commonly to permit disk drives, optical drives, display cards, and multimedia cards to manage large-scale data transfers to and from real memory. From a configuration standpoint, it's important to manage DMA addresses to keep device addresses unique and to make sure such addresses are used only for DMA signaling. From a security standpoint, only the operating system should be able to mediate DMA assignment and use of DMA to access I/O devices.

If you understand common IRQ assignments, how memory-mapped I/O and DMA work, and related security concerns, you know enough to tackle the CISSP exam. If not, some additional reading may be warranted.
In that case, PC Guide's excellent overview of system memory (www.pcguide.com/ref/ram/) should tell you everything you need to know.

Firmware

Firmware (also known as microcode in some circles) is a term used to describe software that is stored in a ROM chip. This type of software is changed infrequently (actually, never, if it's stored on a true ROM chip as opposed to an EPROM/EEPROM) and often drives the basic operation of a computing device.

BIOS

The Basic Input/Output System (BIOS) contains the operating-system-independent primitive instructions that a computer needs to start up and load the operating system from disk. The BIOS is contained in a firmware device that is accessed immediately by the computer at boot time. In most computers, the BIOS is stored on an EEPROM chip to facilitate version updates. The process of updating the BIOS is known as "flashing the BIOS."

Device Firmware

Many hardware devices, such as printers and modems, also need some limited processing power to complete their tasks while minimizing the burden placed on the operating system itself. In many cases, these "mini" operating systems are entirely contained in firmware chips onboard the devices they serve. As with a computer's BIOS, device firmware is frequently stored on an EEPROM device so it can be updated as necessary.

Security Protection Mechanisms

The need for security mechanisms within an operating system is due to one simple fact: software is not trusted. Third-party software is untrustworthy, no matter who it comes from. The OS must employ protection mechanisms to keep the computing environment stable and to keep processes isolated from each other. Without these efforts, the security of data could never be reliable or even possible.

There are a number of common protection mechanisms that computer system designers should adhere to when designing secure systems.
These principles are specific instances of more general security rules that govern safe computing practices. We'll divide our discussion into two areas: technical mechanisms and policy mechanisms.

Technical Mechanisms

Technical mechanisms are the controls that system designers can build right into their systems. We'll look at five: layering, abstraction, data hiding, process isolation, and hardware segmentation.

Layering

By layering processes, you implement a structure similar to the ring model used for operating modes (and discussed earlier in this chapter) and apply it to each operating system process. It puts the most-sensitive functions of a process at the core, surrounded by a series of increasingly larger concentric circles with correspondingly lower sensitivity levels (using a slightly different approach, this is also sometimes explained in terms of upper and lower layers, where security and privilege decrease when climbing up from lower to upper layers).

Communication between layers takes place only through the use of well-defined, specific interfaces to provide necessary security. All inbound requests from outer (less-sensitive) layers are subject to stringent authentication and authorization checks before they're allowed to proceed (or denied, if they fail such checks). As you'll understand more completely later in this chapter, using layering for security is similar to using security domains and lattice-based security models in that security and access controls over certain subjects and objects are associated with specific layers, and privileges and access increase as one moves from outer to inner layers.

In fact, separate layers can only communicate with one another through specific interfaces designed to maintain a system's security and integrity.
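As a rough sketch of this principle, an inner layer that exposes only a narrow, checked interface might look like the following Python fragment. The layer classes, token check, and setting names are invented for illustration; real systems enforce these boundaries with hardware rings and kernel-mediated system calls, not application classes:

```python
# Sketch of layering: outer code can reach inner-layer data only through a
# well-defined interface that authenticates and authorizes each request.
# All names and the simple token check are hypothetical illustrations.

class InnerLayer:
    """Most-sensitive core; its internals are hidden from outer layers."""
    def __init__(self):
        self._secret_config = {"audit_level": "high"}  # not directly reachable

    def request(self, token, key):
        # Every inbound call from a less-sensitive layer is checked first.
        if token != "authorized":
            raise PermissionError("request denied at layer boundary")
        return self._secret_config[key]

class OuterLayer:
    """Less-sensitive layer; knows only the interface, not the internals."""
    def __init__(self, inner):
        self._inner = inner

    def read_setting(self, token, key):
        return self._inner.request(token, key)

inner = InnerLayer()
outer = OuterLayer(inner)
print(outer.read_setting("authorized", "audit_level"))  # "high"
try:
    outer.read_setting("guessed", "audit_level")        # rejected at boundary
except PermissionError as err:
    print(err)
```

The point of the sketch is that the outer layer never touches the inner layer's data structure directly; it only knows the interface, exactly as the text describes.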
Even though less-secure outer layers depend on services and data from more-secure inner layers, they only know how to interface with those layers and are not privy to those inner layers' internal structure, characteristics, or other details. To maintain layer integrity, inner layers neither know about nor depend on outer layers. No matter what kind of security relationship may exist between any pair of layers, neither can tamper with the other (so that each layer is protected from tampering by any other layer). Finally, outer layers cannot violate or override any security policy enforced by an inner layer.

Abstraction

Abstraction is one of the fundamental principles behind the field known as object-oriented programming. It is the "black box" doctrine that says that users of an object (or operating system component) don't necessarily need to know the details of how the object works; they just need to know the proper syntax for using the object and the type of data that will be returned as a result. This is very much what's involved in mediated access to data or services, as when user mode applications use system calls to request administrator mode service or data (and where such requests may be granted or denied depending on the requester's credentials and permissions) rather than obtaining direct, unmediated access.

Another way in which abstraction applies to security is in the introduction of object groups, sometimes called classes, where access controls and operation rights are assigned to groups of objects rather than on a per-object basis.
This approach allows security administrators to define and name groups easily (often related to job roles or responsibilities) and helps make administration of rights and privileges easier (adding an object to a class confers rights and privileges rather than having to manage rights and privileges for each individual object separately).

Data Hiding

Data hiding is an important characteristic in multilevel secure systems. It ensures that data existing at one level of security is not visible to processes running at different security levels. Chapter 7, "Data and Application Security Issues," covers a number of data hiding techniques used to prevent users from deducing even the very existence of a piece of information. The key concept behind data hiding is a desire to make sure those who have no need to know the details involved in accessing and processing data at one level have no way to learn or observe those details covertly or illicitly. From a security perspective, data hiding relies on placing objects in different security containers from those that subjects occupy so as to hide object details from those with no need to know about them.

Process Isolation

Process isolation requires that the operating system provide separate memory spaces for each process's instructions and data. It also requires that the operating system enforce those boundaries, preventing one process from reading or writing data that belongs to another process. There are two major advantages to using this technique:

• It prevents unauthorized data access. Process isolation is one of the fundamental requirements in a multilevel security mode system.

• It protects the integrity of processes.
Without such controls, a poorly designed process could go haywire and write data to memory spaces allocated to other processes, causing the entire system to become unstable rather than only affecting execution of the errant process. In a more malicious vein, processes could attempt (and perhaps even succeed) at reading or writing to memory spaces outside their scopes, intruding upon or attacking other processes.

Many modern operating systems address the need for process isolation by implementing so-called virtual machines on a per-user or per-process basis. A virtual machine presents a user or process with a processing environment—including memory, address space, and other key system resources and services—that allows that user or process to behave as though they have sole, exclusive access to the entire computer. This allows each user or process to operate independently without requiring it to take cognizance of other users or processes that might actually be active simultaneously on the same machine. As part of the mediated access to the system that the operating system provides, it maps virtual resources and access in user mode so that they use supervisory mode calls to access corresponding real resources. This not only makes things easier for programmers, it also protects individual users and processes from one another.

Hardware Segmentation

Hardware segmentation is similar to process isolation in purpose—it prevents the access of information that belongs to a different process/security level. The main difference is that hardware segmentation enforces these requirements through the use of physical hardware controls rather than the logical process isolation controls imposed by an operating system.
Such implementations are rare, and they are generally restricted to national security implementations where the extra cost and complexity is offset by the sensitivity of the information involved and the risks inherent in unauthorized access or disclosure.

Security Policy and Computer Architecture

Just as security policy guides the day-to-day security operations, processes, and procedures in organizations, it has an important role to play when designing and implementing systems. This is equally true whether a system is entirely hardware based, entirely software based, or a combination of both. In this case, the role of a security policy is to inform and guide the design, development, implementation, testing, and maintenance of some particular system. Thus, this kind of security policy tightly targets a single implementation effort (though it may be adapted from other, similar efforts, it should reflect the target as accurately and completely as possible).

For system developers, a security policy is best encountered in the form of a document that defines a set of rules, practices, and procedures that describe how the system should manage, protect, and distribute sensitive information. Security policies that prevent information flow from higher security levels to lower security levels are called multilevel security policies. As a system is developed, the security policy should be designed, built, implemented, and tested as it relates to all applicable system components or elements, including any or all of the following: physical hardware components, firmware, software, and how the organization interacts with and uses the system.

Policy Mechanisms

As with any security program, policy mechanisms should also be put into place.
These mechanisms are extensions of basic computer security doctrine, but the applications described in this section are specific to the field of computer architecture and design.

Principle of Least Privilege

In Chapter 1, "Accountability and Access Control," you learned about the general security principle of least privilege and how it applies to users of computing systems. This principle is also very important to the design of computers and operating systems, especially when applied to system modes. When designing operating system processes, you should always ensure that they run in user mode whenever possible. The greater the number of processes that execute in privileged mode, the higher the number of potential vulnerabilities that a malicious individual could exploit to gain supervisory access to the system. In general, it's better to use APIs to ask for supervisory mode services or to pass control to trusted, well-protected supervisory mode processes as they're needed from within user mode applications than it is to elevate such programs or processes to supervisory mode altogether.

Separation of Privilege

The principle of separation of privilege builds upon the principle of least privilege. It requires the use of granular access permissions; that is, different permissions for each type of privileged operation. This allows designers to assign some processes rights to perform certain supervisory functions without granting them unrestricted access to the system. It also allows individual requests for services or access to resources to be inspected, checked against access controls, and granted or denied based on the identity of the user making the requests or on the basis of groups to which the user belongs or security roles that the user occupies.

Accountability

Accountability is an essential component in any security design.
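One software analog of a non-modifiable audit trail is a hash-chained log, in which each entry's hash covers the entry before it, so any later alteration of a past record is detectable. The following is an illustrative Python sketch, not a production audit facility; the record format and field names are invented:

```python
# Sketch of a tamper-evident audit trail using a hash chain: each entry's
# hash covers the previous entry's hash, so rewriting any past record
# breaks verification of the chain. Record format is purely illustrative.

import hashlib

def append_entry(log, user, action):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = f"{prev_hash}|{user}|{action}"
    log.append({
        "user": user,
        "action": action,
        "hash": hashlib.sha256(record.encode()).hexdigest(),
    })

def verify(log):
    """Recompute the chain; return True only if no entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        record = f"{prev_hash}|{entry['user']}|{entry['action']}"
        if hashlib.sha256(record.encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "admin", "changed audit policy")
append_entry(log, "alice", "read payroll file")
print(verify(log))            # True: chain is intact

log[0]["action"] = "nothing"  # an attacker rewrites history...
print(verify(log))            # False: tampering is detected
```

A real system would also have to protect the verification key or anchor hash from the same administrators it audits, which is why high-security designs push this function into separate hardware.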
Many high-security systems contain physical devices (such as pen registers and non-modifiable audit trails) that enforce individual accountability for privileged functionality. In general, however, such capability relies on a system's ability to monitor activity on and interactions with a system's resources and configuration data and to protect resulting logs from unwanted access or alteration so that they provide an accurate and reliable record of activity and interaction that documents every user's (including administrators or other trusted individuals with high levels of privilege) history on that system.

Distributed Architecture

As computing has evolved from a host/terminal model, where users could be physically distributed but all functions, activity, data, and resources resided on a single centralized system, to a client/server model, where users operate independent fully functional desktop computers but also access services and resources on networked servers, security controls and concepts have had to evolve to follow suit. This means that clients have computing and storage capabilities and, typically, that multiple servers do likewise. Thus, security must be addressed everywhere instead of at a single centralized host. From a security standpoint, this means that, because processing and storage are distributed on multiple clients and servers, all those computers must be properly secured and protected. It also means that the network links between clients and servers (and in some cases, these links may not be purely local) must also be secured and protected.

Vulnerabilities

Distributed architectures are prone to vulnerabilities unthinkable in monolithic host/terminal systems. Desktop systems can contain sensitive information that may be at some risk of being exposed and must therefore be protected.
Individual users may lack general security savvy or awareness, and the underlying architecture therefore has to compensate for those shortcomings. Desktop PCs, workstations, and laptops can provide avenues of access into critical information systems elsewhere in a distributed environment because users require access to networked servers and services to do their jobs. By permitting user machines to access a network and its distributed resources, organizations must also recognize that those user machines can become threats if they are misused or compromised.

Communications equipment can also provide unwanted points of entry into a distributed environment. For example, modems attached to a desktop machine that's also attached to an organization's network can make that network vulnerable to dial-in attack. Likewise, users who download data from the Internet increase the risk of infecting their own and other systems with malicious code, Trojan horses, and so forth. Desktops, laptops, and workstations (and associated disks or other storage devices) may not be secure from physical intrusion or theft. Finally, when data resides only on client machines, it may not be properly backed up (it's often the case that while servers are backed up routinely, the same is not true for client computers).

Safeguards

The foregoing litany of potential vulnerabilities in distributed architectures should argue strongly that such environments require numerous safeguards to implement appropriate security and to ensure that those vulnerabilities are eliminated, mitigated, or remedied. Clients must be subjected to policies that impose safeguards on their contents and their users' activities. These include the following:

• E-mail must be screened so that it cannot become a vector for infection by malicious software; e-mail should also be subject to policies that govern appropriate use and limit potential liability.
• Download/upload policies must be created so that incoming and outgoing data is screened and suspect materials blocked.
• Systems must be subject to robust access controls, which may include multifactor authentication and/or biometrics, to restrict access to desktops and to prevent unauthorized access to servers and services.
• Graphical user interface mechanisms and database management systems should be installed, and their use required, to restrict and manage access to critical information.
• File encryption may be appropriate for files and data stored on client machines (indeed, drive-level encryption is a good idea for laptops and other mobile computing gear that is subject to loss or theft outside an organization's premises).
• It's essential to separate and isolate processes that run in user and supervisory mode so that unauthorized and unwanted access to high-privilege processes and capabilities is prevented.
• Protection domains should be created so that compromise of a client won't automatically compromise an entire network.
• Disks and other sensitive materials should be clearly labeled as to their security classification or organizational sensitivity; procedural processes and system controls should combine to help protect sensitive materials from unwanted or unauthorized access.
• Files on desktop machines should be backed up, as should files on servers; ideally, some form of centralized backup utility works with client agent software to identify and capture files from clients and store them in a secure backup archive.
• Desktop users need regular security awareness training; they also need to be notified about potential threats and instructed in how to deal with them appropriately.
• Desktop computers and their storage media require protection against environmental hazards (temperature, humidity, power loss/fluctuation, and so forth).
• Desktop computers should be included in disaster recovery and business continuity planning because they're potentially as important (if not more important) to getting their users back to work as other systems and services within an organization.
• Developers of custom software built in and for distributed environments also need to take security into account, including the use of formal methods for development and deployment, such as code libraries, change control mechanisms, configuration management, and patch and update deployment.

In general, safeguarding distributed environments means understanding the vulnerabilities to which they're subject and applying appropriate safeguards. These can (and do) range from technology solutions and controls to policies and procedures that manage risk and seek to limit or avoid losses, damage, unwanted disclosure, and so on.

Security Models

In information security, models provide a way to formalize security policies. Such models can be abstract or intuitive (some are decidedly mathematical), but all are intended to provide an explicit set of rules that a computer can follow to implement the fundamental security concepts, processes, and procedures that make up a security policy. These models offer a way to deepen your understanding of how a computer operating system should be designed and developed to support a specific security policy.
You'll explore nine security models in the following sections; all of them can shed light on how security enters into computer architectures and operating system design:

• State machine model
• Information flow model
• Noninterference model
• Take-Grant model
• Access control matrix
• Bell-LaPadula
• Biba
• Clark-Wilson
• Brewer and Nash model (a.k.a. Chinese Wall)

While it is understood that no system can be totally secure, it is possible to design and build reasonably secure systems. In fact, if a secured system complies with a specific set of security criteria, it can be said to exhibit a level of trust. Therefore, trust can be built into a system and then evaluated, certified, and accredited. In the remainder of this chapter and into Chapter 12, "Principles of Security Models," this flow of thought will be followed through from design to final accreditation.

State Machine Model

The state machine model describes a system that is always secure no matter what state it is in. It's based on the computer science definition of a finite state machine (FSM). An FSM combines an external input with an internal machine state to model all kinds of complex systems, including parsers, decoders, and interpreters. Given an input and a state, an FSM transitions to another state and may create an output. Mathematically, the next state is a function of the current state and the input: next state = G(input, current state). Likewise, the output is also a function of the input and the current state: output = F(input, current state).

Many security models are based on the secure state concept. According to the state machine model, a state is a snapshot of a system at a specific moment in time. If all aspects of a state meet the requirements of the security policy, that state is considered secure. A transition occurs when accepting input or producing output.
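To make the G(input, current state) formulation concrete, here is a minimal sketch in Python; the states, inputs, and the set of "secure" states are invented for illustration and are not part of the exam material:

```python
# A toy secure state machine. The transition table plays the role of G;
# any transition that is undefined or that would land outside the set of
# secure states is rejected, so the machine only ever occupies secure states.

SECURE_STATES = {"logged_out", "logged_in", "locked"}

TRANSITIONS = {  # G: (input, current state) -> next state
    ("good_password", "logged_out"): "logged_in",
    ("bad_password", "logged_out"): "locked",
    ("logout", "logged_in"): "logged_out",
}

def next_state(event, current):
    """Evaluate one state transition, refusing any that is undefined
    or that would leave the machine in a non-secure state."""
    new = TRANSITIONS.get((event, current))
    if new is None or new not in SECURE_STATES:
        raise ValueError(f"rejected transition: {event!r} in state {current!r}")
    return new
```

Starting from logged_out, a good password moves the machine to logged_in; an undefined event raises an error rather than entering an unknown (potentially insecure) state.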
A transition always results in a new state (also called a state transition). All state transitions must be evaluated. If each possible state transition results in another secure state, the system can be called a secure state machine. A secure state machine model system always boots into a secure state, maintains a secure state across all transitions, and allows subjects to access resources only in a secure manner compliant with the security policy. The secure state machine model is the basis for many other security models.

Information Flow Model

The information flow model focuses on the flow of information. Information flow models are based on a state machine model. The Bell-LaPadula and Biba models, which we will discuss in detail in a moment, are both information flow models. Bell-LaPadula is concerned with preventing information from flowing from a high security level to a low security level. Biba is concerned with preventing information from flowing from a low security level to a high security level. Information flow models don't necessarily deal with only the direction of information flow; they can also address the type of flow.

Information flow models are designed to prevent unauthorized, insecure, or restricted information flow. Information flow can be between subjects and objects at the same classification level as well as between subjects and objects at different classification levels. An information flow model allows all authorized information flows, whether within the same classification level or between classification levels.
It prevents all unauthorized information flows, whether within the same classification level or between classification levels.

Another interesting perspective on the information flow model is that it is used to establish a relationship between two versions or states of the same object when those two versions or states exist at different points in time. Thus, information flow dictates the transformation of an object from one state at one point in time to another state at another point in time.

Noninterference Model

The noninterference model is loosely based on the information flow model. However, instead of being concerned about the flow of information, the noninterference model is concerned with how the actions of a subject at a higher security level affect the system state or the actions of a subject at a lower security level. Basically, the actions of subject A (high) should not affect the actions of subject B (low) or even be noticed by subject B. The real concern is to prevent the actions of subject A at a high level of security classification from affecting the system state at a lower level. If this occurs, subject B may be placed into an insecure state or be able to deduce or infer information about a higher level of classification. This is a type of information leakage and implicitly creates a covert channel. Thus, the noninterference model can be imposed to provide a form of protection against damage caused by malicious programs such as Trojan horses.

Take-Grant Model

The Take-Grant model employs a directed graph to dictate how rights can be passed from one subject to another or from a subject to an object. Simply put, a subject with the grant right can grant another subject or another object any other right they possess.
Likewise, a subject with the take right can take a right from another subject.

Access Control Matrix

An access control matrix is a table of subjects and objects that indicates the actions or functions that each subject can perform on each object. Each column of the matrix is an ACL. Each row of the matrix is a capability list. An ACL is tied to the object; it lists the valid actions each subject can perform. A capability list is tied to the subject; it lists the valid actions that can be taken on each object. From an administration perspective, using only capability lists for access control is a management nightmare. A capability list method of access control can be accomplished by storing on each subject a list of rights the subject has for every object. This effectively gives each user a key ring of accesses and rights to objects within the security domain. To remove access to a particular object, every user (subject) that has access to it must be individually manipulated. Thus, managing access on each user account is much more difficult than managing access on each object (i.e., via ACLs).

Implementing an access control matrix model usually involves constructing an environment that can create and manage lists of subjects and objects and a function that can return the type associated with whatever object is supplied to that function as input (this is important because an object's type determines what kinds of operations may be applied to it).

The access control matrix shown in Table 11.2 is for a discretionary access control system. A mandatory or rule-based matrix can be constructed simply by replacing the subject names with classifications or roles.
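As a rough illustration (the subjects, objects, and permissions below are invented, loosely echoing Table 11.2), an access control matrix can be held as a nested mapping; reading a row gives a capability list and reading a column gives an ACL:

```python
# Rows are subjects (capability lists); columns are objects (ACLs).
matrix = {
    "Bob":    {"document": {"read"}},
    "Amanda": {"document": {"read", "write"}, "printer": {"print"}},
}

def is_authorized(subject, obj, action):
    """Cell lookup: may this subject perform this action on this object?"""
    return action in matrix.get(subject, {}).get(obj, set())

def capability_list(subject):
    """Row view: everything one subject may do (tied to the subject)."""
    return matrix.get(subject, {})

def acl(obj):
    """Column view: every subject's rights on one object (tied to the object)."""
    return {s: row[obj] for s, row in matrix.items() if row.get(obj)}
```

Here is_authorized("Amanda", "printer", "print") is True while is_authorized("Bob", "printer", "print") is False. Revoking access to an object via its ACL touches one column, whereas doing so with capability lists means editing every subject's row, which is the management nightmare noted above.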
Access control matrixes are used by systems to quickly determine whether the requested action by a subject for an object is authorized.

Composition Theories

Some other models that fall into the information flow category build on the notion of how inputs and outputs between multiple systems relate to one another; that is, they follow how information flows between systems rather than within an individual system. These are called composition theories because they explain how outputs from one system relate to inputs to another system. There are three recognized types of composition theories:

• Cascading: Input for one system comes from the output of another system.
• Feedback: One system provides input to another system, which reciprocates by reversing those roles (so that system A first provides input for system B, and then system B provides input to system A).
• Hookup: One system sends input to another system but also sends input to external entities.

Bell-LaPadula Model

The Bell-LaPadula model was developed out of the U.S. Department of Defense (DoD) multilevel security policy. The DoD's policy includes four levels of classification, from most sensitive to least: top secret, secret, confidential, and unclassified. The policy states that a subject with any level of clearance can access resources at or below its clearance level. However, within the clearances of confidential, secret, and top secret, access is granted only on a need-to-know basis. In other words, access to a specific object is granted to the classified levels only if a specific work task requires such access. With these restrictions, the Bell-LaPadula model is focused on maintaining the confidentiality of objects. Bell-LaPadula does not address the aspects of integrity or availability for objects.
Bell-LaPadula is the first mathematical model of a multilevel security policy.

By design, the Bell-LaPadula model prevents the leaking or transfer of classified information to less-secure clearance levels. This is accomplished by blocking lower-classified subjects from accessing higher-classified objects.

In its conception, the Bell-LaPadula model is based on the state machine model and the information flow model. It also employs mandatory access controls and the lattice model. The lattice tiers are the classification levels used by the security policy of the organization. In this model, secure states are circumscribed by two rules, or properties:

Simple Security Property: The Simple Security Property (SS Property) states that a subject at a specific classification level cannot read data with a higher classification level. This is often shortened to "no read up."

* Security Property: The * (star) Security Property (* Property), also known as the confinement property, states that a subject at a specific classification level cannot write data to a lower classification level. This is often shortened to "no write down."

TABLE 11.2: An Access Control Matrix (objects categorized by type)

Subjects | Document File | Printer | Network Folder Share
Bob | Read | No Access | No Access
Mary | No Access | No Access | Read
Amanda | Read, Write | Print | No Access
Mark | Read, Write | Print | Read, Write
Kathryn | Read, Write | Print, Manage Print Queue | Read, Write, Execute
Colin | Read, Write, Change Permissions | Print, Manage Print Queue, Change Permissions | Read, Write, Execute, Change Permissions

These two rules define the states into which the system can transition. No other transitions are allowed. All states accessible through these two rules are secure states. Thus, Bell-LaPadula–modeled systems offer state machine model security (see Figure 11.3).

FIGURE 11.3: The Bell-LaPadula model (reads down and writes up are allowed; reads up are blocked by the SS Property and writes down by the * Property, across the Unclassified, Sensitive, Classified, and Secret levels)

Note: There is an exception in the Bell-LaPadula model that states that a "trusted subject" is not constrained by the * Property. A trusted subject is defined as "a subject that is guaranteed not to consummate a security-breaching information transfer even if it is possible." This means that a trusted subject is allowed to violate the * Property and perform a write down.

Lattice-Based Access Control

This general category for nondiscretionary access controls was introduced in Chapter 1. Here's a quick refresher on the subject (which drives the underpinnings for most access control security models): Subjects under lattice-based access controls are assigned positions in a lattice. These positions fall between defined security labels or classifications. Subjects can access only objects that fall into the range between the least upper bound (the nearest security label or classification higher than their lattice position) and the highest lower bound (the nearest security label or classification lower than their lattice position) of the labels or classifications for their lattice position. Thus, a subject that falls between the private and sensitive labels in a commercial scheme that reads bottom up as public, sensitive, private, proprietary, and confidential can access only private and sensitive data but not public, proprietary, or confidential data. See Figure 1.3 for an illustration.
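The two Bell-LaPadula properties reduce to simple level comparisons; the sketch below uses an invented numeric lattice for illustration (Biba's integrity axioms would simply reverse the two checks):

```python
# Hypothetical classification lattice: higher number = more sensitive.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level, object_level):
    """Simple Security Property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    """* Property: no write down (a 'trusted subject' would be exempt)."""
    return LEVELS[subject_level] <= LEVELS[object_level]
```

A secret-cleared subject may read confidential data (read down) but may not write to it (write down is blocked), which is exactly how leakage to less-secure levels is prevented.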
Lattice-based access controls also fit into the general category of information flow models and deal primarily with confidentiality (hence the connection to Bell-LaPadula).

The Bell-LaPadula model efficiently manages confidentiality, but it fails to address or manage numerous other important issues:

• It does not address integrity or availability.
• It does not address access control management, nor does it provide a way to assign or change an object's or subject's classification level.
• It does not prevent covert channels. Covert channels, discussed in Chapter 12, "Principles of Security Models," are means by which data can be communicated outside of normal, expected, or detectable methods.
• It does not address file sharing (a common feature on networked systems).

Biba

For many nonmilitary organizations, integrity is more important than confidentiality. Out of this need, several integrity-focused security models were developed, such as those developed by Biba and Clark-Wilson.

The Biba model was derived as a direct analogue to the Bell-LaPadula model. Biba is also based on the state machine model and the information flow model. Biba is likewise based on a classification lattice with mandatory access controls. Biba was designed to address three integrity issues:

• Prevent modification of objects by unauthorized subjects.
• Prevent unauthorized modification of objects by authorized subjects.
• Protect internal and external object consistency.

As with Bell-LaPadula, Biba requires that all subjects and objects have a classification label.
\nThus, data integrity protection is dependent upon data classification.\nBiba has two integrity axioms:\nSimple Integrity Axiom\nThe Simple Integrity Axiom (SI Axiom) states that a subject at a spe-\ncific classification level cannot read data with a lower classification level. This is often shortened \nto “no read down.”\n* Integrity Axiom\nThe * (star) Integrity Axiom (* Axiom) states that a subject at a specific \nclassification level cannot write data to a higher classification level. This is often shortened to \n“no write up.”\nThese Biba model axioms are illustrated in Figure 11.4.\nCritiques of the Biba model mention a few drawbacks:\n\u0002\nIt only addresses integrity, not confidentiality or availability.\n\u0002\nIt focuses on protecting objects from external threats; it assumes that internal threats are \nhandled programmatically.\n\u0002\nIt does not address access control management, nor does it provide a way to assign or \nchange an object’s or subject’s classification level.\n\u0002\nIt does not prevent covert channels (see Chapter 12).\n" }, { "page_number": 448, "text": "Security Models\n403\nF I G U R E\n1 1 . 4\nThe Biba model\nClark-Wilson\nThe Clark-Wilson model is also an integrity-protecting model. The Clark-Wilson model was \ndeveloped after Biba and approaches integrity protection from a different perspective. Rather \nthan employing a lattice structure, it uses a three-part relationship of subject/program/object (or \nsubject/transaction/object) known as a triple or an access control triple. Subjects do not have \ndirect access to objects. Objects can be accessed only through programs. Through the use of two \nprinciples—well-formed transactions and separation of duties—the Clark-Wilson model pro-\nvides an effective means to protect integrity.\nWell-formed transactions take the form of programs. A subject is able to access objects only \nby using a program. 
Each program has specific limitations on what it can and cannot do to an object. This effectively limits the subject's capabilities. If the programs are properly designed, then the triple relationship provides a means to protect the integrity of the object.

Separation of duties takes the form of dividing critical functions into two or more parts. A different subject must complete each part. This prevents authorized subjects from making unauthorized modifications to objects. This further protects the integrity of the object.

In addition to these two principles, auditing is required. Auditing tracks changes and access to objects as well as inputs from outside the system.

The Clark-Wilson model can also be called a restricted interface model. A restricted interface model uses classification-based restrictions to offer only subject-specific authorized information and functions. One subject at one classification level will see one set of data and have access to one set of functions, whereas another subject at a different classification level will see a different set of data and have access to a different set of functions.

Brewer and Nash Model (a.k.a. Chinese Wall)

This model was created to permit access controls to change dynamically based on a user's previous activity (making it a kind of state machine model as well). This model applies to a single integrated database; it seeks to create security domains that are sensitive to the notion of conflict of interest (for example, someone who works at Company C who has access to proprietary data for Company A should not also be allowed access to similar data for Company B if those two companies compete with one another).
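That dynamic, history-based behavior can be sketched as follows; the company names and the single conflict class are invented for illustration:

```python
# One conflict-of-interest class: Company A and Company B compete.
CONFLICT_CLASSES = [{"Company A", "Company B"}]
history = {}  # subject -> set of company datasets already accessed

def request_access(subject, dataset):
    """Grant access unless this subject has already accessed a competing
    dataset in the same conflict class; record each granted access."""
    accessed = history.setdefault(subject, set())
    for conflict_class in CONFLICT_CLASSES:
        if dataset in conflict_class and (accessed & conflict_class) - {dataset}:
            return False  # would cross the wall
    accessed.add(dataset)
    return True
```

A consultant's first request for Company A data is granted; once it is, a later request for Company B data is refused, while repeated access to Company A remains allowed. The decision depends entirely on the subject's accumulated history, which is what makes the controls dynamic.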
This model is known as the Chinese wall because it creates a class of data that defines which security domains are potentially in conflict and prevents any subject with access to one domain that belongs to a specific conflict class from accessing any other domain that belongs to the same conflict class. Metaphorically, this puts a wall around all other information in any conflict class, which explains the terminology. Thus, this model also uses the principle of data isolation within each conflict class to keep users out of potential conflict-of-interest situations (e.g., management of company datasets). Because company relationships change all the time, dynamic updates to the members of, and definitions for, conflict classes are important.

Classifying and Comparing Models

Careful reading of the preceding sections on access control models will reveal that they fall into three broad categories, as follows:

Information flow: Information flow models deal with how information moves or how changes at one security level affect other security levels. They include the information flow and noninterference models and composition theories.

Integrity: Because integrity models are concerned with how information moves from one level to another, they are a special type of information flow model. That is, they enforce security by enforcing integrity constraints. Two examples of integrity models are the Biba and Clark-Wilson models. To maintain integrity, the goals are to establish and maintain internal and external consistency, to prevent authorized users from making improper or illegal modifications, and to block unauthorized users from making any modifications whatsoever. Whereas Clark-Wilson delivers on all three goals, Biba only blocks unauthorized users from making modifications.
This explains why Clark-Wilson is used far more frequently than Biba in real-world applications.

Access control: Access control models attempt to enforce security using formal access controls, which determine whether or not subjects can access the objects they request. They include the state machine, access matrix, Take-Grant, Bell-LaPadula, and Brewer and Nash models.

When it comes to anticipating questions and coverage of the various models mentioned, the following items recur repeatedly in all of the practice exams we reviewed for this chapter:

• Biba and Clark-Wilson versus Bell-LaPadula: Biba or Clark-Wilson is used to enforce integrity, Bell-LaPadula to enforce confidentiality. Biba uses integrity levels and Clark-Wilson uses access triples where subjects must use programs to access objects (all subject to integrity constraints), whereas Bell-LaPadula uses security levels. Because Bell-LaPadula focuses on confidentiality, it's most often used in military applications; likewise, because Biba and Clark-Wilson focus on integrity, they're most often used in commercial applications.
• Of all security models, Bell-LaPadula and Biba are best known.
• Of all security models, Bell-LaPadula is used most often in military applications, Clark-Wilson in commercial ones.
• Bell-LaPadula defines access permissions using an access control matrix.
• Access control models provide a formal description of a security policy (one that's designed to make sense to a computer, in fact).
• The Clark-Wilson access triple involves an object (a constrained data item), a subject (an integrity verification procedure or a certification rule), and a program (a transformation procedure or an enforcement rule).
Because these same access triples include a program element as well as a subject, Clark-Wilson also supports separation of duties, which divides operations into disconnected parts and requires different users to perform each part to prevent fraud or misuse.
• The access matrix model is most commonly implemented using access control lists (ACLs).
• Brewer and Nash (a.k.a. Chinese wall) manages how subjects access datasets according to their assignments to conflict-of-interest classes.

Summary

Designing secure computing systems is a complex task, and many security engineers have dedicated their entire careers to understanding the innermost workings of information systems and ensuring that they support the core security functions required to safely operate in the current environment. Many security professionals don't necessarily require an in-depth knowledge of these principles, but they should have at least a broad understanding of the basic fundamentals that drive the process to enhance security within their own organizations.

Such understanding begins with an investigation of hardware, software, and firmware and how those pieces fit into the security puzzle.
It's important to understand the principles of common computer and network organizations, architectures, and designs, including addressing (both physical and symbolic), the difference between address space and memory space, and machine types (real, virtual, multistate, multitasking, multiprogramming, multiprocessing, multiprocessor, and multiuser).

Additionally, a security professional must have a solid understanding of operating states (single state, multistate), operating modes (user, supervisor, privileged), storage types (primary, secondary, real, virtual, volatile, nonvolatile, random, sequential), and protection mechanisms (layering, abstraction, data hiding, process isolation, hardware segmentation, principle of least privilege, separation of privilege, accountability).

All of this understanding must culminate in an effective system security implementation in terms of preventive, detective, and corrective controls. That's why you must also know the access control models and their functions. This includes the state machine model, Bell-LaPadula, Biba, Clark-Wilson, the information flow model, the noninterference model, the Take-Grant model, the access control matrix model, and the Brewer and Nash model.

Exam Essentials

Be able to explain the differences between multitasking, multithreading, multiprocessing, and multiprogramming.
Multitasking is the simultaneous execution of more than one application on a computer and is managed by the operating system. Multithreading permits multiple concurrent tasks to be performed within a single process. Multiprocessing is the use of more than one processor to increase computing power.
Multiprogramming is similar to multitasking but takes place on mainframe systems and requires specific programming.

Understand the differences between single state processors and multistate processors.
Single state processors are capable of operating at only one security level at a time, whereas multistate processors can simultaneously operate at multiple security levels.

Describe the four security modes approved by the federal government for processing classified information.
Dedicated systems require that all users have appropriate clearance, access permissions, and need-to-know for all information stored on the system. System high mode removes the need-to-know requirement. Compartmented mode removes the need-to-know requirement and the access permission requirement. Multilevel mode removes all three requirements.

Explain the two layered operating modes used by most modern processors.
User applications operate in a limited instruction set environment known as user mode. The operating system performs controlled operations in privileged mode, also known as system mode, kernel mode, and supervisory mode.

Describe the different types of memory used by a computer.
ROM is nonvolatile and can't be written to by the end user. PROM chips allow the end user to write data once. EPROM chips may be erased through the use of ultraviolet light and then rewritten. EEPROM chips may be erased with electrical current and then rewritten. RAM chips are volatile and lose their contents when the computer is powered off.

Know the security issues surrounding memory components.
There are three main security issues surrounding memory components: the fact that data may remain on the chip after power is removed, the fact that memory chips are highly pilferable, and the control of access to memory in a multiuser system.

Describe the different characteristics of storage devices used by computers.
Primary storage is the same as memory.
Secondary storage consists of magnetic and optical media that must first be read into primary memory before the CPU can use the data. Random access storage devices can be read at any point, whereas sequential access devices require scanning through all the data physically stored before the desired location.

Know the security issues surrounding secondary storage devices.
There are three main security issues surrounding secondary storage devices: removable media can be used to steal data, access controls and encryption must be applied to protect data, and data can remain on the media even after file deletion or media formatting.

Understand security risks that input and output devices can pose.
Input/output devices can be subject to eavesdropping and tapping, used to smuggle data out of an organization, or used to create unauthorized, insecure points of entry into an organization’s systems and networks. Be prepared to recognize and mitigate such vulnerabilities.

Understand I/O addresses, configuration, and setup.
Working with legacy PC devices requires some understanding of IRQs, DMA, and memory-mapped I/O. Be prepared to recognize and work around potential address conflicts and misconfigurations and to integrate legacy devices with Plug and Play (PnP) counterparts.

Know the purpose of firmware.
Firmware is software stored on a ROM chip. At the computer level, it contains the basic instructions needed to start a computer. Firmware is also used to provide operating instructions in peripheral devices such as printers.

Be able to describe process isolation, layering, abstraction, data hiding, and hardware segmentation.
Process isolation ensures that individual processes can access only their own data. Layering creates different realms of security within a process and limits communication between them. 
Abstraction creates “black box” interfaces without requiring knowledge of an algorithm’s or device’s inner workings. Data hiding prevents information from being read from a different security level. Hardware segmentation enforces process isolation with physical controls.

Understand how a security policy drives system design, implementation, testing, and deployment.
The role of a security policy is to inform and guide the design, development, implementation, testing, and maintenance of some particular system.

Understand how the principle of least privilege, separation of privilege, and accountability apply to computer architecture.
The principle of least privilege ensures that only a minimum number of processes are authorized to run in supervisory mode. Separation of privilege increases the granularity of secure operations. Accountability ensures that an audit trail exists to trace operations back to their source.

Know details about each of the access control models.
The state machine model ensures that all instances of subjects accessing objects are secure. Bell-LaPadula subjects have a clearance level that allows them to access only objects with corresponding classification levels. Biba prevents subjects with lower security levels from writing to objects at higher security levels. Clark-Wilson is an integrity model that relies on auditing to ensure that unauthorized subjects cannot access objects and that authorized users access objects properly. The information flow model is designed to prevent unauthorized, insecure, or restricted information flow. The noninterference model prevents the actions of one subject from affecting the system state or actions of another subject. The Take-Grant model dictates how rights can be passed from one subject to another or from a subject to an object. 
Finally, an access control matrix is a table of subjects and objects that indicates the actions or functions that each subject can perform on each object.

Review Questions

1. Many PC operating systems provide functionality that enables them to support the simultaneous execution of multiple applications on single-processor systems. What term is used to describe this capability?
A. Multiprogramming
B. Multithreading
C. Multitasking
D. Multiprocessing

2. Which one of the following devices is most susceptible to TEMPEST monitoring of its emanations?
A. Floppy drive
B. Monitor
C. CD-ROM
D. Keyboard

3. You have three applications running on a single-processor system that supports multitasking. One of those applications is a word processing program that is managing two threads simultaneously. The other two applications are using only one thread of execution. How many application threads are running on the processor at any given time?
A. 1
B. 2
C. 3
D. 4

4. What type of federal government computing system requires that all individuals accessing the system have a need-to-know for all of the information processed by that system?
A. Dedicated
B. System high
C. Compartmented
D. Multilevel

5. What term describes the processor mode used to run the system tools used by administrators seeking to make configuration changes to a machine?
A. User mode
B. Supervisory mode
C. Kernel mode
D. Privileged mode

6. What type of memory chip allows the end user to write information to the memory only one time and then preserves that information indefinitely without the possibility of erasure?
A. ROM
B. PROM
C. EPROM
D. EEPROM

7. Which type of memory chip can be erased only when it is removed from the computer and exposed to a special type of ultraviolet light?
A. ROM
B. PROM
C. 
EPROM
D. EEPROM

8. Which one of the following types of memory might retain information after being removed from a computer and, therefore, represent a security risk?
A. Static RAM
B. Dynamic RAM
C. Secondary memory
D. Real memory

9. What is the single largest security threat RAM chips pose to your organization?
A. Data retention
B. Fire
C. Theft
D. Electronic emanations

10. What type of electrical component serves as the primary building block for dynamic RAM chips?
A. Capacitor
B. Resistor
C. Flip-flop
D. Transistor

11. Which one of the following storage devices is most likely to require encryption technology in order to maintain data security in a networked environment?
A. Hard disk
B. Backup tape
C. Floppy disk
D. RAM

12. In which of the following security modes can you be assured that all users have access permissions for all information processed by the system but will not necessarily have a need-to-know for all of that information?
A. Dedicated
B. System high
C. Compartmented
D. Multilevel

13. Which one of the following security modes does not require that all users have a security clearance for the highest level of information processed by the system?
A. Dedicated
B. System high
C. Compartmented
D. Multilevel

14. What type of memory device is normally used to contain a computer’s BIOS?
A. PROM
B. EEPROM
C. ROM
D. EPROM

15. What type of memory is directly available to the CPU and does not need to be loaded?
A. RAM
B. ROM
C. Register memory
D. Virtual memory

16. In what type of addressing scheme is the data actually supplied to the CPU as an argument to the instruction?
A. Direct addressing
B. Immediate addressing
C. Base+Offset addressing
D. Indirect addressing

17. What type of addressing scheme supplies the CPU with a location that contains the memory address of the actual operand?
A. Direct addressing
B. 
Immediate addressing
C. Base+Offset addressing
D. Indirect addressing

18. What security principle helps prevent users from accessing memory spaces assigned to applications being run by other users?
A. Separation of privilege
B. Layering
C. Process isolation
D. Least privilege

19. Which security principle mandates that only a minimum number of operating system processes should run in supervisory mode?
A. Abstraction
B. Layering
C. Data hiding
D. Least privilege

20. Which security principle takes the concept of process isolation and implements it using physical controls?
A. Hardware segmentation
B. Data hiding
C. Layering
D. Abstraction

Answers to Review Questions

1. C. Multitasking is processing more than one task at the same time. In most cases, multitasking is actually simulated by the operating system even when not supported by the processor.

2. B. Although all electronic devices emit some unwanted emanations, monitors are the devices most susceptible to this threat.

3. A. A single-processor system can operate on only one thread at a time. There would be a total of four application threads (ignoring any threads created by the operating system), but the operating system would be responsible for deciding which single thread is running on the processor at any given time.

4. A. In a dedicated system, all users must have a valid security clearance for the highest level of information processed by the system, they must have access approval for all information processed by the system, and they must have a valid need-to-know for all information processed by the system.

5. A. All user applications, regardless of the security permissions assigned to the user, execute in user mode. 
Supervisory mode, kernel mode, and privileged mode are all terms that describe the mode used by the processor to execute instructions that originate from the operating system itself.

6. B. Programmable read-only memory (PROM) chips may be written once by the end user but may never be erased. The contents of ROM chips are burned in at the factory, and the end user is not allowed to write data. EPROM and EEPROM chips both make provisions for the end user to somehow erase the contents of the memory device and rewrite new data to the chip.

7. C. EPROMs may be erased through exposure to high-intensity ultraviolet light. ROM and PROM chips do not provide erasure functionality. EEPROM chips may be erased through the application of electrical currents to the chip pins and do not require removal from the computer prior to erasure.

8. C. Secondary memory is a term used to describe magnetic and optical media. These devices will retain their contents after being removed from the computer and may be later read by another user.

9. C. RAM chips are highly pilferable items, and the single greatest threat they pose is the economic loss that would result from their theft.

10. A. Dynamic RAM chips are built from a large number of capacitors, each of which holds a single electrical charge. These capacitors must be continually refreshed by the CPU in order to retain their contents. The data stored in the chip is lost when power is removed.

11. C. Floppy disks are easily removed, and it is often not possible to apply operating system access controls to them. Therefore, encryption is often the only security measure short of physical security that can be afforded to them. Backup tapes are most often well controlled through physical security measures. Hard disks and RAM chips are often secured through operating system access controls.

12. C. 
In system high mode, all users have appropriate clearances and access permissions for all \ninformation processed by the system but have a need-to-know for only some of the information \nprocessed by that system.\n13. D. In a multilevel security mode system, there is no requirement that all users have appropriate \nclearances to access all of the information processed by the system.\n14. B. BIOS and device firmware are often stored on EEPROM chips in order to facilitate future \nfirmware updates.\n15. C. Registers are small memory locations that are located directly on the CPU chip itself. The \ndata stored within them is directly available to the CPU and can be accessed extremely quickly.\n16. B. In immediate addressing, the CPU does not need to actually retrieve any data from memory. \nThe data is contained in the instruction itself and can be immediately processed.\n17.\nD. In indirect addressing, the location provided to the CPU contains a memory address. The \nCPU retrieves the operand by reading it from the memory address provided (hence the use of the \nterm indirect).\n18. C. Process isolation provides separate memory spaces to each process running on a system. This \nprevents processes from overwriting each other’s data and ensures that a process can’t read data \nfrom another process.\n19. D. The principle of least privilege states that only processes that absolutely need kernel-level \naccess should run in supervisory mode. The remaining processes should run in user mode to \nreduce the number of potential security vulnerabilities.\n20. A. 
Hardware segmentation achieves the same objectives as process isolation but takes them to a higher level by implementing them with physical controls in hardware.

Chapter 12: Principles of Security Models

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
- Principles of Common Security Models, Architectures, and Evaluation Criteria
- Common Flaws and Security Issues Associated with System Architectures and Designs

Increasing the security level of information systems is a challenging task for any organization. Ideally, security is something that is planned and integrated from the very inception of a system’s architecture and considered at each stage of its development, testing, deployment, and day-to-day use. The first step in this endeavor is to evaluate an organization’s current levels of security exposure by carefully examining its information systems and checking for vulnerability to threats or attack. Next, one must decide what steps to take to remedy any such exposures as may be discovered during the examination process. Making decisions about which solutions will work well can be the most difficult part of the process when seeking to secure information systems properly. If this is not to become a constant case of discovering vulnerabilities and applying relevant security patches or fixes—as is so common with systems like Windows, Unix, and Linux today—the level of security consciousness and attention during initial system design and implementation must be substantially increased.
Understanding the philosophy behind security solutions helps to limit one’s search for the best security controls for a specific situation and for specific security needs. In this chapter, we discuss methods to evaluate the levels of security that a system provides. 
We also refer back to the general security models (originally introduced in Chapter 11, “Principles of Computer Design”) upon which many security controls are constructed. Next, we talk about Common Criteria and other methods that governments and corporations alike use to evaluate information systems from a security perspective, with particular emphasis on U.S. Department of Defense and international security evaluation criteria. We finish off this chapter by discussing commonly encountered design flaws and other security-related issues that can make information systems susceptible to attack.

Common Security Models, Architectures, and Evaluation Criteria

The process of determining how secure a system is can be difficult and time consuming. Organizations need methods to evaluate given systems, to assign general security ratings, and to determine if a system meets a security policy’s requirements. Further, any such security rating should be general enough to enable meaningful comparison among multiple systems, along with their relative levels of security. The following sections describe the process involved in evaluating a computer system’s level of security. We begin by introducing and explaining basic concepts and terminology used to describe information system security and talk about secure computing, secure perimeters, security and access monitors, and kernel code. We turn to security models to explain how access and security controls may be implemented. We also briefly explain how system security may be categorized as either open or closed; describe a set of standard security techniques used to ensure confidentiality, integrity, and availability of data; discuss security controls; and introduce a standard suite of secure networking protocols.

Trusted Computing Base (TCB)

An old U.S. 
Department of Defense standard known colloquially as “the Orange Book” (DoD Standard 5200.28, covered in more detail later in this chapter in the “Rainbow Series” section) describes a trusted computing base (TCB) as a combination of hardware, software, and controls that works together to form a trusted base that enforces your security policy. The TCB is a subset of a complete information system. It should be as small as possible so that a detailed analysis can reasonably ensure that the system meets design specifications and requirements. The TCB is the only portion of that system that can be trusted to adhere to and enforce the security policy. It is not necessary that every component of a system be trusted. But anytime you consider a system from a security standpoint, your evaluation should include all trusted components that define that system’s TCB.
In general, TCB components in a system are responsible for controlling access to the system. The TCB must provide methods to access resources both inside and outside the TCB itself. TCB components commonly restrict the activities of components outside the TCB. It is the responsibility of TCB components to ensure that a system behaves properly in all cases and that it adheres to the security policy under all circumstances.

Security Perimeter

The security perimeter of your system is an imaginary boundary that separates the TCB from the rest of the system. For the TCB to communicate with the rest of the system, it must create secure channels, also called trusted paths. A trusted path is a channel established with strict standards to allow necessary communication to occur without exposing the TCB to security vulnerabilities. A trusted path also protects system users (sometimes known as subjects) from compromise as a result of a TCB interchange. 
As you learn more about formal security guidelines and evaluation criteria later in this chapter, you’ll also learn that trusted paths are required in systems that seek to deliver high levels of security to their users. According to the TCSEC guidelines described later in this chapter, trusted paths are required in B2 and higher systems.

Reference Monitors and Kernels

When the time comes to implement a secure system, it’s essential to develop some part of the TCB to enforce access controls on system assets and resources (sometimes known as objects). The part of the TCB that validates access to every resource prior to granting access requests is called the reference monitor. The reference monitor stands between every subject and object, verifying that a requesting subject’s credentials meet the object’s access requirements before any requests are allowed to proceed. If such access requirements aren’t met, access requests are turned down. The reference monitor may be a conceptual part of the TCB; it need not be an actual, stand-alone or independent working system component.
The collection of components in the TCB that work together to implement reference monitor functions is called the security kernel. The purpose of the security kernel is to launch appropriate components to enforce reference monitor functionality and resist all known attacks. The security kernel uses a trusted path to communicate with subjects. It also mediates all resource access requests, granting only those requests that match the appropriate access rules in use for a system.
The reference monitor requires descriptive information about each resource that it protects. Such information normally includes its classification and designation. 
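The mediation step can be illustrated with a short sketch. This is not any real TCB’s interface: the function names, the linear ordering of classification labels, and the label strings are all assumptions made for this example only.

```python
# Illustrative sketch of reference monitor mediation. The LEVELS table and
# the mediate() function are hypothetical; a real reference monitor is part
# of the TCB and enforced below the application layer.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def mediate(subject_clearance: str, object_classification: str, operation: str) -> bool:
    """Grant a read request only when the subject's clearance dominates the
    object's classification; fail closed on anything else."""
    if operation != "read":
        return False  # deny operations this sketch does not model
    return LEVELS[subject_clearance] >= LEVELS[object_classification]

# A subject cleared to "secret" may read a "confidential" object,
# but a request to read a "top secret" object is turned down.
print(mediate("secret", "confidential", "read"))  # True
print(mediate("secret", "top secret", "read"))    # False
```

The key design point the sketch captures is that the monitor sits in the path of every request and consults the object’s descriptive information (here, its classification) before granting access.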
When a subject requests access to an object, the reference monitor consults the object’s descriptive information to discern whether access should be granted or denied (see the sidebar “Tokens, Capabilities, and Labels” for more information on how this works).

Security Models

A security model provides a framework inside which one can implement a security policy. Where a security policy is an abstract statement of security intentions, a security model represents exactly how the policy should be implemented. A good model accurately represents each facet of the security policy and how to implement some control to enforce the facet. The following sections discuss three well-known security models, originally introduced in Chapter 11, and their basic features and functions. Each security model shares similarities with the others but also has its own unique characteristics.
A security model provides a way for designers to map abstract statements in a security policy into the algorithms and data structures necessary to build software. Thus, a security model gives software designers something against which to measure their design and implementation. That model, of course, must support each part of the security policy. In this way, developers can be sure their security implementation supports the security policy.

Tokens, Capabilities, and Labels

There are several different methods in use to describe the necessary security attributes for an object. A security token is a separate object that is associated with a resource and describes its security attributes. This token can communicate security information about an object prior to requesting access to the actual object. In other implementations, various lists are used to store security information about multiple objects. A capabilities list maintains a row of security attributes for each controlled object. 
Although not as flexible as the token approach, capabilities lists generally offer quicker lookups when a subject requests access to an object. A third common type of attribute storage is called a security label. A security label is generally a permanent part of the object to which it’s attached. Once a security label is set, it normally cannot be altered. This permanence provides another safeguard against tampering that neither tokens nor capabilities lists provide.

Bell-LaPadula Model

The Bell-LaPadula model was developed by the U.S. Department of Defense (DoD) in the 1970s to address concerns about protecting classified information. The DoD stores multiple levels of classified documents. The classifications the DoD uses are unclassified, sensitive but unclassified, confidential, secret, and top secret. Any person with a secret security clearance can access secret, confidential, sensitive but unclassified, and unclassified documents but not top secret documents. Also, to access a document, the person seeking access must also have a need-to-know for that document.
The complexities involved in ensuring the confidentiality of documents are addressed in the Bell-LaPadula model. This model is built on a state machine concept. The state machine supports multiple states with explicit transitions between any two states; this concept is used because the correctness of the machine, and guarantees of document confidentiality, can be proven mathematically. 
There are three basic properties of this state machine:
- The Simple Security Property states that a subject may not read information at a higher sensitivity level (no read up).
- The * (star) Security Property states that a subject may not write information to an object at a lower sensitivity level (no write down).
- The Discretionary Security Property states that the system uses an access matrix to enforce discretionary access control.
The Bell-LaPadula properties are in place to protect data confidentiality. A subject cannot read an object that is classified at a higher level than the subject is cleared for. Because objects at one level have data that is more sensitive or secret than data at a lower level, a subject cannot write data from one level to an object at a lower level (with the exception of a trusted subject). That action would be similar to pasting a top secret memo into an unclassified document file. The third property enforces a subject’s “need-to-know” in order to access an object.
The Bell-LaPadula model addresses only the confidentiality of data. It does not address its integrity or availability. Because it was designed in the 1970s, it does not support many operations that are common today, such as file sharing. It also assumes secure transitions between security layers and does not address covert channels (covered later in this chapter). Bell-LaPadula does handle confidentiality well, so it is often used in combination with other models that provide mechanisms to handle integrity and availability.

Biba Model

The Biba model was designed after the Bell-LaPadula model. Where the Bell-LaPadula model addresses confidentiality, the Biba model addresses integrity. The Biba model is also built on a state machine concept. In fact, Biba appears to be pretty similar to the Bell-LaPadula model. Both use states and transitions. Both have basic properties. 
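The Bell-LaPadula simple and star properties listed earlier reduce to two small predicates, which can be sketched as follows. The numeric sensitivity levels and function names are assumptions for illustration; the formal model is defined over states and transitions, not over these two checks alone.

```python
# Bell-LaPadula read/write rules as predicates over numeric sensitivity
# levels (higher number = more sensitive). Illustrative sketch only.
def can_read(subject_level: int, object_level: int) -> bool:
    # Simple Security Property: no read up
    return subject_level >= object_level

def can_write(subject_level: int, object_level: int) -> bool:
    # * (star) Security Property: no write down
    return subject_level <= object_level

CONFIDENTIAL, SECRET, TOP_SECRET = 1, 2, 3
print(can_read(SECRET, CONFIDENTIAL))   # True: reading down is allowed
print(can_read(SECRET, TOP_SECRET))     # False: no read up
print(can_write(SECRET, CONFIDENTIAL))  # False: no write down
print(can_write(SECRET, TOP_SECRET))    # True: writing up is permitted
```

Note how the two predicates are mirror images of each other, which is what keeps sensitive data from leaking downward in either direction.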
The biggest difference is their primary focus: Biba primarily protects data integrity. Here are the basic properties of the Biba model state machine:
- The Simple Integrity Property states that a subject cannot read an object at a lower integrity level (no read down).
- The * (star) Integrity Property states that a subject cannot modify an object at a higher integrity level (no write up).
When you compare Biba to Bell-LaPadula, you will notice that they look like opposites. That’s because they focus on different areas of security. Where the Bell-LaPadula model ensures data confidentiality, Biba ensures data integrity.
Consider both Biba properties. The second property of the Biba model is pretty straightforward. A subject cannot write to an object at a higher integrity level. That makes sense. What about the first property? Why can’t a subject read an object at a lower integrity level? The answer takes a little thought. Think of integrity levels as being like the purity level of air. You would not want to pump air from the smoking section into the clean room environment. The same applies to data. When integrity is important, you do not want unvalidated data read into validated documents. The potential for data contamination is too great to permit such access.
Because the Biba model focuses on data integrity, it is a more common choice for commercial security models than the Bell-LaPadula model. Most commercial organizations are more concerned with the integrity of their data than its confidentiality.

Clark-Wilson Model

Although the Biba model works in commercial applications, another model was designed in 1987 specifically for the commercial environment. The Clark-Wilson model uses a multifaceted approach to enforcing data integrity. 
Instead of defining a formal state machine, the Clark-Wilson model defines each data item and allows modifications through only a small set of programs. Clark-Wilson defines the following items and procedures:
- A constrained data item (CDI) is any data item whose integrity is protected by the security model.
- An unconstrained data item (UDI) is any data item that is not controlled by the security model. Any data that is to be input and hasn’t been validated, or any output, would be considered an unconstrained data item.
- An integrity verification procedure (IVP) is a procedure that scans data items and confirms their integrity.
- Transformation procedures (TPs) are the only procedures that are allowed to modify a CDI. The limited access to CDIs through TPs forms the backbone of the Clark-Wilson integrity model.
The Clark-Wilson model uses security labels to grant access to objects, but only through transformation procedures. The model also enforces separation of duties to further protect the integrity of data. Through these mechanisms, the Clark-Wilson model ensures that data is protected from unauthorized changes by any user. The Clark-Wilson design makes it a very good model for commercial applications.

Objects and Subjects

Controlling access to any resource in a secure system involves two entities. The subject of the access is the user or process that makes a request to access a resource. Access can mean reading from or writing to a resource. The object of an access is the resource a user or process wants to access. Keep in mind that the subject and object refer to some specific access request, so the same resource can serve as a subject and an object in different access requests.
For example, process A may ask for data from process B. 
To satisfy process A’s request, process B must ask for data from process C. In this example, process B is the object of the first request and the subject of the second request.

Closed and Open Systems

Systems are designed and built according to two differing philosophies. A closed system is designed to work well with a narrow range of other systems, generally all from the same manufacturer. The standards for closed systems are often proprietary and not normally disclosed. Open systems, on the other hand, are designed using agreed-upon industry standards. Open systems are much easier to integrate with systems from different manufacturers that support the same standards.
Closed systems are harder to integrate with unlike systems, but they can be more secure. A closed system often comprises proprietary hardware and software that does not incorporate industry standards. This lack of integration ease means that attacks on many generic system components either will not work or must be customized to be successful. In many cases, attacking a closed system is harder than launching an attack on an open system. Many software and hardware components with known vulnerabilities may not exist on a closed system. In addition to the lack of known vulnerable components on a closed system, it is often necessary to possess more in-depth knowledge of the specific target system to launch a successful attack.
Open systems are generally far easier to integrate with other open systems. It is easy, for example, to create a LAN with a Microsoft Windows 2000 machine, a Linux machine, and a Macintosh machine. Although all three computers use different operating systems and represent at least two different hardware architectures, each supports industry standards and makes it easy for networked (or other) communications to occur. This ease comes at a price, however. 
\nBecause standard communications components are incorporated into each of these three open \nsystems, there are far more entry points and methods for launching attacks. In general, their \nopenness makes them more vulnerable to attack, and their widespread availability makes it pos-\nsible for attackers to find (and even to practice on) plenty of potential targets. Also, open sys-\ntems are more popular than closed systems and attract more attention. An attacker who \ndevelops basic cracking skills will find more targets on open systems than on closed ones. This \nlarger “market” of potential targets normally means that there is more emphasis on targeting \nopen systems. Inarguably, there’s a greater body of shared experience and knowledge on how \nto attack open systems than there is for closed systems.\nFirst request\nprocess A (subject) process B (object)\nSecond request\nprocess B (subject) process C (object)\n" }, { "page_number": 467, "text": "422\nChapter 12\n\u0002 Principles of Security Models\nTechniques for Ensuring Confidentiality, Integrity, \nand Availability\nTo guarantee the confidentiality, integrity, and availability of data, you must ensure that all \ncomponents that have access to data are secure and well behaved. Software designers use dif-\nferent techniques to ensure that programs do only what is required and nothing more. Suppose \na program writes to and reads from an area of memory that is being used by another program. \nThe first program could potentially violate all three security tenets: confidentiality, integrity, \nand availability. If an affected program is processing sensitive or secret data, that data’s confi-\ndentiality is no longer guaranteed. If that data is overwritten or altered in an unpredictable way \n(a common problem when multiple readers and writers inadvertently access the same shared \ndata), there is no guarantee of integrity. 
And, if data modification results in corruption or outright loss, it could become unavailable for future use. Although the concepts we discuss in this section all relate to software programs, they are also commonly used in all areas of security. For example, physical confinement guarantees that all physical access to hardware is controlled.

Confinement

Software designers use process confinement to restrict the actions of a program. Simply put, process confinement allows a process to read from and write to only certain memory locations and resources. The operating system, or some other security component, disallows illegal read/write requests. If a process attempts to initiate an action beyond its granted authority, that action will be denied. In addition, further actions, such as logging the violation attempt, may be taken. Systems that must comply with higher security ratings most likely record all violations and respond in some tangible way. Generally, the offending process is terminated.

Bounds

Each process that runs on a system is assigned an authority level. The authority level tells the operating system what the process can do. In simple systems, there may be only two authority levels: user and kernel. The authority level tells the operating system how to set the bounds for a process. The bounds of a process consist of limits set on the memory addresses and resources it can access. The bounds state the area within which a process is confined. In most systems, these bounds segment logical areas of memory for each process to use. It is the responsibility of the operating system to enforce these logical bounds and to disallow access to other processes. More secure systems may require physically bounded processes. Physical bounds require each bounded process to run in an area of memory that is physically separated from other bounded processes, not just logically bounded in the same memory space.
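The logical bounds check just described can be illustrated with a brief sketch. This is not any real operating system interface; the `Process` and `Memory` classes and their address ranges are hypothetical, purely to show how an enforced bound denies out-of-range access:

```python
# Illustrative sketch of logical process bounds: each process may read
# and write only addresses inside its assigned range. All names here
# are invented for illustration, not an OS API.

class Process:
    def __init__(self, name, lower, upper):
        self.name = name
        self.bounds = range(lower, upper)   # addresses this process may use

class Memory:
    def __init__(self, size):
        self.cells = [0] * size

    def write(self, proc, addr, value):
        # The "operating system" checks the bounds before any access.
        if addr not in proc.bounds:
            raise PermissionError(f"{proc.name}: address {addr} out of bounds")
        self.cells[addr] = value

mem = Memory(16)
a = Process("A", 0, 8)
b = Process("B", 8, 16)

mem.write(a, 3, 42)      # allowed: 3 is inside A's bounds
try:
    mem.write(b, 3, 99)  # denied: 3 belongs to A's region
except PermissionError as e:
    print(e)             # B: address 3 out of bounds
```

Real systems enforce such bounds in memory-management hardware rather than in application code, but the decision being made is the same.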
Physically bounded memory can be very expensive, but it's also more secure than logical bounds.

Isolation

When a process is confined through enforcing access bounds, that process runs in isolation. Process isolation ensures that any behavior will affect only the memory and resources associated with the isolated process. These three concepts (confinement, bounds, and isolation) make designing secure programs and operating systems more difficult, but they also make it possible to implement more secure systems.

Controls

We introduced the concept of security controls in Chapter 1, "Accountability and Access Control." To ensure the security of a system, you need to allow subjects to access only authorized objects. A control uses access rules to limit the access by a subject to an object. Access rules state which objects are valid for each subject. Further, an object might be valid for one type of access and be invalid for another type of access. One common control is for file access. A file can be protected from modification by making it read-only for most users but read-write for a small set of users who have the authority to modify it.

Recall from Chapter 1 that there are both mandatory and discretionary access controls, often called MAC and DAC, respectively. With mandatory controls, static attributes of the subject and the object are considered to determine the permissibility of an access. Each subject possesses attributes that define its clearance, or authority to access resources. Each object possesses attributes that define its classification. Different types of security methods classify resources in different ways. For example, subject A is granted access to object B if the security system can find a rule that allows a subject with subject A's clearance to access an object with object B's classification.
This is called rule-based access control. The predefined rules state which subjects can access which objects.

Discretionary controls differ from mandatory controls in that the subject has some ability to define the objects to access. Within limits, discretionary access controls allow the subject to define a list of objects to access as needed. This access control list (often called an ACL) serves as a dynamic access rule set that the subject can modify. The constraints imposed on the modifications often relate to the subject's identity. Based on the identity, the subject may be allowed to add or modify the rules that define access to objects.

Both mandatory and discretionary access controls limit the access to objects by subjects. The primary goals of controls are to ensure the confidentiality and integrity of data by disallowing unauthorized access by authorized or unauthorized subjects.

Trust and Assurance

Proper security concepts, controls, and mechanisms must be integrated before and during the design and architectural period in order to produce a reliably secure product. Security issues should not be added on as an afterthought; this causes oversights, increased costs, and less reliability. Once security is integrated into the design, it must be engineered, implemented, tested, audited, evaluated, certified, and finally accredited.

A trusted system is one in which all protection mechanisms work together to process sensitive data for many types of users while maintaining a stable and secure computing environment. Assurance is simply defined as the degree of confidence in satisfaction of security needs. Assurance must be continually maintained, updated, and reverified. This is true whether the trusted system experiences a known change or a significant amount of time has passed. In either case, change has occurred at some level. Change is often the antithesis of security; it often diminishes security.
So, whenever change occurs, the system needs to be reevaluated to verify that the level of security it provided previously is still intact. Assurance varies from one system to another and must be established on individual systems. However, there are grades or levels of assurance that can be placed across numerous systems of the same type, systems that support the same services, or systems that are deployed in the same geographic location.

Understanding System Security Evaluation

Those who purchase information systems for certain kinds of applications—think, for example, about national security agencies where sensitive information may be extremely valuable (or dangerous in the wrong hands) or central banks or securities traders where certain data may be worth billions of dollars—often want to understand their security strengths and weaknesses. Such buyers are often willing to consider only systems that have been subjected to formal evaluation processes in advance and received some kind of security rating so that they know what they're buying (and, usually, also what steps they must take to keep such systems as secure as possible).

When formal evaluations are undertaken, systems are usually subjected to a two-step process. In the first step, a system is tested and a technical evaluation is performed to make sure that the system's security capabilities meet criteria laid out for its intended use. In the second step, the system is subjected to a formal comparison of its design and security criteria and its actual capabilities and performance, and individuals responsible for the security and veracity of such systems must decide whether to adopt them, reject them, or make some changes to their criteria and try again.
Very often, in fact, trusted third parties (such as TruSecure Corporation, well known for its security testing laboratories) are hired to perform such evaluations; the most important result from such testing is their "seal of approval" that the system meets all essential criteria. Whether the evaluations are conducted inside an organization or out of house, the adopting organization must decide to accept or reject the proposed systems. An organization's management must take formal responsibility if and when systems are adopted and be willing to accept any risks associated with their deployment and use.

Rainbow Series

Since the 1980s, governments, agencies, institutions, and business organizations of all kinds have had to face the risks involved in adopting and using information systems. This led to a historical series of information security standards that attempted to specify minimum acceptable security criteria for various categories of use. Such categories were important as purchasers attempted to obtain and deploy systems that would protect and preserve their contents or that would meet various mandated security requirements (such as those that contractors must routinely meet to conduct business with the government). The first such set of standards resulted in the creation of the Trusted Computer System Evaluation Criteria in the 1980s, as the U.S. Department of Defense (DoD) worked to develop and impose security standards for the systems it purchased and used. In turn, this led to a whole series of such publications through the mid-1990s. Since these publications were routinely identified by the color of their covers, they are known collectively as the "rainbow series."

Following in the DoD's footsteps, other governments or standards bodies created computer security standards that built and improved on the rainbow series elements.
Significant standards in this group include a European model called the Information Technology Security Evaluation Criteria (ITSEC), which was developed in 1990 and used through 1998. They also include the so-called Common Criteria, adopted by the U.S., Canada, France, Germany, and the U.K. in 1998, but more formally known as the "Arrangement on the Recognition of Common Criteria Certificates in the Field of IT Security." Both of these standards will be discussed in later sections as well.

When governments or other security-conscious agencies evaluate information systems, they make use of various standard evaluation criteria. In 1985, the National Computer Security Center (NCSC) developed the Trusted Computer System Evaluation Criteria (TCSEC), usually called the "Orange Book" because of the color of this publication's covers. The TCSEC established guidelines to be used when evaluating a stand-alone computer from the security perspective. These guidelines address basic security functionality and allow evaluators to measure and rate a system's functionality and trustworthiness. In the TCSEC, in fact, functionality and security assurance are combined and not separated as they are in security criteria developed later. TCSEC guidelines were designed to be used when evaluating vendor products or by vendors to ensure that they build all necessary functionality and security assurance into new products.

Next, we'll take a look at some of the details in the Orange Book itself and then talk about some of the other important elements in the rainbow series.

TCSEC Classes and Required Functionality

TCSEC combines the functionality and assurance rating of a system into four major categories. These categories are then subdivided into additional subcategories.
TCSEC defines the following major categories:

Category A	Verified protection
Category B	Mandatory protection
Category C	Discretionary protection
Category D	Minimal protection

Category D is reserved for systems that have been evaluated but do not meet requirements to belong to any other category. In this scheme, category A systems have the highest level of security and category D represents systems with the lowest level of security. The sections that follow next include brief discussions of categories A through C along with numeric suffixes that represent any applicable subcategories.

Discretionary Protection (Categories C1, C2)

Discretionary protection systems provide basic access control. Systems in this category do provide some security controls but are lacking in more sophisticated and stringent controls that address specific needs for secure systems. C1 and C2 systems provide basic controls and complete documentation for system installation and configuration.

Discretionary Security Protection (C1)
A discretionary security protection system controls access by user IDs and/or groups. Although there are some controls in place that limit object access, systems in this category only provide weak protection.

Controlled Access Protection (C2)
Controlled access protection systems are stronger than C1 systems. Users must be identified individually to gain access to objects. C2 systems must also enforce media cleansing. With media cleansing, any media that is reused by another user must first be thoroughly cleansed so that no remnant of the previous data remains available for inspection or use.
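The media cleansing requirement can be illustrated with a small sketch. The `cleanse` routine and the byte buffer standing in for reusable media are hypothetical, not drawn from any evaluated system; real cleansing standards specify particular overwrite patterns and pass counts:

```python
# Sketch of media cleansing: before storage is reassigned to another
# user, every cell is overwritten so that no remnant of the previous
# user's data remains recoverable. Purely illustrative.

def cleanse(media):
    for i in range(len(media)):
        media[i] = 0          # overwrite the previous user's data
    return media

disk = list(b"TOP SECRET")    # remnant data left by the previous user
cleanse(disk)
print(any(disk))              # False: nothing recoverable remains
```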
Additionally, strict logon procedures must be enforced that restrict access for invalid or unauthorized users.

Mandatory Protection (Categories B1, B2, B3)

Mandatory protection systems provide more security controls than category D or C systems. More granularity of control is mandated, so security administrators can apply specific controls that allow only very limited sets of subject/object access. This category of systems is based on the Bell-LaPadula model. Mandatory access is based on security labels.

Labeled Security (B1)
In a labeled security system, each subject and each object has a security label. A B1 system grants access by matching up the subject and object labels and comparing their permission compatibility. B1 systems support sufficient security to house classified data.

Structured Protection (B2)
In addition to the requirement for security labels (as in B1 systems), B2 systems must ensure that no covert channels exist. Operator and administrator functions are separated and process isolation is maintained. B2 systems are sufficient for classified data that requires more security functionality than a B1 system can deliver.

Security Domains (B3)
Security domain systems provide more secure functionality by further increasing the separation and isolation of unrelated processes. Administration functions are clearly defined and separate from functions available to other users. The focus of B3 systems shifts to simplicity to reduce any exposure to vulnerabilities in unused or extra code. The secure state of B3 systems must also be addressed during the initial boot process. B3 systems are difficult to attack successfully and provide sufficient secure controls for very sensitive or secret data.

Verified Protection (Category A1)

Verified protection systems are similar to B3 systems in the structure and controls they employ.
\nThe difference is in the development cycle. Each phase of the development cycle is controlled \nusing formal methods. Each phase of the design is documented, evaluated, and verified before \nthe next step is taken. This forces extreme security consciousness during all steps of develop-\nment and deployment and is the only way to formally guarantee strong system security.\nA verified design system starts with a design document that states how the resulting system \nwill satisfy the security policy. From there, each development step is evaluated in the context of \nthe security policy. Functionality is crucial, but assurance becomes more important than in \nlower security categories. A1 systems represent the top level of security and are designed to han-\ndle top secret data. Every step is documented and verified, from the design all the way through \nto delivery and installation.\nOther Colors in the Rainbow Series\nAltogether, there are nearly 30 titles in the collection of DoD documents that either add to or \nfurther elaborate on the Orange Book. Although the colors don’t necessarily mean anything, \nthey’re used to describe publications in this series. Other important elements in this collection \nof documents include the following (for a more complete list, please consult Table 12.1):\nRed Book\nBecause the Orange Book applies only to stand-alone computers not attached to a \nnetwork and so many systems were used on networks (even in the 1980s), the Red Book was \ndeveloped to interpret the TCSEC in a networking context. In fact, the official title of the Red \nBook is the Trusted Network Interpretation (TNI), so it could be considered an interpretation \nof the Orange Book with a bent on networking. Quickly, the Red Book became more relevant \n" }, { "page_number": 472, "text": "Understanding System Security Evaluation\n427\nand important to system buyers and builders than the Orange Book. 
The following list includes a few other functions of the Red Book:

•	Rates confidentiality and integrity

•	Addresses communications integrity

•	Addresses denial of service protection

•	Addresses compromise (i.e., intrusion) protection and prevention

•	Is restricted to a limited class of networks that are labeled as "centralized networks with a single accreditation authority"

•	Uses only 4 rating levels: None, C1 (Minimum), C2 (Fair), B2 (Good)

TABLE 12.1: Important Rainbow Series Elements

Pub#             Title                                                                        Book Name
5200.28-STD      DoD Trusted Computer System Evaluation Criteria                              Orange Book
CSC-STD-002-85   DoD Password Management Guidelines                                           Green Book
CSC-STD-003-85   Guidance for Applying TCSEC in Specific Environments                         Yellow Book
NCSC-TG-001      A Guide to Understanding Audit in Trusted Systems                            Tan Book
NCSC-TG-002      Trusted Product Evaluation—A Guide for Vendors                               Bright Blue Book
NCSC-TG-002-85   PC Security Considerations                                                   Light Blue Book
NCSC-TG-003      A Guide to Understanding Discretionary Access Controls in Trusted Systems    Neon Orange Book
NCSC-TG-005      Trusted Network Interpretation                                               Red Book
NCSC-TG-004      Glossary of Computer Security Terms                                          Aqua Book
NCSC-TG-006      A Guide to Understanding Configuration Management in Trusted Systems         Amber Book
NCSC-TG-007      A Guide to Understanding Design Documentation in Trusted Systems             Burgundy Book
NCSC-TG-008      A Guide to Understanding Trusted Distribution in Trusted Systems             Lavender Book
NCSC-TG-009      Computer Security Subsystem Interpretation of the TCSEC                      Venice Blue Book

For more information, please consult http://csrc.ncsl.nist.gov/secpubs/rainbow/, where download links are available.

Green Book
The Green Book, or the Department of Defense Password Management Guidelines, provides password creation and management guidelines; it's important for those who configure
and manage trusted systems.

Given all the time and effort that went into formulating the TCSEC, it's not unreasonable to wonder why evaluation criteria have evolved to newer, more advanced standards. The relentless march of time and technology aside, these are the major critiques of TCSEC and help to explain why newer standards are now in use worldwide:

•	Although the TCSEC put considerable emphasis on controlling user access to information, they don't exercise control over what users do with information once access is granted. This can be a problem in both military and commercial applications alike.

•	Given their origins at the U.S. Department of Defense, it's understandable that the TCSEC focus their concerns entirely on confidentiality, which assumes that controlling how users access data means that concerns about data accuracy or integrity are irrelevant. This doesn't work in commercial environments where concerns about data accuracy and integrity can be more important than concerns about confidentiality.

•	Outside their own emphasis on access controls, the TCSEC do not carefully address the kinds of personnel, physical, and procedural policy matters or safeguards that must be exercised to fully implement security policy. They don't deal much with how such matters can impact system security either.

•	The Orange Book, per se, doesn't deal with networking issues (though the Red Book, developed later in 1987, does).

To some extent, these criticisms reflect the unique security concerns of the military, which developed the TCSEC. Then, too, the prevailing computing tools and technologies widely available at the time (networking was really just getting started in 1985) had an impact as well. Certainly, an increasingly sophisticated and holistic view of security within organizations helps to explain why and where the TCSEC also fell short, procedurally and policy-wise.
But because ITSEC has been largely superseded by the Common Criteria, coverage in the next section explains ITSEC as a step along the way toward the Common Criteria (covered in the section after that).

ITSEC Classes and Required Assurance and Functionality

The Information Technology Security Evaluation Criteria (ITSEC) represents an initial attempt to create security evaluation criteria in Europe. It was developed as an alternative to the TCSEC guidelines. The ITSEC guidelines evaluate the functionality and assurance of a system using separate ratings for each category. In this context, the functionality of a system measures its utility value for users. The functionality rating of a system states how well the system performs all necessary functions based on its design and intended purpose. The assurance rating represents the degree of confidence that the system will work properly in a consistent manner.

ITSEC refers to any system being evaluated as a target of evaluation (TOE). All ratings are expressed as TOE ratings in two categories. ITSEC uses two scales to rate functionality and assurance. The functionality of a system is rated from F-D through F-B3 (which is used twice; there is no F-A1). The assurance of a system is rated from E0 through E6. Most ITSEC ratings generally correspond with TCSEC ratings (for example, a TCSEC C1 system corresponds to an ITSEC F-C1, E1 system). See Table 12.3 (at the end of the next section) for a comparison of TCSEC, ITSEC, and Common Criteria ratings.

Differences between TCSEC and ITSEC are many and varied.
Some of the most important differences between the two standards include the following:

•	Although the TCSEC concentrates almost exclusively on confidentiality, ITSEC addresses concerns about the loss of integrity and availability in addition to confidentiality, thereby covering all three elements so important to maintaining complete information security.

•	ITSEC does not rely on the notion of a TCB, nor does it require that a system's security components be isolated within a TCB.

•	Unlike TCSEC, which required any changed systems to be reevaluated anew—be it for operating system upgrades, patches, or fixes; application upgrades or changes; and so forth—ITSEC includes coverage for maintaining targets of evaluation (TOEs) after such changes occur without requiring a new formal evaluation.

For more information on ITSEC (now largely supplanted by the Common Criteria, covered in the next section), please visit the official ITSEC website at www.cesg.gov.uk/site/iacs/, then click on the link labeled "ITSEC & Common Criteria."

Common Criteria

The Common Criteria represent a more or less global effort that involves everybody who worked on TCSEC and ITSEC as well as other global players. Ultimately, this effort results in the ability to purchase CC-evaluated products (where CC, of course, stands for Common Criteria). The Common Criteria define various levels of testing and confirmation of systems' security capabilities, where the number of the level indicates what kind of testing and confirmation has been performed.
Nevertheless, it's wise to observe that even the highest CC ratings do not equate to a guarantee that such systems are completely secure, nor that they are entirely devoid of vulnerabilities or susceptibility to exploit.

Recognition of Common Criteria

Caveats and disclaimers aside, a document entitled "Arrangement on the Recognition of Common Criteria Certificates in the Field of IT Security" was signed by representatives from government organizations in Canada, France, Germany, the United Kingdom, and the United States in 1998, making it an international standard. This document was converted by ISO into an official standard, namely IS 15408, "Evaluation Criteria for Information Technology Security." The objectives of the CC are as follows:

•	To add to buyers' confidence in the security of evaluated, rated IT products.

•	To eliminate duplicate evaluations (among other things, this means that if one country, agency, or validation organization follows the CC in rating specific systems and configurations, others elsewhere need not repeat this work).

•	To keep making security evaluations and the certification process more cost effective and efficient.

•	To make sure evaluations of IT products adhere to high and consistent standards.

•	To promote evaluation, and increase availability of evaluated, rated IT products.

•	To evaluate the functionality (i.e., what the system does) and assurance (i.e., how much can you trust the system) of the TOE.

The Common Criteria are available at many locations online. In the United States, the National Institute of Standards and Technology (NIST) maintains a CC web page at http://csrc.nist.gov/cc/.
Visit here to get information on the current version of the CC (2.1 as of this writing) and guidance on using the CC, along with lots of other useful, relevant information.

The Common Criteria process is based on two key elements: protection profiles and security targets. Protection profiles (PPs) specify the security requirements and protections of a product that is to be evaluated (the TOE); they are considered the security desires, or the "I want," from a customer. Security targets (STs) specify the claims of security from the vendor that are built into a TOE. STs are considered the implemented security measures, or the "I will provide," from the vendor. In addition to offering security targets, vendors may also offer packages of additional security features. A package is an intermediate grouping of security requirement components that can be added to or removed from a TOE (like the option packages when purchasing a new vehicle).

The PP is compared to various STs from the selected vendor's TOEs. The closest or best match is what the client purchases. The client initially selects a vendor based on published or marketed Evaluation Assurance Levels, or EALs (see the next section for more details on EALs), for currently available systems. Using the Common Criteria to choose a vendor allows clients to request exactly what they need for security rather than having to use static fixed security levels. It also allows vendors more flexibility on what they design and create. A well-defined set of common criteria supports subjectivity and versatility, and it automatically adapts to changing technology and threat conditions.
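The PP-to-ST comparison described above can be sketched in a few lines. The requirement identifiers and vendor names here are invented for illustration; they are not drawn from any real CC catalog:

```python
# Sketch of matching a customer's protection profile ("I want") against
# vendors' security targets ("I will provide"): pick the security target
# that covers the most PP requirements. All names are hypothetical.

protection_profile = {"audit", "ident_auth", "crypto_support", "trusted_path"}

security_targets = {
    "vendor_x": {"audit", "ident_auth"},
    "vendor_y": {"audit", "ident_auth", "crypto_support"},
}

def best_match(pp, targets):
    # Score each security target by how many PP requirements it claims.
    return max(targets, key=lambda vendor: len(pp & targets[vendor]))

print(best_match(protection_profile, security_targets))  # vendor_y
```

In practice the choice also weighs assurance (the published EAL), not just the count of matched requirements.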
Furthermore, the EALs provide a method for comparing vendor systems that is more standardized (like the old TCSEC).

Structure of the Common Criteria

The CC are divided into three topical areas, as follows (complete text for version 2.1 is available at NIST at http://csrc.nist.gov/cc/CC-v2.1.html, along with links to earlier versions):

Part 1: Introduction and General Model
Describes the general concepts and underlying model used to evaluate IT security and what's involved in specifying targets of evaluation (TOEs). It's useful introductory and explanatory material for those unfamiliar with the workings of the security evaluation process or who need help reading and interpreting evaluation results.

Part 2: Security Functional Requirements
Describes various functional requirements in terms of security audits, communications security, cryptographic support for security, user data protection, identification and authentication, security management, TOE security functions (TSFs), resource utilization, system access, and trusted paths. Covers the complete range of security functions as envisioned in the CC evaluation process, with additional appendices (called annexes) to explain each functional area.

Part 3: Security Assurance
Covers assurance requirements for TOEs in the areas of configuration management, delivery and operation, development, guidance documents, and life cycle support plus assurance tests and vulnerability assessments.
Covers the complete range of security assurance checks and protection profiles as envisioned in the CC evaluation process, with information on evaluation assurance levels (EALs) that describe how systems are designed, checked, and tested.

Most important of all the information that appears in these various CC documents (worth at least a cursory read-through) are the evaluation assurance packages, or levels, commonly known as EALs. Table 12.2 summarizes EALs 1 through 7.

TABLE 12.2: CC Evaluation Assurance Levels

EAL1 (Functionally tested): Applies when some confidence in correct operation is required but where threats to security are not serious. Of value when independent assurance that due care has been exercised in protecting personal information is required.

EAL2 (Structurally tested): Applies when delivery of design information and test results are in keeping with good commercial practices. Of value when developers or users require low to moderate levels of independently assured security. Especially relevant when evaluating legacy systems.

EAL3 (Methodically tested and checked): Applies when security engineering begins at the design stage and is carried through without substantial subsequent alteration. Of value when developers or users require a moderate level of independently assured security, including thorough investigation of the TOE and its development.

EAL4 (Methodically designed, tested, and reviewed): Applies when rigorous, positive security engineering and good commercial development practices are used. Does not require substantial specialist knowledge, skills, or resources.
Involves independent testing of all TOE security functions.

EAL5 (Semi-formally designed and tested): Uses rigorous security engineering and commercial development practices, including specialist security engineering techniques, for semi-formal testing. Applies when developers or users require a high level of independently assured security in a planned development approach, followed by rigorous development.

EAL6 (Semi-formally verified, designed, and tested): Uses direct, rigorous security engineering techniques at all phases of design, development, and testing to produce a premium TOE. Applies when TOEs for high-risk situations are needed, where the value of protected assets justifies additional cost. Extensive testing reduces risks of penetration, the probability of covert channels, and vulnerability to attack.

EAL7 (Formally verified, designed, and tested): Used only for highest-risk situations or where high-value assets are involved. Limited to TOEs where tightly focused security functionality is subject to extensive formal analysis and testing.

For a complete description of EALs, consult Chapter 6 in part 3 of the CC documents; page 54 is especially noteworthy since it explains all EALs in terms of the CC's assurance criteria.

Chapter 12 • Principles of Security Models

Though the CC are flexible and accommodating enough to capture most security needs and requirements, they are by no means perfect. As with other evaluation criteria, the CC do nothing to make sure that how users act on data is also secure. The CC also do not address administrative issues outside the specific purview of security. As with other evaluation criteria, the CC do not include evaluation of security in situ; that is, they do not address controls related to personnel, organizational practices and procedures, or physical security.
Likewise, controls over electromagnetic emissions are not addressed, nor are the criteria for rating the strength of cryptographic algorithms explicitly laid out. Nevertheless, the CC represent some of the best techniques whereby systems may be rated for security. To conclude this discussion of security evaluation standards, Table 12.3 summarizes how various ratings from the TCSEC, ITSEC, and the CC may be compared.

Certification and Accreditation

Organizations that require secure systems need one or more methods to evaluate how well a system meets their security requirements. The formal evaluation process is divided into two phases, called certification and accreditation. The actual steps required in each phase depend on the evaluation criteria an organization chooses. A CISSP candidate must understand the need for each phase and the criteria commonly used to evaluate systems. The two evaluation phases are discussed in the next two sections, and then we present various evaluation criteria and considerations you must address when assessing the security of a system.

The process of evaluation provides a way to assess how well a system measures up to a desired level of security. Because each system's security level depends on many factors, all of them must be taken into account during the evaluation. Even though a system is initially described as secure, the installation process, physical environment, and general configuration details all contribute to its true general security. Two identical systems could be assessed at different levels of security due to configuration or installation differences.

TABLE 12.3 Comparing Security Evaluation Standards

TCSEC   ITSEC     CC           Designation
D       F-D+E0    EAL0, EAL1   Minimal/no protection
C1      F-C1+E1   EAL2         Discretionary security mechanisms
C2      F-C2+E2   EAL3         Controlled access protection
B1      F-B1+E3   EAL4         Labeled security protection
B2      F-B2+E4   EAL5         Structured security protection
B3      F-B3+E5   EAL6         Security domains
A1      F-B3+E6   EAL7         Verified security design

The terms certification, accreditation, and maintenance used in the following sections are official terms used by the defense establishment, and you should be familiar with them.

Certification and accreditation are additional steps in the software and IT systems development process normally required from defense contractors and others working in a military environment. The official definitions of these terms as used by the U.S. government are from Department of Defense Instruction 5200.40, Enclosure 2.

Certification

The first phase in a total evaluation process is certification. Certification is the comprehensive evaluation of the technical and nontechnical security features of an IT system and other safeguards made in support of the accreditation process to establish the extent to which a particular design and implementation meets a set of specified security requirements.

System certification is the technical evaluation of each part of a computer system to assess its concordance with security standards. First, you must choose evaluation criteria (we will present criteria alternatives in later sections). Once you select criteria to use, you analyze each system component to determine whether or not it satisfies the desired security goals. The certification analysis includes testing the system's hardware, software, and configuration.
All controls are evaluated during this phase, including administrative, technical, and physical controls.

After you assess the entire system, you can evaluate the results to determine the security level the system supports in its current environment. The environment of a system is a critical part of the certification analysis, so a system can be more or less secure depending on its surroundings. The manner in which you connect a secure system to a network can change its security standing. Likewise, the physical security surrounding a system can affect the overall security rating. You must consider all factors when certifying a system.

You complete the certification phase when you have evaluated all factors and determined the level of security for the system. Remember that the certification is only valid for a system in a specific environment and configuration. Any changes could invalidate the certification. Once you have certified a security rating for a specific configuration, you are ready to seek acceptance of the system. Management accepts the certified security configuration of a system through the accreditation process.

Accreditation

In the certification phase, you test and document the security capabilities of a system in a specific configuration. With this information in hand, the management of an organization compares the capabilities of a system to the needs of the organization. It is imperative that the security policy clearly states the requirements of a security system. Management reviews the certification information and decides if the system satisfies the security needs of the organization. If management decides the certification of the system satisfies their needs, the system is accredited.
Accreditation is the formal declaration by the Designated Approving Authority (DAA) that an IT system is approved to operate in a particular security mode using a prescribed set of safeguards at an acceptable level of risk. Once accreditation is performed, management can formally accept the adequacy of the overall security performance of an evaluated system.

The process of certification and accreditation is often iterative. In the accreditation phase, it is not uncommon to request changes to the configuration or additional controls to address security concerns. Remember that whenever you change the configuration, you must recertify the new configuration. Likewise, you need to recertify the system when a specific time period elapses or when you make any configuration changes. Your security policy should specify what conditions require recertification. A sound policy would list the amount of time a certification is valid along with any changes that would require you to restart the certification and accreditation process.

Certification and Accreditation Systems

There are two government standards currently in place for the certification and accreditation of computing systems: The DoD standard is the Defense Information Technology Security Certification and Accreditation Process (DITSCAP), and the standard for all U.S. government executive branch departments, agencies, and their contractors and consultants is the National Information Assurance Certification and Accreditation Process (NIACAP).
Both of these processes are divided into four phases:

Phase 1: Definition. Involves the assignment of appropriate project personnel; documentation of the mission need; and registration, negotiation, and creation of a System Security Authorization Agreement (SSAA) that guides the entire certification and accreditation process.

Phase 2: Verification. Includes refinement of the SSAA, systems development activities, and a certification analysis.

Phase 3: Validation. Includes further refinement of the SSAA, certification evaluation of the integrated system, development of a recommendation to the DAA, and the DAA's accreditation decision.

Phase 4: Post Accreditation. Includes maintenance of the SSAA, system operation, change management, and compliance validation.

These phases are adapted from Department of Defense Instruction 5200.40, Enclosure 3.

The NIACAP process, administered by the Information Systems Security Organization of the National Security Agency, outlines three different types of accreditation that may be granted. The definitions of these types of accreditation (from National Security Telecommunications and Information Systems Security Instruction 1000) are as follows:

• For a system accreditation, a major application or general support system is evaluated.
• For a site accreditation, the applications and systems at a specific, self-contained location are evaluated.
• For a type accreditation, an application or system that is distributed to a number of different locations is evaluated.

Common Flaws and Security Issues

No security architecture is complete and totally secure. There are weaknesses and vulnerabilities in every computer system. The goal of security models and architectures is to address as many known weaknesses as possible.
This section presents some of the more common security issues that affect computer systems. You should understand each of the issues and how they can degrade the overall security of your system. Some issues and flaws overlap one another and are used in creative ways to attack systems. Although the following discussion covers the most common flaws, the list is not exhaustive. Attackers are very clever.

Covert Channels

A covert channel is a method that is used to pass information and that is not normally used for communication. Because the path is not normally used for communication, it may not be protected by the system's normal security controls. Using a covert channel provides a means to violate, bypass, or circumvent a security policy undetected. As you might imagine, a covert channel is the opposite of an overt channel. An overt channel is a known, expected, authorized, designed, monitored, and controlled method of communication.

There are two basic types of covert channels:

• A covert timing channel conveys information by altering the performance of a system component or modifying a resource's timing in a predictable manner. Using a covert timing channel is generally a more sophisticated method of covertly passing data and is very difficult to detect.
• A covert storage channel conveys information by writing data to a common storage area where another process can read it. Be alert to any process that writes to any area of memory that another process can read.

Both types of covert channels rely on the use of communication techniques to exchange information with otherwise unauthorized subjects. Because the nature of the covert channel is that it is unusual and outside the normal data transfer environment, detecting it can be difficult.
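To make the idea of a covert storage channel concrete, here is a toy, single-process Python simulation (entirely hypothetical, for intuition only; real covert channels operate between separately confined processes): a "sender" leaks one bit per round through the mere existence of an innocuous scratch file, and a "receiver" recovers the secret without ever touching the protected data through an overt, monitored channel.

```python
import os
import tempfile

# Simulated covert *storage* channel: the sender signals each bit by
# creating or removing a scratch file; the receiver only checks whether
# the file exists. No normal read permission on the secret is exercised.

def to_bits(data: bytes):
    # Least-significant bit first, eight bits per byte
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def from_bits(bits):
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for j, bit in enumerate(bits[i:i + 8]):
            byte |= bit << j
        out.append(byte)
    return bytes(out)

def covert_exchange(secret: bytes) -> bytes:
    flag = os.path.join(tempfile.mkdtemp(), "scratch.tmp")
    received = []
    for bit in to_bits(secret):
        if bit:
            open(flag, "w").close()     # flag present  -> bit 1
        elif os.path.exists(flag):
            os.remove(flag)             # flag absent   -> bit 0
        received.append(1 if os.path.exists(flag) else 0)  # receiver side
    return from_bits(received)

print(covert_exchange(b"top secret"))
```

Notice that the only observable events are file creations and deletions, which is exactly why auditing and log analysis are the practical countermeasure.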
The best defense is to implement auditing and analyze log files for any covert channel activity.

The lowest level of security that addresses covert channels is B2 (F4+E4 for ITSEC, EAL5 for CC). All levels at or above level B2 must contain controls that detect and prohibit covert channels.

Attacks Based on Design or Coding Flaws and Security Issues

Certain attacks may result from poor design techniques, questionable implementation practices and procedures, or poor or inadequate testing. Some attacks result from deliberate design decisions: special points of entry, built into code during development to circumvent access controls, login, or other security checks, are sometimes not removed when that code is put into production. For what we hope are obvious reasons, such points of entry are properly called back doors because they avoid security measures by design (they're covered in a later section in this chapter, titled "Maintenance Hooks and Privileged Programs"). Extensive testing and code review are required to uncover such covert means of access, which are incredibly easy to remove during final phases of development but can be incredibly difficult to detect during testing or maintenance phases.

Although functionality testing is commonplace for commercial code and applications, separate testing for security issues has only been gaining attention and credibility in the past few years, courtesy of widely publicized virus and worm attacks and occasional defacements of or disruptions to widely used public sites online. In the sections that follow, we cover common sources of attack or security vulnerability that can be attributed to failures in design, implementation, pre-release code cleanup, or out-and-out coding mistakes.
Such flaws are avoidable, but finding and fixing them requires rigorous security-conscious design from the beginning of a development project and extra time and effort spent in testing and analysis. While this helps to explain the often lamentable state of software security, it does not excuse it!

Initialization and Failure States

When an unprepared system crashes and subsequently recovers, two opportunities to compromise its security controls may arise during that process. Many systems unload security controls as part of their shutdown procedures. Trusted recovery ensures that all controls remain intact in the event of a crash. During a trusted recovery, the system ensures that there are no opportunities for access to occur when security controls are disabled. Even the recovery phase runs with all controls intact.

For example, suppose a system crashes while a database transaction is being written to disk for a database classified as top secret. An unprotected system might allow an unauthorized user to access that temporary data before it gets written to disk. A system that supports trusted recovery ensures that no data confidentiality violations occur, even during the crash. This process requires careful planning and detailed procedures for handling system failures. Although automated recovery procedures may make up a portion of the entire recovery, manual intervention may still be required. Obviously, if such manual action is needed, appropriate identification and authentication for personnel performing recovery is likewise essential.

Input and Parameter Checking

One of the most notorious security violations is called a buffer overflow. This violation occurs when programmers fail to validate input data sufficiently, particularly when they do not impose a limit on the amount of data their software will accept as input.
Because such data is usually stored in an input buffer, when the normal maximum size of the buffer is exceeded, the extra data is called overflow. Thus, the type of attack that results when someone attempts to supply malicious instructions or code as part of program input is called a buffer overflow. Unfortunately, in many systems such overflow data is often executed directly by the system under attack at a high level of privilege or at whatever level of privilege attaches to the process accepting such input. For nearly all types of operating systems, including Windows, Unix, Linux, and others, buffer overflows expose some of the most glaring and profound opportunities for compromise and attack of any kind of known security vulnerability.

The party responsible for a buffer overflow vulnerability is always the programmer who wrote the offending code. Due diligence from programmers can eradicate buffer overflows completely, but only if programmers check all input and parameters before storing them in any data structure (and limit how much data can be proffered as input). Proper data validation is the only way to do away with buffer overflows. Otherwise, discovery of buffer overflows leads to a familiar pattern of critical security updates that must be applied to affected systems to close the point of attack.

Checking Code for Buffer Overflows

In early 2002, Bill Gates acted in his traditional role as the archetypal Microsoft spokesperson when he announced something he called the "Trustworthy Computing Initiative," a series of design philosophy changes intended to beef up the often questionable standing of Microsoft's operating systems and applications when viewed from a security perspective.
As discussion on this subject continued through 2002 and 2003, the topic of buffer overflows occurred repeatedly (more often, in fact, than Microsoft Security Bulletins reported security flaws related to this kind of problem, which is among the most serious yet most frequently reported types of programming errors with security implications). As is the case for many other development organizations and also for the builders of software development environments (the software tools that developers use to create other software), increased awareness of buffer overflow exploits has caused changes at many stages during the development process:

• Designers must specify bounds for input data or state acceptable input values and set hard limits on how much data will be accepted, parsed, and handled when input is solicited.
• Developers must follow such limitations when building code that solicits, accepts, and handles input.
• Testers must check to make sure that buffer overflows can't occur and attempt to circumvent or bypass security settings when testing input-handling code.

In his book Secrets & Lies, noted information security expert Bruce Schneier makes a great case that security testing is in fact quite different from standard testing activities like unit testing, module testing, acceptance testing, and quality assurance checks (see the glossary) that software companies have routinely performed as part of the development process for years and years.
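The bounds-checking discipline described in the list above can be sketched in a few lines. This is an illustrative Python fragment (the limit value and function are hypothetical), not how overflows actually occur: Python raises exceptions rather than overrunning memory, whereas in C the analogous length check must be written explicitly before any copy into a fixed-size buffer.

```python
MAX_INPUT = 64  # hard limit set at design time (hypothetical value)

def store_input(buffer: bytearray, data: bytes) -> None:
    """Copy untrusted input into a fixed-size buffer, refusing overflow."""
    if len(data) > len(buffer):
        # Reject oversized input instead of writing past the buffer's end
        raise ValueError(
            f"input of {len(data)} bytes exceeds {len(buffer)}-byte buffer")
    buffer[:len(data)] = data          # copy only after the length check

buf = bytearray(MAX_INPUT)
store_input(buf, b"hello")             # fits: accepted
try:
    store_input(buf, b"A" * 1000)      # oversized: rejected, never copied
except ValueError as err:
    print(err)
```

The design-time limit, the pre-copy check, and the oversized test case correspond directly to the designer, developer, and tester responsibilities listed above.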
What's not yet clear at Microsoft (and at other development companies as well, to be as fair to the colossus of Redmond as possible) is whether this change in design and test philosophy equates to the right kind of rigor necessary to foil all buffer overflows (some of the most serious security holes Microsoft reported as recently as April 2005 clearly invoke "buffer overruns" or identify the cause of the vulnerability as an "unchecked buffer").

Maintenance Hooks and Privileged Programs

Maintenance hooks are entry points into a system that are known only by the developer of the system. Such entry points are also called back doors. Although the existence of maintenance hooks is a clear violation of security policy, they still pop up in many systems. The original purpose of back doors was to provide guaranteed access to the system for maintenance reasons or if regular access was inadvertently disabled. The problem is that this type of access bypasses all security controls and provides free access to anyone who knows that the back doors exist. It is imperative that you explicitly prohibit such entry points and monitor your audit logs to uncover any activity that may indicate unauthorized administrator access.

Another common system vulnerability is the practice of executing a program whose security level is elevated during execution. Such programs must be carefully written and tested so they do not allow any exit and/or entry points that would leave a subject with a higher security rating. Ensure that all programs that operate at a high security level are accessible only to appropriate users and that they are hardened against misuse.

Incremental Attacks

Some forms of attack occur in slow, gradual increments rather than through obvious or recognizable attempts to compromise system security or integrity.
Two such forms of attack are called data diddling and the salami attack. Data diddling occurs when an attacker gains access to a system and makes small, random, or incremental changes to data during storage, processing, input, output, or transaction rather than obviously altering file contents or damaging or deleting entire files. Such changes can be difficult to detect unless files and data are protected by encryption or some kind of integrity check (such as a checksum or message digest) is routinely performed and applied each time a file is read or written. Encrypted file systems, file-level encryption techniques, or some form of file monitoring (which includes integrity checks like those performed by applications such as Tripwire) usually offer adequate guarantees that no data diddling is underway. Data diddling is considered an attack performed more often by insiders than by outsiders (i.e., external intruders). It should be obvious that since data diddling is an attack that alters data, it is considered an active attack.

The salami attack is more apocryphal, by all published reports. The name of the attack refers to a systematic whittling away at assets in accounts or other records with financial value, where very small amounts are deducted from balances regularly and routinely. Metaphorically, the attack may be explained as stealing a very thin slice from a salami each time it's put on the slicing machine to serve a paying customer. In reality, though no documented examples of such an attack are available, most security experts concede that salami attacks are possible, especially when organizational insiders could be involved. Only by proper separation of duties and proper control over code can organizations completely prevent or eliminate such an attack.
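The checksum/message-digest integrity check mentioned above for catching data diddling, the kind of monitoring that tools such as Tripwire automate, can be sketched in Python. The file name and the one-cent change here are hypothetical, purely for illustration.

```python
import hashlib
import os
import tempfile
from pathlib import Path

def baseline(paths):
    """Record a SHA-256 digest for each file (the trusted baseline)."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in paths}

def changed_files(paths, trusted):
    """Return files whose current digest no longer matches the baseline."""
    return [p for p in paths
            if hashlib.sha256(Path(p).read_bytes()).hexdigest() != trusted[p]]

# Hypothetical demonstration with a scratch file:
path = os.path.join(tempfile.mkdtemp(), "ledger.txt")
Path(path).write_text("balance=1000.00")
trusted = baseline([path])                 # snapshot while data is known-good
Path(path).write_text("balance=1000.01")   # a one-cent "diddle"
print(changed_files([path], trusted))      # the tampered file is flagged
```

Running the comparison on a schedule (and protecting the baseline itself from tampering) is what turns this sketch into a useful detective control.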
Setting financial transaction monitors to track very small transfers of funds or other items of value should help to detect such activity; regular employee notification of the practice should help to discourage attempts at such attacks.

If you'd like an entertaining method of learning about the salami attack or the salami technique, view the movies Office Space, Sneakers, and Superman III.

Programming

We have already mentioned the biggest flaw in programming: the buffer overflow, which comes from the programmer failing to check the format and/or the size of input data. There are other potential flaws with programs. Any program that does not handle every exception gracefully is in danger of exiting in an unstable state. It is possible to cleverly crash a program after it has increased its security level to carry out a normal task. If an attacker is successful in crashing the program at the right time, they can attain the higher security level and cause damage to the confidentiality, integrity, and availability of your system.

All programs that are executed directly or indirectly must be fully tested to comply with your security model. Make sure you have the latest version of any software installed and be aware of any known security vulnerabilities. Because each security model, and each security policy, is different, you must ensure that the software you execute does not exceed the authority you allow. Writing secure code is difficult, but it's certainly possible. Make sure all programs you use are designed to address security concerns.

Timing, State Changes, and Communication Disconnects

Computer systems perform tasks with rigid precision. Computers excel at repeatable tasks. Attackers can develop attacks based on the predictability of task execution.
The common sequence of events for an algorithm is to check that a resource is available and then access it if you are permitted. The time-of-check (TOC) is the time at which the subject checks on the status of the object. There may be several decisions to make before returning to the object to access it. When the decision is made to access the object, the procedure accesses it at the time-of-use (TOU). The difference between the TOC and the TOU is sometimes large enough for an attacker to replace the original object with another object that suits their own needs. Time-of-check-to-time-of-use (TOCTTOU) attacks are often called race conditions because the attacker is racing with the legitimate process to replace the object before it is used.

A classic example of a TOCTTOU attack is replacing a data file after its identity has been verified but before data is read. By replacing one authentic data file with another file of the attacker's choosing and design, an attacker can potentially direct the actions of a program in many ways. Of course, the attacker would have to have in-depth knowledge of the program and system under attack.

Likewise, attackers can attempt to take action between two known states when the state of a resource or the entire system changes. Communication disconnects also provide small windows that an attacker might seek to exploit. Any time a status check of a resource precedes action on the resource, a window of opportunity exists for a potential attack in the brief interval between check and action. These attacks must be addressed in your security policy and in your security model.

Electromagnetic Radiation

Simply because of the kinds of electronic components from which they're built, many computer hardware devices emit electromagnetic radiation during normal operation. The process of communicating with other machines or peripheral equipment creates emanations that can be intercepted.
It's even possible to re-create keyboard input or monitor output by intercepting and processing electromagnetic radiation from the keyboard and computer monitor. You can also detect and read network packets passively (that is, without actually tapping into the cable) as they pass along a network segment. These emanation leaks can cause serious security issues but are generally easy to address.

The easiest way to eliminate electromagnetic radiation interception is to reduce emanation through cable shielding or conduit and to block unauthorized personnel and devices from getting too close to equipment or cabling by applying physical security controls. By reducing the signal strength and increasing the physical buffer around sensitive equipment, you can dramatically reduce the risk of signal interception.

Summary

Secure systems are not just assembled; they are designed to support security. Systems that must be secure are judged for their ability to support and enforce the security policy. This process of evaluating the effectiveness of a computer system is called certification. The certification process is the technical evaluation of a system's ability to meet its design goals. Once a system has satisfactorily passed the technical evaluation, the management of an organization begins the formal acceptance of the system. The formal acceptance process is called accreditation.

The entire certification and accreditation process depends on standard evaluation criteria. Several criteria exist for evaluating computer security systems. The earliest criteria, TCSEC, were developed by the U.S. Department of Defense. TCSEC, also called the Orange Book, provides criteria to evaluate the functionality and assurance of a system's security components. ITSEC is an alternative to the TCSEC guidelines and is used more often in European countries.
Regardless of which criteria you use, the evaluation process includes reviewing each security control for compliance with the security policy. The better a system enforces the good behavior of subjects' access to objects, the higher the security rating.

When security systems are designed, it is often helpful to create a security model to represent the methods the system will use to implement the security policy. We discussed three security models in this chapter. The earliest model, the Bell-LaPadula model, supports data confidentiality only. It was designed for the military and satisfies military concerns. The Biba model and the Clark-Wilson model address the integrity of data and do so in different ways. The latter two security models are appropriate for commercial applications.

No matter how sophisticated a security model is, flaws exist that attackers can exploit. Some flaws, such as buffer overflows and maintenance hooks, are introduced by programmers, whereas others, such as covert channels, are architectural design issues. It is important to understand the impact of such issues and modify the security architecture when appropriate to compensate.

Exam Essentials

Know the definitions of certification and accreditation. Certification is the technical evaluation of each part of a computer system to assess its concordance with security standards. Accreditation is the process of formal acceptance of a certified configuration.

Be able to describe open and closed systems. Open systems are designed using industry standards and are usually easy to integrate with other open systems. Closed systems are generally proprietary hardware and/or software.
Their specifications are not normally published and they are usually harder to integrate with other systems.

Know what confinement, bounds, and isolation are. Confinement restricts a process to reading from and writing to certain memory locations. Bounds are the limits of memory a process cannot exceed when reading or writing. Isolation is the mode a process runs in when it is confined through the use of memory bounds.

Be able to define object and subject in terms of access. The subject of an access is the user or process that makes a request to access a resource. The object of an access request is the resource a user or process wishes to access.

Know how security controls work and what they do. Security controls use access rules to limit the access by a subject to an object.

Be able to list the classes of TCSEC, ITSEC, and the Common Criteria. The classes of TCSEC include A: Verified protection; B: Mandatory protection; C: Discretionary protection; and D: Minimal protection. Table 12.3 covers and compares equivalent and applicable rankings for TCSEC, ITSEC, and the CC (remember that functionality ratings from F7 to F10 in ITSEC have no corresponding ratings in TCSEC).

Define a trusted computing base (TCB). A TCB is the combination of hardware, software, and controls that form a trusted base that enforces the security policy.

Be able to explain what a security perimeter is. A security perimeter is the imaginary boundary that separates the TCB from the rest of the system. TCB components communicate with non-TCB components using trusted paths.

Know what the reference monitor and the security kernel are. The reference monitor is the logical part of the TCB that confirms whether a subject has the right to use a resource prior to granting access.
The security kernel is the collection of the TCB components that implement the functionality of the reference monitor.

Describe the Bell-LaPadula security model.
The Bell-LaPadula security model was developed in the 1970s to address military concerns over unauthorized access to secret data. It is built on a state machine model and ensures the confidentiality of protected data.

Describe the Biba integrity model.
The Biba integrity model was designed to ensure the integrity of data. It is very similar to the Bell-LaPadula model, but its properties ensure that data is not corrupted by subjects accessing objects at different security levels.

Describe the Clark-Wilson security model.
The Clark-Wilson security model ensures data integrity as the Biba model does, but it does so using a different approach. Instead of being built on a state machine, the Clark-Wilson model uses object access restrictions to allow only specific programs to modify objects. Clark-Wilson also enforces the separation of duties, which further protects the data integrity.

Describe the difference between certification and accreditation and the various types of accreditation.
Understand the certification and accreditation processes used by the U.S. Department of Defense and all other executive government agencies. Describe the differences between system accreditation, site accreditation, and type accreditation.

Be able to explain what covert channels are.
A covert channel is any method that is used to pass information but that is not normally used for communication.

Understand what buffer overflows and input checking are.
A buffer overflow occurs when the programmer fails to check the size of input data prior to writing the data into a specific memory location.
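The missing bounds check just described can be illustrated with a short sketch. This is not from the book; Python itself is not subject to C-style buffer overflows, so the example only shows the validation pattern a programmer should apply before writing input into a fixed-size buffer.

```python
# Illustrative sketch: validate input size BEFORE writing into a fixed-size
# buffer -- the step whose omission causes a classic buffer overflow.

BUFFER_SIZE = 64

def write_to_buffer(buffer: bytearray, data: bytes) -> None:
    """Copy data into a fixed-size buffer, rejecting oversized input."""
    if len(data) > len(buffer):
        # The bounds check: reject input that would overflow the buffer.
        raise ValueError(
            f"input of {len(data)} bytes exceeds {len(buffer)}-byte buffer"
        )
    buffer[:len(data)] = data

buf = bytearray(BUFFER_SIZE)
write_to_buffer(buf, b"hello")           # fits: accepted
try:
    write_to_buffer(buf, b"A" * 1000)    # oversized: rejected, never written
except ValueError as err:
    print("rejected:", err)
```

In a language like C, the unchecked copy would silently overwrite adjacent memory; the point of the sketch is that the size comparison must happen before any write occurs.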
In fact, any failure to validate input data could result in a security violation.

Describe common flaws to security architectures.
In addition to buffer overflows, programmers can leave back doors and privileged programs on a system after it is deployed. Even well-written systems can be susceptible to time-of-check-to-time-of-use (TOCTTOU) attacks. Any state change could be a potential window of opportunity for an attacker to compromise a system.

Review Questions

1. What is system certification?
A. Formal acceptance of a stated system configuration
B. A technical evaluation of each part of a computer system to assess its compliance with security standards
C. A functional evaluation of the manufacturer's goals for each hardware and software component to meet integration standards
D. A manufacturer's certificate stating that all components were installed and configured correctly

2. What is system accreditation?
A. Formal acceptance of a stated system configuration
B. A functional evaluation of the manufacturer's goals for each hardware and software component to meet integration standards
C. Acceptance of test results that prove the computer system enforces the security policy
D. The process to specify secure communication between machines

3. What is a closed system?
A. A system designed around final, or closed, standards
B. A system that includes industry standards
C. A proprietary system that uses unpublished protocols
D. Any machine that does not run Windows

4. Which best describes a confined process?
A. A process that can run only for a limited time
B. A process that can run only during certain times of the day
C. A process that can access only certain memory locations
D. A process that controls access to an object

5. What is an access object?
A. A resource a user or process wishes to access
B. A user or process that wishes to access a resource
C.
A list of valid access rules
D. The sequence of valid access types

6. What is a security control?
A. A security component that stores attributes that describe an object
B. A document that lists all data classification types
C. A list of valid access rules
D. A mechanism that limits access to an object

7. For what type of information system security accreditation are the applications and systems at a specific, self-contained location evaluated?
A. System accreditation
B. Site accreditation
C. Application accreditation
D. Type accreditation

8. How many major categories do the TCSEC criteria define?
A. Two
B. Three
C. Four
D. Five

9. What is a trusted computing base (TCB)?
A. Hosts on your network that support secure transmissions
B. The operating system kernel and device drivers
C. The combination of hardware, software, and controls that work together to enforce a security policy
D. The software and controls that certify a security policy

10. What is a security perimeter? (Choose all that apply.)
A. The boundary of the physically secure area surrounding your system
B. The imaginary boundary that separates the TCB from the rest of the system
C. The network where your firewall resides
D. Any connections to your computer system

11. What part of the TCB validates access to every resource prior to granting the requested access?
A. TCB partition
B. Trusted library
C. Reference monitor
D. Security kernel

12. What is the best definition of a security model?
A. A security model states policies an organization must follow.
B. A security model provides a framework to implement a security policy.
C. A security model is a technical evaluation of each part of a computer system to assess its concordance with security standards.
D.
A security model is the process of formal acceptance of a certified configuration.

13. Which security models are built on a state machine model?
A. Bell-LaPadula and Take-Grant
B. Biba and Clark-Wilson
C. Clark-Wilson and Bell-LaPadula
D. Bell-LaPadula and Biba

14. Which security model(s) address(es) data confidentiality?
A. Bell-LaPadula
B. Biba
C. Clark-Wilson
D. Both A and B

15. Which Bell-LaPadula property keeps lower-level subjects from accessing objects with a higher security level?
A. * (star) Security Property
B. No write up property
C. No read up property
D. No read down property

16. What is a covert channel?
A. A method that is used to pass information and that is not normally used for communication
B. Any communication used to transmit secret or top secret data
C. A trusted path between the TCB and the rest of the system
D. Any channel that crosses the security perimeter

17. What term describes an entry point into a system that only the developer knows about?
A. Maintenance hook
B. Covert channel
C. Buffer overflow
D. Trusted path

18. What is the time-of-check?
A. The length of time it takes a subject to check the status of an object
B. The time at which the subject checks on the status of the object
C. The time at which a subject accesses an object
D. The time between checking and accessing an object

19. How can electromagnetic radiation be used to compromise a system?
A. Electromagnetic radiation can be concentrated to disrupt computer operation.
B. Electromagnetic radiation makes some protocols inoperable.
C. Electromagnetic radiation can be intercepted.
D. Electromagnetic radiation is necessary for some communication protocol protection schemes to work.

20. What is the most common programmer-generated security flaw?
A. TOCTTOU vulnerability
B.
Buffer overflow
C. Inadequate control checks
D. Improper logon authentication

Answers to Review Questions

1. B. A system certification is a technical evaluation. Option A describes system accreditation. Options C and D refer to manufacturer standards, not implementation standards.

2. A. Accreditation is the formal acceptance process. Option B is not an appropriate answer because it addresses manufacturer standards. Options C and D are incorrect because there is no way to prove that a configuration enforces a security policy and accreditation does not entail secure communication specification.

3. C. A closed system is one that uses largely proprietary or unpublished protocols and standards. Options A and D do not describe any particular systems, and Option B describes an open system.

4. C. A confined (constrained) process is one that can access only certain memory locations. Options A, B, and D do not describe a confined process.

5. A. An object is a resource a user or process wishes to access. Option B describes an access subject, not an object.

6. D. A control limits access to an object to protect it from misuse by unauthorized users.

7. B. The applications and systems at a specific, self-contained location are evaluated for DITSCAP and NIACAP site accreditation.

8. C. TCSEC defines four major categories: Category A is verified protection, category B is mandatory protection, category C is discretionary protection, and category D is minimal protection.

9. C. The TCB is the part of your system you can trust to support and enforce your security policy.

10. A, B. Although the most correct answer in the context of this chapter is B, option A is also a correct answer in the context of physical security.

11. C. Options A and B are not valid TCB components.
Option D, the security kernel, is the collection of TCB components that work together to implement the reference monitor functions.

12. B. Option B is the only option that correctly defines a security model. Options A, C, and D define part of a security policy and the certification and accreditation process.

13. D. The Bell-LaPadula and Biba models are built on the state machine model.

14. A. Only the Bell-LaPadula model addresses data confidentiality. The other models address data integrity.

15. C. The no read up property, also called the simple security property, prohibits subjects from reading a higher security level object.

16. A. A covert channel is any method that is used to secretly pass data and that is not normally used for communication. All of the other options describe normal communication channels.

17. A. An entry point into a system that only the developer knows about is a maintenance hook, or back door.

18. B. Option B defines the time-of-check (TOC), which is the time at which a subject verifies the status of an object.

19. C. If a receiver is in close enough proximity to an electromagnetic radiation source, the radiation can be intercepted.

20. B. By far, the buffer overflow is the most common, and most avoidable, programmer-generated vulnerability.

Chapter 13
Administrative Management

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
• Operations Security Concepts
• Handling of Media
• Types of Security Controls
• Operations Security Controls

All companies must take into account the issues that can make day-to-day operations susceptible to breaches in security. Personnel management is a form of administrative control, or administrative management, and is an important factor in maintaining operations security.
Clearly defined personnel management practices must be included in your security policy and subsequent formalized security structure documentation (i.e., standards, guidelines, and procedures).

Operations security topics are related to personnel management because personnel management can directly affect security and daily operations. They are included in the Operations Security domain of the Common Body of Knowledge (CBK) for the CISSP certification exam, which deals with topics and issues related to maintaining an established secure IT environment. Operations security is concerned with maintaining the IT infrastructure after it has been designed and deployed and involves using hardware controls, media controls, and subject (user) controls that are designed to protect against asset threats.

This domain is discussed in this chapter and further in the following chapter (Chapter 14, "Auditing and Monitoring"). Be sure to read and study both chapters to ensure your understanding of the essential operations security material, including antivirus management.

Operations Security Concepts

The primary purpose of operations security is to safeguard information assets that reside in a system on a day-to-day basis, to identify and address any vulnerabilities in the system, and finally, to prevent any exploitation of threats. Administrators often refer to the relationship among assets, vulnerabilities, and threats as the operations security triple.
The challenge of operations security lies in how you address this triple.

The Operations Security domain is a broad collection of many concepts that are both distinct and interrelated, including antivirus management, operational assurance, backup maintenance, changes in location, privileges, trusted recovery, configuration and change management control, due care and due diligence, privacy, security, and operations controls.

The following sections highlight these important day-to-day issues that affect company operations by discussing them in relation to maintaining security.

Antivirus Management

Viruses are the most common form of security breach in the IT world. Any communications pathway can be, and is being, exploited as a delivery mechanism for a virus or other malicious code. Viruses are distributed via e-mail (the most common means), websites, and documents, and even within commercial software. Antivirus management is the design, deployment, and maintenance of an antivirus solution for your IT environment.

If users are allowed to install and execute software without restriction, then the IT infrastructure is more vulnerable to virus infections. To provide a more virus-free environment, you should make sure software is rigidly controlled. Users should be able to install and execute only company-approved and distributed software. All new software should be thoroughly tested and scanned before it is distributed on a production network. Even commercial software has become an inadvertent carrier of viruses.

Users should be trained in the skills of safe computing, especially if they are granted Internet access or have any form of e-mail. In areas where technical controls cannot prevent virus infections, users should be trained to prevent them.
User awareness training should include information about handling attachments or downloads from unknown sources and unrequested attachments from known sources. Users should be told never to test an executable by executing it. All instances of suspect software should be reported immediately to the security administrator.

Antivirus software should be deployed on multiple levels of a network. All traffic (internal, inbound, and outbound) should be scanned for viruses. A virus scanning tool should be present on all border connection points, on all servers, and on all clients. Installing products from different vendors in each of these three areas will provide a more thorough scanning gauntlet.

Never install more than one virus scanning tool on a single system. In most cases, doing so will cause an unrecoverable system failure.

Endeavor to have 100-percent virus-free servers and 100-percent virus-free backups. To accomplish the former, you must scan every single bit of data before it is allowed into or onto a server for processing or storage. To accomplish the latter, you must scan every bit of data before it is stored onto the backup media. Having virus-free systems and backups will enable you to recover from a virus infection in an efficient and timely manner.

In addition to using a multilevel or concentric circle antivirus strategy, you must maintain the system. A concentric circle strategy basically consists of multiple layers of antivirus scanning throughout the environment to ensure that all current data and backups are free from viruses. Regular updates to the virus signature and definitions database should be performed. However, distribution of updates should occur only after verifying that the update is benign. It is possible for virus lists and engine updates to crash a system.

Maintain vigilance by joining notification newsletters, mailing lists, and vendor sites.
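At its core, each scanning layer in the multilevel strategy described above compares data against a database of known signatures. The following sketch is purely illustrative: the signatures here are hypothetical placeholders, and real antivirus engines use vendor-maintained databases and far more sophisticated detection than simple substring matching.

```python
# Minimal sketch of signature-based scanning (hypothetical signatures,
# NOT a real antivirus engine).

KNOWN_SIGNATURES = {
    b"hypothetical-virus-signature",   # placeholder signature
    b"malicious-payload-marker",       # placeholder signature
}

def scan_bytes(data: bytes) -> bool:
    """Return True if any known signature appears in the data."""
    return any(sig in data for sig in KNOWN_SIGNATURES)

def scan_stream(chunks):
    """Scan traffic chunk by chunk, as a border device, server,
    or client scanner would; return indexes of flagged chunks."""
    return [i for i, chunk in enumerate(chunks) if scan_bytes(chunk)]

traffic = [
    b"normal email body",
    b"attachment with malicious-payload-marker inside",
]
print(scan_stream(traffic))   # [1] -- only the infected chunk is flagged
```

The same check, run with different vendors' signature databases at the border, on servers, and on clients, is what gives the concentric circle strategy its depth.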
When a new virus epidemic breaks out, take appropriate action by shutting down your e-mail service or Internet connectivity (if at all possible) until a solution/repair/inoculation is available.

Operational Assurance and Life Cycle Assurance

Assurance is the degree of confidence you can place in a computer, network, solution, and so on to satisfy its security needs. It is based on how well a specific system complies with stated security needs and how well it upholds the security services it provides. Assurance was discussed in Chapter 12, "Principles of Security Models," but there is another element of assurance that applies to the Operations Security domain.

The Trusted Computer System Evaluation Criteria (TCSEC) is used to assign a level of assurance to systems. TCSEC, or the Orange Book, also defines two additional types or levels of assurance: operational assurance and life cycle assurance. As you are aware, TCSEC was replaced by Common Criteria in December 2000. It is, however, important to be aware of TCSEC-related material simply as a means to convey concepts and theories about security evaluation. Thus, you don't need to know the complete details of these two assurance levels, but there are a few specific issues that you should be familiar with.

Operational assurance focuses on the basic features and architecture of a system that lend themselves to supporting security.
There are five requirements or elements of operational assurance:
• System architecture (We discussed system architecture in Chapter 7.)
• System integrity (For more information, see Chapters 11 and 12.)
• Covert channel analysis (For more information, see Chapter 12.)
• Trusted facility management (Check out Chapter 19 for information about trusted facility management.)
• Trusted recovery (discussed later in this chapter)

Life cycle assurance focuses on the controls and standards that are necessary for designing, building, and maintaining a system. The following are the four requirements or elements of life cycle assurance (these are all covered in detail in Chapter 7):
• Security testing
• Design specification and testing
• Configuration management
• Trusted distribution

Backup Maintenance

Backing up critical information is a key part of maintaining the availability and integrity of data. Systems fail for various reasons, such as hardware failure, physical damage, software corruption, and malicious destruction from intrusions and attacks. Having a reliable backup is the best form of insurance that the data on the affected system is not permanently lost. Backups are the only form of insurance available against data loss. Without a backup, it is often impossible to restore data to its pre-disaster state. A backup can be considered reliable only if it is periodically tested. Testing a backup involves restoring files from backup media and then checking them to make sure they're readable and correct.

Backups are an essential part of maintaining operations security and are discussed further in Chapter 16, "Disaster Recovery Planning."

Changes in Workstation/Location

Changes in a user's workstation or in their physical location within an organization can be used as a means to improve or maintain security.
Similar to job rotation, changing a user's workstation deters that user from altering the system or installing unapproved software, because the next person to use the system would most likely discover it. Having nonpermanent workstations encourages users to keep all materials stored on network servers, where they can be easily protected, overseen, and audited. It also discourages the storage of personal information on the system as a whole. A periodic change in the physical location of a user's workspace can also be a deterrent to collusion, because users are less likely to be able to convince unfamiliar employees to perform unauthorized or illegal activities.

Need-to-Know and the Principle of Least Privilege

Need-to-know and the principle of least privilege are two standard axioms of high-security environments. A user must have a need-to-know to gain access to data or resources. Even if that user has an equal or greater security classification than the requested information, if they do not have a need-to-know, they are denied access. A need-to-know is the requirement to have access to, knowledge about, or possession of data or a resource in order to perform specific work tasks. The principle of least privilege is the notion that users should be granted only the minimum access to the secure environment necessary for them to complete their work tasks.

Periodic Reviews of User Account Management

Many administrators utilize periodic reviews of user account management to revisit and maintain the processes and procedures employed by the administrative staff in their support of users.
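The two access rules defined above can be sketched in a few lines: a subject is granted access only when both conditions hold, so even a sufficient clearance is not enough without a need-to-know. The levels, topics, and function names here are illustrative, not taken from any particular access control system.

```python
# Illustrative sketch: access requires BOTH sufficient clearance
# (classification check) AND a need-to-know for the specific topic.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def access_granted(clearance: str, need_to_know: set,
                   classification: str, topic: str) -> bool:
    sufficient_clearance = LEVELS[clearance] >= LEVELS[classification]
    has_need = topic in need_to_know
    return sufficient_clearance and has_need

# A user with top secret clearance but no need-to-know for this topic
# is still denied access:
print(access_granted("top secret", {"budgets"}, "secret", "missile plans"))  # False
# The same rule grants access when both conditions are met:
print(access_granted("secret", {"budgets"}, "secret", "budgets"))            # True
```

The least-privilege principle then says the `need_to_know` set (and the clearance) assigned to each user should be the smallest that still lets them complete their work tasks.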
\nSuch reviews should include examination of how well the principle of least privilege is being \nenforced, whether active accounts are still in use, if out-of-use accounts have been disabled or \ndeleted, and whether all current practices are approved by management.\nReview of user account management typically does not address whether a specific user’s pass-\nword conforms to the stated company password policy. That issue is covered by the enroll-\nment tools, password policies, and periodic penetration testing/ethical hacking activities.\nIt is also important to note that the action of adding, removing, and managing the settings of \nuser accounts are the purview of the account administrators or operations administrators, not \nthat of a security administrator. However, it is the responsibility of security administrators to \nset the clearances of users in a MAC-based environment.\n" }, { "page_number": 499, "text": "454\nChapter 13\n\u0002 Administrative Management\nPrivileged Operations Functions\nPrivileged operations functions are activities that require special access or privileges to perform \nwithin a secured IT environment. In most cases, these functions are restricted to administrators \nand system operators. Maintaining privileged control over these functions is an essential part of \nsustaining the system’s security. 
Many of these functions could be easily exploited to violate the confidentiality, integrity, or availability of the system's assets.

The following list includes some examples of privileged operations functions:
• Using operating system control commands
• Configuring interfaces
• Accessing audit logs
• Managing user accounts
• Configuring security mechanism controls
• Running script/task automation tools
• Backing up and restoring the system
• Controlling communication
• Using database recovery tools and log files
• Controlling system reboots

Managing privileged access is an important part of keeping security under control. In addition to restricting privileged operations functions, you should also employ separation of duties. Separation of duties ensures that no single person has total control over a system's or environment's security mechanisms. This is necessary to ensure that no single person can compromise the system as a whole. It can also be called a form of split knowledge. In deployment, separation of duties is enforced by dividing the top- and mid-level administrative capabilities and functions among multiple trusted users.

Further control and restriction of privileged capabilities can be implemented by using two-man controls and rotation of duties. Two-man control is the configuration of privileged activities so that they require two administrators working in conjunction to complete the task. The necessity of two operators also gives you the benefits of peer review and a reduced likelihood of collusion and fraud. Rotation of duties is the security control that involves switching several privileged security or operational roles among several users on a regular basis.
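The two-man control described above can be sketched as a gate on a privileged action: the action executes only after two distinct administrators approve it. The class name, administrator names, and the sample action are all hypothetical.

```python
# Illustrative sketch of two-man control: a privileged action runs only
# when two DISTINCT administrators have approved it.

class TwoManControl:
    def __init__(self, action_name: str):
        self.action_name = action_name
        self.approvals = set()          # a set, so duplicates don't count

    def approve(self, admin: str) -> None:
        self.approvals.add(admin)

    def execute(self) -> str:
        if len(self.approvals) < 2:
            raise PermissionError(
                f"'{self.action_name}' requires two distinct approvers"
            )
        return f"{self.action_name}: executed"

restore = TwoManControl("restore master key")
restore.approve("alice")
restore.approve("alice")        # the same admin approving twice does not count
try:
    restore.execute()           # blocked: only one distinct approver so far
except PermissionError as err:
    print(err)
restore.approve("bob")          # a second, distinct administrator approves
print(restore.execute())
```

Using a set for approvals is the key design choice: it enforces that the two approvals come from different people, which is the whole point of the control.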
For example, if an organization has divided its administrative activities into six distinct roles or job descriptions, then six or seven people need to be cross-trained for those distinct roles. Each person would work in a specific role for two to three months, and then everyone in this group would be switched or rotated to a new role. When the organization has more than the necessary minimum number of trained administrators, every rotation leaves out one person, who can take some vacation time and serve as a fill-in when necessary. The rotation of duties security control provides for peer review, reduces collusion and fraud, and provides for cross-training. Cross-training makes your environment less dependent on any single individual.

Trusted Recovery

For a secured system, trusted recovery is recovering securely from operational failures or system crashes. The purpose of trusted recovery is to provide assurance that after a failure or crash, the rebooted system is no less secure than it was before the failure or crash. You must address two elements of the process to implement a trusted recovery solution. The first element is failure preparation. In most cases, this is simply the deployment of a reliable backup solution that keeps a current backup of all data. A reliable backup solution also implies that there is a means by which data on the backup media can be restored in a protected and efficient manner. The second element is the process of system recovery. The system should be forced to reboot into a single-user nonprivileged state. This means that the system should reboot so that a normal user account can be used to log in and that the system does not grant unauthorized access to users. System recovery also includes the restoration of all affected files and services that were active or in use on the system at the time of the failure or crash.
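The failure-preparation element above depends on backups that are actually restorable. A minimal sketch of the restore-and-verify step (file names and the copy standing in for backup media are illustrative; real solutions restore from dedicated backup media):

```python
# Illustrative sketch: a restored file is trusted only after its checksum
# matches the original. Temporary files stand in for real backup media.

import hashlib
import os
import shutil
import tempfile

def checksum(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_restore(original: str, restored: str) -> bool:
    """A restore passes only if the restored file matches the original."""
    return checksum(original) == checksum(restored)

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "payroll.db")
    with open(src, "w") as f:
        f.write("critical data")
    backup = shutil.copy(src, os.path.join(tmp, "payroll.db.bak"))     # "backup"
    restored = shutil.copy(backup, os.path.join(tmp, "payroll.rest"))  # "restore"
    print(verify_restore(src, restored))  # True -> backup is readable and correct
```

Storing checksums at backup time and re-checking them after each test restore is what turns a backup from an assumption into evidence for trusted recovery.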
Any missing or damaged files are restored, any changes to classification labels are corrected, and the settings on all security-critical files are verified.

Trusted recovery is a security mechanism discussed in the Common Criteria. The Common Criteria defines three types or hierarchical levels of trusted recovery:

Manual Recovery
An administrator is required to manually perform the actions necessary to implement a secured or trusted recovery after a failure or system crash.

Automated Recovery
The system itself is able to perform trusted recovery activities to restore a system, but only against a single failure.

Automated Recovery without Undue Loss
The system itself is able to perform trusted recovery activities to restore a system. This level of trusted recovery allows for additional steps to provide verification and protection of classified objects. These additional protection mechanisms may include restoring corrupted files, rebuilding data from transaction logs, and verifying the integrity of key system and security components.

What happens when a system suffers from an uncontrolled TCB or media failure? Such failures may compromise the stability and security of the environment, and the only possible response is to terminate the current environment and re-create the environment through rebooting. Related to trusted recovery, an emergency system restart is the feature of a security system that forces an immediate reboot once the system goes down.

Configuration and Change Management Control

Once a system has been properly secured, it is important to keep that security intact. Change in a secure environment can introduce loopholes, overlaps, missing objects, and oversights that can lead to new vulnerabilities. The only way to maintain security in the face of change is to systematically manage change.
Typically, this involves extensive logging, auditing, and monitoring of activities related to security controls and mechanisms. The resulting data is then used to identify agents of change, whether objects, subjects, programs, communication pathways, or even the network itself. The means to provide this function is to deploy configuration management control or change management control. These mechanisms ensure that any alterations or changes to a system do not result in diminished security. Configuration/change management controls provide a process by which all system changes are tracked, audited, controlled, identified, and approved. They require that all system changes undergo a rigorous testing procedure before being deployed onto the production environment. They also require documentation of any changes to user work tasks and the training of any affected users. Configuration/change management controls should minimize the effect on security from any alteration to the system. They often provide a means to roll back a change if it is found to cause a negative or unwanted effect on the system or on security.

There are five steps or phases involved in configuration/change management control:
1. Applying to introduce a change
2. Cataloging the intended change
3. Scheduling the change
4. Implementing the change
5. Reporting the change to the appropriate parties

When a configuration/change management control solution is enforced, it creates complete documentation of all changes to a system. This provides a trail of information if the change needs to be removed. It also provides a roadmap or procedure to follow if the same change is implemented on other systems.
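The five phases above can be sketched as a small state machine that refuses out-of-order transitions, so a change cannot be implemented before it has been applied for, cataloged, and scheduled. Phase names follow the list above; the class and the sample change are hypothetical.

```python
# Illustrative sketch: the five change management phases, enforced in order.
# The completed list doubles as the documentation trail the text describes.

PHASES = ["applied", "cataloged", "scheduled", "implemented", "reported"]

class ChangeRequest:
    def __init__(self, description: str):
        self.description = description
        self.completed = []             # audit trail of phases, in order

    def advance(self, phase: str) -> None:
        expected = PHASES[len(self.completed)]
        if phase != expected:
            # Out-of-order transitions are rejected, e.g. implementing
            # a change that was never scheduled.
            raise ValueError(f"expected phase '{expected}', got '{phase}'")
        self.completed.append(phase)

change = ChangeRequest("open firewall port 8443 for new app server")
for phase in ["applied", "cataloged", "scheduled", "implemented", "reported"]:
    change.advance(phase)
print(change.completed)   # the full, ordered trail for audit or rollback
```

Because every completed phase is recorded in order, the `completed` list is exactly the kind of documentation trail that supports removal of a bad change or repetition of a good one on other systems.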
When a change is properly documented, that documentation can assist administrators in minimizing the negative effects of the change throughout the environment.

Configuration/change management control is a mandatory element of the TCSEC ratings of B2, B3, and A1, but it is recommended for all other TCSEC rating levels. Ultimately, change management improves the security of an environment by protecting implemented security from unintentional, tangential, or deliberate diminishment. Those in charge of change management should oversee alterations to every aspect of a system, including hardware configuration and system and application software. Change management should be applied throughout design, development, testing, evaluation, implementation, distribution, evolution, growth, ongoing operation, and the application of modifications. Change management requires a detailed inventory of every component and configuration. It also requires the collection and maintenance of complete documentation for every system component (including hardware and software) and for everything from configuration settings to security features.

Standards of Due Care and Due Diligence

Due care is using reasonable care to protect the interests of an organization. Due diligence is practicing the activities that maintain the due care effort. For example, due care is developing a formalized security structure containing a security policy, standards, baselines, guidelines, and procedures. Due diligence is the continued application of this security structure onto the IT infrastructure of an organization. Operational security is the ongoing maintenance of continued due care and due diligence by all responsible parties within an organization.

In today's business environment, showing prudent due care and due diligence is the only way to disprove negligence in an occurrence of loss.
Senior management must show reasonable due care and due diligence to reduce their culpability and liability when a loss occurs. Senior management could be responsible for monetary damages of up to $290 million for nonperformance of due diligence in accordance with the U.S. Federal Sentencing Guidelines of 1991.

Privacy and Protection

Privacy is the protection of personal information from disclosure to any unauthorized individual or entity. In today's online world, the line between public information and private information is often blurry. For example, is information about your web surfing habits private or public? Can that information be gathered legally without your consent? And can the gathering organization sell that information for a profit that you don't share in? However, your personal information includes more than information about your online habits; it also includes who you are (name, address, phone, race, religion, age, etc.), your health and medical records, your financial records, and even your criminal or legal records.

Dealing with privacy is a requirement for any organization that has employees. Thus, privacy is a central issue for all organizations. The protection of privacy should be a core mission or goal set forth in the security policy of an organization. Privacy issues are discussed at greater length in Chapter 17, "Law and Investigations."

Legal Requirements

Every organization operates within a certain industry and country. Both of these entities impose legal requirements, restrictions, and regulations on the practices of the organizations that fall within their realm. These legal requirements can apply to the licensed use of software, hiring restrictions, the handling of sensitive materials, and compliance with safety regulations. Complying with all applicable legal requirements is a key part of sustaining security.
The legal requirements of an industry and of a country (and often of a state and city) should be considered the baseline or foundation upon which the remainder of the security infrastructure must be built.

Illegal Activities

Illegal activities are actions that violate a legal restriction, regulation, or requirement. They include fraud, misappropriation, unauthorized disclosure, theft, destruction, espionage, entrapment, and so on. A secure environment should provide mechanisms to prevent the commission of illegal activities as well as the means to track illegal activities and maintain the accountability of the individuals perpetrating the crimes.

Preventive control mechanisms include identification and authentication, access control, separation of duties, job rotation, mandatory vacations, background screening, awareness training, least privilege, and many more. Detective mechanisms include auditing, intrusion detection systems, and more.

Record Retention

Record retention is the organizational policy that defines what information is maintained and for how long. In most cases, the records in question are audit trails of user activity. This may include file and resource access, logon patterns, e-mail, and the use of privileges. Note that in some legal jurisdictions, users must be made aware that their activities are being tracked.

Depending upon your industry and your relationship with the government, you may need to retain records for three years, seven years, or indefinitely. In most cases, a separate backup mechanism is used to create archived copies of sensitive audit trails and accountability information.
This allows the main data backup system to periodically reuse its media without violating the requirement to retain audit trails and the like.

If data about individuals is being retained by your organization (such as under a conditional employment agreement or a use agreement), the employees and customers need to be made aware of it. In many cases, the notification requirement is a legal issue; in others, it is simply a courtesy. In either case, it is a good idea to discuss the issue with appropriate legal counsel.

Sensitive Information and Media

Managing information and media properly, especially in a high-security environment in which sensitive, confidential, and proprietary data is processed, is crucial to the security and stability of an organization. Because the value of the stored data is enormous compared with the cost of the storage media, always purchase media of the highest quality. In addition to media selection, there are several key areas of information and media management: marking, handling, storage, life span, reuse, and destruction. Marking, handling, storage, and observance of life span ensure the viability of the data on storage media. Reuse and destruction focus on destroying the hosted data, not retaining it.

Marking and Labeling Media

The marking of media is the simple and obvious activity of clearly and accurately defining its contents. The most important aspect of marking is to indicate the security classification of the data stored on the media so that the media itself can be handled properly. Tapes with unclassified data do not need as much security in their storage and transport as do tapes with classified data. Data labels should be created automatically and stored as part of the backup set on the media. Additionally, a physical label should be applied to the media and maintained for the lifetime of the media.
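The labeling scheme just described pairs a physical label with the classification of the hosted data, so handling requirements can be derived from the label. A minimal sketch follows; the classification names and handling rules are examples, not prescribed by the book.

```python
# Illustrative only: a label record tying a medium to its data
# classification, from which handling rules are looked up.
# Classification names and handling text are hypothetical examples.
HANDLING = {
    "unclassified": "standard storage and transport",
    "confidential": "locked storage; logged transport",
    "secret":       "safe storage; escorted, logged transport",
}

def make_label(media_id, classification, created, backup_set):
    if classification not in HANDLING:
        raise ValueError(f"unknown classification: {classification}")
    return {
        "media_id": media_id,
        "classification": classification,
        "created": created,
        "backup_set": backup_set,        # also stored on the media itself
        "handling": HANDLING[classification],
    }

label = make_label("TAPE-0091", "secret", "2005-06-14", "FIN-WEEKLY-24")
print(label["handling"])  # safe storage; escorted, logged transport
```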
Media used to store classified information should never be reused to store less-sensitive data. Media labels help ensure the proper handling of hosted sensitive, classified, or confidential data. All removable media, including tapes, USB drives, floppies, CDs, hard drives, and printouts, should be labeled.

Handling Media

Handling refers to the secured transportation of media from the point of purchase through storage and finally to destruction. Media must be handled in a manner consistent with the classification of the data it hosts. The environment within which media is stored can significantly affect its useful lifetime. For example, very warm or very dusty environments can cause damage to tape media, shortening its life span. Here are some useful guidelines for handling media:

• Keep new media in its original sealed packaging until it's needed to keep it isolated from the environment's dust and dirt.
• When opening a media package, take extra caution not to damage the media in any way. This includes avoiding sharp objects and not twisting or flexing the media.
• Avoid exposing the media to temperature extremes; it shouldn't be stored too close to heaters, radiators, air conditioners, or anything else that could cause extreme temperatures.
• Do not use media that has been damaged in any way, exposed to abnormal levels of dust and dirt, or dropped.
• Media should be transported from one site to another in a temperature-controlled vehicle.
• Media should be protected from exposure to the outside environment; avoid sunlight, moisture, humidity, heat, and cold.
Always transport media in an airtight, waterproof, secured container.
• Media should be acclimated for 24 hours before use.
• Appropriate security should be maintained over media from the point of departure from the backup device to the secured offsite storage facility. Media is vulnerable to damage and theft at any point during transportation.
• Appropriate security should be maintained over media at all other times (including when it's reused) throughout the lifetime of the media until destruction.

Storing Media

Media should be stored only in a secured location in which the temperature and humidity are controlled, and it should not be exposed to magnetic fields; this is especially important for tape media. Elevator motors, printers, and CRT monitors all produce strong magnetic fields. The cleanliness of the storage area will directly affect the life span and usefulness of media. Access to the storage facility should be controlled at all times. Physical security is essential to maintaining the confidentiality, integrity, and availability of backup media.

Managing Media Life Span

All media has a useful life span. Reusable media will have a mean time to failure (MTTF) that is usually represented as the number of times it can be reused. Most tape backup media can be reused 3 to 10 times. When media is reused, it must be properly cleared. Clearing is a method of sufficiently deleting data on media that will be reused in the same secured environment. Purging is erasing the data so the media can be reused in a less-secure environment. Unless absolutely necessary, do not employ media purging. The cost of supplying each classification level with its own media is insignificant compared to the damage that can be caused by disclosure. If media is not to be archived or reused within the same environment, it should be securely destroyed.

Once backup media has reached its MTTF, it should be destroyed.
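The reuse accounting just described (clear before every reuse, destroy at MTTF) can be sketched as a small bookkeeping routine. This is an assumed illustration, not a vendor tool; the class and field names are hypothetical.

```python
# Hypothetical sketch: track tape reuse against its rated MTTF, per the
# guideline that tape media survives a limited number of reuses and must
# be properly cleared before each reuse in the same secured environment.

class TapeMedium:
    def __init__(self, label, mttf_reuses=10):
        self.label = label
        self.mttf_reuses = mttf_reuses   # rated reuse count for this media
        self.reuses = 0
        self.cleared = True              # new media is considered clean

    def clear(self):
        """Clear the medium for reuse within the same secured environment."""
        self.cleared = True

    def reuse(self):
        if self.reuses >= self.mttf_reuses:
            raise RuntimeError(f"{self.label}: MTTF reached; destroy media")
        if not self.cleared:
            raise RuntimeError(f"{self.label}: must be cleared before reuse")
        self.reuses += 1
        self.cleared = False             # now holds new backup data

tape = TapeMedium("WEEKLY-07", mttf_reuses=3)
for _ in range(3):
    tape.clear()
    tape.reuse()
print(tape.reuses)  # 3
```

A fourth reuse attempt would raise an error, flagging the tape for destruction rather than silently reusing worn media.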
Secure destruction of media that contained confidential and sensitive data is just as important as the storage of such media. When destroying media, it should be erased properly to remove data remanence. Once properly purged, media should be physically destroyed to prevent easy reuse and attempted data gleaning through casual (keyboard attack) or high-tech (laboratory attack) means. Physical crushing is often sufficient, but incineration may be necessary.

Preventing Disclosure via Reused Media

Preventing the disclosure of information from backup media is an important aspect of maintaining operational security. Disclosure prevention must occur at numerous points in the life span of media. It must be addressed upon every reuse in the same secure environment, upon every reuse in a different or less-secure environment, upon removal from service, and upon destruction. Addressing this issue can take many forms, including erasing, clearing, purging, declassification, sanitization, overwriting, degaussing, and destruction.

Erasing media is simply performing a delete operation against a file, a selection of files, or the entire media. In most cases, the deletion or removal process removes only the directory or catalog link to the data. The actual data remains on the drive and will remain there until it is overwritten by other data or properly removed from the media.

Clearing, or overwriting, is a process of preparing media for reuse and assuring that the cleared data cannot be recovered through ordinary means (that is, keyboard attacks). When media is cleared, unclassified data is written over specific locations or over the entire media where classified data was stored. Often, the unclassified data is strings of 1s and 0s.
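The overwriting just described, writing unclassified fill patterns over the stored data, can be sketched in a few lines. This is a conceptual illustration only, not a certified sanitization tool: it overwrites a file through the filesystem, whereas real clearing and purging must overwrite the device itself, since a filesystem may relocate blocks and leave remnants.

```python
# Conceptual sketch (NOT a certified sanitization tool): overwrite data
# with alternating fill patterns of 0s and 1s. A single pass illustrates
# clearing; repeating the pass (the text cites 7 to 10 repetitions)
# illustrates purging.
import os
import tempfile

def overwrite_file(path, passes=1, patterns=(b"\x00", b"\xff")):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for i in range(passes):
            pattern = patterns[i % len(patterns)]
            f.seek(0)
            f.write(pattern * size)      # one full overwrite pass
            f.flush()
            os.fsync(f.fileno())         # force the pass to disk

# Demonstration on a throwaway temp file.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"TOP SECRET PAYROLL DATA")
path = tmp.name

overwrite_file(path, passes=7)           # purging-style repetition
with open(path, "rb") as f:
    data = f.read()
os.remove(path)
print(b"SECRET" in data)  # False
```

Note the verification step at the end: as the text advises, always confirm that the desired result was actually achieved after any clearing or purging operation.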
The clearing process typically prepares media for reuse in the same secure environment, not for transfer to other environments.

Purging is a more intense form of clearing that prepares media for reuse in less-secure environments. Depending on the classification of the data and the security of the environment, the purging process is repeated 7 to 10 times to provide assurance against data recovery via laboratory attacks.

Declassification involves any process that clears media for reuse in less-secure environments. In most cases, purging is used to prepare media for declassification, but most of the time, the effort required to securely declassify media is significantly greater than the cost of new media for the less-secure environment.

Sanitization is any number of processes that prepare media for destruction. It ensures that data cannot be recovered by any means from destroyed or discarded media. Sanitization can also be the actual means by which media is destroyed. Media can be sanitized by purging or degaussing without physically destroying the media. Degaussing magnetic media returns it to its original pristine, unused state. Sanitization methods that result in the physical destruction of the media include incineration, crushing, and shredding.

Care should be taken when performing any type of sanitization, clearing, or purging process. It is possible that the human operator or the tool involved in the activity will not properly perform the task of removing data from the media. Software can be flawed, magnets can be faulty, and either can be used improperly. Always verify that the desired result is achieved after performing a sanitization process.

Destruction is the final stage in the life cycle of backup media. Destruction should occur after proper sanitization or as a means of sanitization.
When media destruction takes place, you must ensure that the media cannot be reused or repaired and that data cannot be extracted from the destroyed media by any possible means. Methods of destruction can include incineration, crushing, shredding, and dissolving using caustic or acidic chemicals.

You might also consider demagnetizing the hard drive. However, in practice this activity is a function of degaussing, which is itself unreliable. When donating or selling used computer equipment, it is usually recommended to remove and destroy storage devices rather than attempting to purge or sanitize them.

Security Control Types

There are several methods used to classify security controls. The classification can be based on the nature of the control, such as administrative, technical/logical, or physical. It can also be based on the action or objective of the control, such as directive, preventive, detective, corrective, and recovery. Some controls can have multiple action/objective classifications.

A directive control is a security tool used to guide the security implementation of an organization. Examples of directive controls include security policies, standards, guidelines, procedures, laws, and regulations. The goal or objective of directive controls is to cause or promote a desired result.

A preventive control is a security mechanism, tool, or practice that can deter or mitigate undesired actions or events. Preventive controls are designed to stop or reduce the occurrence of various crimes, such as fraud, theft, destruction, embezzlement, espionage, and so on. They are also designed to avert common human failures such as errors, omissions, and oversights. Preventive controls are designed to reduce risk.
Although not always the most cost-effective, they are preferred over detective or corrective controls from the perspective of maintaining security. Stopping an unwanted or unauthorized action before it occurs results in a more secure environment than detecting and resolving the problem afterward does. Examples of preventive controls include firewalls, authentication methods, access controls, antivirus software, data classification, separation of duties, job rotation, risk analysis, encryption, warning banners, data validation, prenumbered forms, checks for duplications, and account lockouts.

A detective control is a security mechanism used to verify whether the directive and preventive controls have been successful. Detective controls actively search for both violations of the security policy and actual crimes. They are used to identify attacks and errors so that appropriate action can be taken. Examples of detective controls include audit trails, logs, closed-circuit television (CCTV), intrusion detection systems, antivirus software, penetration testing, password crackers, performance monitoring, and cyclic redundancy checks (CRCs).

Corrective controls are instructions, procedures, or guidelines used to reverse the effects of an unwanted activity, such as an attack or error. Examples of corrective controls include manuals, procedures, logging and journaling, incident handling, and fire extinguishers.

A recovery control is used to return affected systems to normal operations after an attack or an error has occurred. Examples of recovery controls include system restoration, backups, rebooting, key escrow, insurance, redundant equipment, fault-tolerant systems, failover, checkpoints, and contingency plans.

Operations Controls

Operations controls are the mechanisms and daily procedures that provide protection for systems.
They are typically security controls that must be implemented or performed by people rather than automated by the system. Most operations controls are administrative in nature, but they also include some technical or logical controls.

When possible, operations controls should be invisible or transparent to users. The less a user sees of the security controls, the less likely they will feel that security is hampering their productivity. Likewise, the less users know about the security of the system, the less likely they will be able to circumvent it.

Resource Protection

The operations controls for resource protection are designed to provide security for the resources of an IT environment. Resources are the hardware, software, and data assets that an organization's IT infrastructure comprises. To maintain the confidentiality, integrity, and availability of the hosted assets, the resources themselves must be protected. When designing a protection scheme for resources, it is important to keep the following aspects or elements of the IT infrastructure in mind:

• Communication hardware/software
• Boundary devices
• Processing equipment
• Password files
• Application program libraries
• Application source code
• Vendor software
• Operating system
• System utilities
• Directories and address tables
• Proprietary packages
• Main storage
• Removable storage
• Sensitive/critical data
• System logs/audit trails
• Violation reports
• Backup files and media
• Sensitive forms and printouts
• Isolated devices, such as printers and faxes
• Telephone network

Privileged Entity Controls

Another aspect of operations controls is privileged entity controls.
A privileged entity is an administrator or system operator who has access to special, higher-order functions and capabilities that normal users don't. Privileged entity access is required for many administrative and control job tasks, such as creating new user accounts, adding new routes to a router table, or altering the configuration of a firewall. Privileged entity access can include system commands, system control interfaces, system log/audit files, and special control parameters. Access to privileged entity controls should be restricted and audited to prevent the usurping of power by unauthorized users.

Hardware Controls

Hardware controls are another part of operations controls. Hardware controls focus on restricting and managing access to the IT infrastructure hardware. In many cases, periodic maintenance, error/attack repair, and system configuration changes require direct physical access to hardware. An operations control that manages access to hardware is a form of physical access control. All personnel who are granted access to the physical components of the system must have authorization. It is also a good idea to provide supervision while hardware operations are being performed by third parties.

Other issues related to hardware controls include the management of maintenance accounts and port controls. Maintenance accounts are predefined default accounts that are installed on hardware (and in software) and have preset and widely known passwords. These accounts should be renamed and assigned a strong password. Many hardware devices have diagnostic or configuration/console ports. They should be accessible only to authorized personnel, and if possible, they should be disabled when not in use for approved maintenance operations.

Input/Output Controls

Input and output controls are mechanisms used to protect the flow of information into and out of a system.
These controls also protect applications and resources by preventing invalid, oversized, or malicious input from causing errors or security breaches. Output controls restrict the data that is revealed to users by restricting content based on subject classification and the security of the communication connection. Input and output controls are not limited to technical mechanisms; they can also be physical controls (for example, restrictions against bringing memory flashcards, printouts, floppy disks, CD-Rs, and so on into or out of secured areas).

Application Controls

Application controls are designed into software applications to minimize and detect operational irregularities. They limit end users' use of applications in such a way that only particular screens, records, and data are visible and only specific authorized functions are enabled. Particular uses of an application can be singled out for monitoring and auditing. Application controls are transparent to the endpoint applications, so changes are not required to the applications involved.

Some applications include integrity verification controls, much like those employed by DBMSs. These controls look for evidence of data manipulation, errors, and omissions. These types of controls are considered to be application controls (i.e., internal controls) rather than software management controls (i.e., external controls).

Media Controls

Media controls are similar to the topics discussed in the section "Sensitive Information and Media" earlier in this chapter. Media controls should encompass the marking, handling, storage, transportation, and destruction of media such as floppies, memory cards, hard drives, backup tapes, CD-Rs, CD-RWs, and so on. A tracking mechanism should be used to record and monitor the location and uses of media.
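One way to picture the tracking mechanism mentioned above is as an append-only custody log recording where each labeled medium is and who handled it. The field names here are assumptions made for the sketch, not a standard schema.

```python
# Hypothetical illustration of a media tracking mechanism: an append-only
# custody log recording each movement of a labeled medium.
from datetime import datetime, timezone

class MediaTracker:
    def __init__(self):
        self.log = []                      # append-only custody trail

    def record(self, media_label, location, handler, purpose):
        self.log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "media": media_label,
            "location": location,
            "handler": handler,
            "purpose": purpose,
        })

    def current_location(self, media_label):
        # The most recent entry for a medium is its current location.
        for entry in reversed(self.log):
            if entry["media"] == media_label:
                return entry["location"]
        return None

tracker = MediaTracker()
tracker.record("TAPE-0142", "media safe, room 210", "j.ortiz", "weekly backup")
tracker.record("TAPE-0142", "offsite vault", "courier #88", "offsite rotation")
print(tracker.current_location("TAPE-0142"))  # offsite vault
```

Because entries are only ever appended, the log also serves as an audit trail for the lifetime of each medium, through to its eventual sanitization or destruction.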
Secured media should never leave the boundaries of the secured environment. Likewise, any media brought into a secured environment should not contain viruses, malicious code, or other unwanted code elements, nor should that media ever leave the secured environment except after proper sanitization or destruction.

Administrative Controls

Operations controls include many of the administrative controls that we have already discussed numerous times, such as separation of duties and responsibilities, rotation of duties, least privilege, and so on. However, in addition to these controls, we must consider how the maintenance of hardware and software is performed.

When assessing the controls used to manage and sustain hardware and software maintenance, here are some key issues to ponder:

• Are program libraries properly restricted and controlled?
• Is version control or configuration management enforced?
• Are all components of a new product properly tested, documented, and approved prior to release to production?
• Are the systems properly hardened? Hardening a system involves removing unnecessary processes, segregating interprocess communications, and reducing executing privileges to increase system security.

Personnel Controls

No matter how much effort, expense, and expertise you put into physical access control and logical/technical security mechanisms, you will always have to deal with people. In fact, people are both your last line of defense and your worst security management issue. People are vulnerable to a wide range of attacks; plus, they can intentionally violate security policy and attempt to circumvent physical and logical/technical security controls. Because of this, you must endeavor to employ only those people who are the most trustworthy.

Security controls to manage personnel are considered a type of administrative control.
These controls and issues should be clearly outlined in your security policy and followed as closely as possible. Failing to employ strong personnel controls may render all of your other security efforts worthless.

The first type of personnel control is used in the hiring process. To hire a new employee, you must first know what position needs to be filled. This requires the creation of a detailed job description. The job description should outline the work tasks and responsibilities of the position, which will in turn dictate the access and privileges needed in the environment. Furthermore, the job description defines the knowledge, skill, and experience level required by the position. Only after the job description has been created is it possible to begin screening applicants for the position.

The next step in using personnel controls is selecting the best person for the job. In terms of security, this means the most trustworthy person. Trustworthiness is often determined through background and reference checks, employment history verification, and education and certification verification. This process could even include credit checks and FBI background checks.

Once a person has been hired, personnel controls should be deployed to continue to monitor and evaluate their work. Personnel controls that monitor activity should be deployed for all employees, not just new ones. These controls can include access audit and review, validation of security clearances, periodic skills assessment, supervisory employee ratings, and supervisor oversight and review. Companies often employ a policy of mandatory vacations in one- or two-week increments. Such a tool removes the employee from the environment and allows another cross-trained employee to perform their work tasks during the interim.
This activity serves as a form of peer review, providing a means to detect fraud and collusion. At any time, if an employee is found to be in violation of security policy, they should be properly reprimanded and warned. If the employee continues to commit security policy violations, they should be terminated.

Finally, there are personnel controls that govern the termination process. When an employee is to be fired, an exit interview should be conducted. For the exit interview, the soon-to-be-released employee is brought to a manager's office for a private meeting. This meeting is designed to remove them from their workspace and to minimize the effect of the firing on other employees. The meeting usually consists of the employee, a manager, and a security guard. The security guard acts as a witness and as a protection agent. The exit interview should be coordinated with the security administration staff so that just as the exit interview begins, the employee's network and building access is revoked. During the exit interview, the employee is reminded of their legal obligations to comply with any nondisclosure agreements and not to disclose any confidential data. The employee must return all badges, keys, and other company equipment on their person. Once the exit interview is complete, the security guard escorts the terminated employee out of the facility and possibly even off the grounds. If the ex-employee has any company equipment at home or at some other location, the security guard should accompany the ex-employee to recover those items.
The purpose of an exit interview is primarily to reinforce the nondisclosure obligations, but it also serves to remove the ex-employee from the environment, ensure that all access is revoked and all devices are returned, and prevent or minimize any retaliatory activities resulting from the termination.

Summary

There are many areas of day-to-day operations that are susceptible to security breaches. Therefore, all standards, guidelines, and procedures should clearly define personnel management practices. Important aspects of personnel management include antivirus management and operations security.

Personnel management is a form of administrative control or administrative management. You must include clearly defined personnel management practices in your security policy and subsequent formalized security documentation. From a security perspective, personnel management focuses on three main areas: hiring practices, ongoing job performance, and termination procedures.

Operations security consists of controls to maintain security in an office environment from design to deployment. Such controls include hardware, media, and subject (user) controls that are designed to protect against asset threats. Because viruses are the most common form of security breach in the IT world, managing a system's antivirus protection is one of the most important aspects of operations security. Any communications pathway, such as e-mail, websites, and documents, and even commercial software, can and will be exploited as a delivery mechanism for a virus or other malicious code. Antivirus management is the design, deployment, and maintenance of an antivirus solution for your IT environment.

Backing up critical information is a key part of maintaining the availability and integrity of data and an essential part of maintaining operations security.
Having a reliable backup is the best form of insurance that the data on the affected system is not permanently lost.

Changes in a user's workstation or their physical location within an organization can be used as a means to improve or maintain security. When a user's workstation is changed, the user is less likely to alter the system or install unapproved software because the next person to use the system would most likely be able to discover it.

The concepts of need-to-know and the principle of least privilege are two important aspects of a high-security environment. A user must have a need to know to gain access to data or resources. To comply with the principle of least privilege, users should be granted the minimum amount of access to the secure environment necessary for them to complete their work tasks.

Activities that require special access or privilege to perform within a secured IT environment are considered privileged operations functions. Such functions should be restricted to administrators and system operators.

Due care is using reasonable care to protect the interests of an organization. Due diligence is practicing the activities that maintain the due care effort. Operational security is the ongoing maintenance of continued due care and due diligence by all responsible parties within an organization.

Another central issue for all organizations is privacy, which means providing protection of personal information from disclosure to any unauthorized individual or entity. The protection of privacy should be a core mission or goal set forth in an organization's security policy.

It's also important that an organization operate within the legal requirements, restrictions, and regulations of its country and industry.
Complying with all applicable legal requirements is a key part of sustaining security.

Illegal activities are actions that violate a legal restriction, regulation, or requirement. Fraud, misappropriation, unauthorized disclosure, theft, destruction, espionage, and entrapment are all examples of illegal activities. A secure environment should provide mechanisms to prevent illegal activities from being committed as well as the means to track illegal activities and maintain accountability for the individuals perpetrating the crimes.

In a high-security environment where sensitive, confidential, and proprietary data is processed, managing information and media properly is crucial to the environment's security and stability. There are four key areas of information and media management: marking, handling, storage, and destruction. Record retention is the organizational policy that defines what information is maintained and for how long. If data about individuals is being retained by your organization, the employees and customers need to be made aware of it.

The classification of security controls can be based on their nature, such as administrative, technical/logical, or physical. It can also be based on the action or objective of the control, such as directive, preventative, detective, corrective, and recovery.

Operations controls are the mechanisms and daily procedures that provide protection for systems. They are typically security controls that must be implemented or performed by people rather than automated by the system.
Most operations controls are administrative in nature, but as you can see from the following list, they also include some technical or logical controls:

- Resource protection
- Privileged-entity controls
- Change control management
- Hardware controls
- Input/output controls
- Media controls
- Administrative controls
- Trusted recovery process

Exam Essentials

Understand that personnel management is a form of administrative control, also called administrative management. You must clearly define personnel management practices in your security policy and subsequent formalized security structure documentation. Personnel management focuses on three main areas: hiring practices, ongoing job performance, and termination procedures.

Understand antivirus management. Antivirus management includes the design, deployment, and maintenance of an antivirus solution for your IT environment.

Know how to prevent unrestricted installation of software. To provide a virus-free environment, installation of software should be rigidly controlled. This includes allowing users to install and execute only company-approved and -distributed software as well as thoroughly testing and scanning all new software before it is distributed on a production network. Even commercial software has become an inadvertent carrier of viruses.

Understand backup maintenance. A key part of maintaining the availability and integrity of data is a reliable backup of critical information.
Having a reliable backup is the only form of insurance that the data on a system that has failed or been damaged or corrupted is not permanently lost.

Know how changes in workstation or location promote a secure environment. Changes in a user's workstation or their physical location within an organization can be used as a means to improve or maintain security. Having a policy of changing users' workstations prevents them from altering the system or installing unapproved software and encourages them to keep all material stored on network servers, where it can be easily protected, overseen, and audited.

Understand the need-to-know concept and the principle of least privilege. Need-to-know and the principle of least privilege are two standard axioms of high-security environments. To gain access to data or resources, a user must have a need to know. If users do not have a need to know, they are denied access. The principle of least privilege means that users should be granted only the minimum amount of access to the secure environment necessary for them to complete their work tasks.

Understand privileged operations functions. Privileged operations functions are activities that require special access or privilege to perform within a secured IT environment. For maximum security, such functions should be restricted to administrators and system operators.

Know the standards of due care and due diligence. Due care is using reasonable care to protect the interests of an organization. Due diligence is practicing the activities that maintain the due care effort. Senior management must show reasonable due care and due diligence to reduce their culpability and liability when a loss occurs.

Understand how to maintain privacy. Maintaining privacy means protecting personal information from disclosure to any unauthorized individual or entity.
In today's online world, the line between public information and private information is often blurry. The protection of privacy should be a core mission or goal set forth in the security policy of an organization.

Know the legal requirements in your region and field of expertise. Every organization operates within a certain industry and country, both of which impose legal requirements, restrictions, and regulations on its practices. Legal requirements can involve licensed use of software, hiring restrictions, handling of sensitive materials, and compliance with safety regulations.

Understand what constitutes an illegal activity. An illegal activity is an action that violates a legal restriction, regulation, or requirement. A secure environment should provide mechanisms to prevent illegal activities from being committed and the means to track illegal activities and maintain accountability for the individuals perpetrating the crimes.

Know the proper procedure for record retention. Record retention is the organizational policy that defines what information is maintained and for how long. In most cases, the records in question are audit trails of user activity. This can include file and resource access, logon patterns, e-mail, and the use of privileges.

Understand the elements of securing sensitive media. Managing information and media properly, especially in a high-security environment where sensitive, confidential, and proprietary data is processed, is crucial to the security and stability of an organization. In addition to media selection, there are several key areas of information and media management: marking, handling, storage, life span, reuse, and destruction.

Know and understand the security control types. There are several methods used to classify security controls.
The classification can be based on the nature of the control (administrative, technical/logical, or physical) or on the action or objective of the control (directive, preventative, detective, corrective, and recovery).

Know the importance of control transparency. When possible, operations controls should be invisible, or transparent, to users to prevent users from feeling that security is hampering their productivity. Likewise, the less users know about the security of the system, the less likely they will be able to circumvent it.

Understand how to protect resources. The operations controls for resource protection are designed to provide security for the IT environment's resources, including hardware, software, and data assets. To maintain the confidentiality, integrity, and availability of the hosted assets, the resources themselves must be protected.

Be able to explain change and configuration control management. Change in a secure environment can introduce loopholes, overlaps, misplaced objects, and oversights that can lead to new vulnerabilities. Therefore, you must systematically manage change by logging, auditing, and monitoring activities related to security controls and security mechanisms. The resulting data is then used to identify agents of change, whether they are objects, subjects, programs, communication pathways, or even the network itself. The goal of change management is to ensure that no change leads to reduced or compromised security.

Understand the trusted recovery process. The trusted recovery process ensures that a system is not breached during a crash, failure, or reboot and that after any such event, the system returns to a secure state.

Review Questions

1. Personnel management is a form of what type of control?
A. Administrative
B. Technical
C. Logical
D.
Physical

2. What is the most common means of distribution for viruses?
A. Unapproved software
B. E-mail
C. Websites
D. Commercial software

3. Which of the following increases a system's vulnerability to viruses?
A. Length of time the system has been operating
B. The classification level of the primary user
C. Installation of software
D. Use of roaming profiles

4. In areas where technical controls cannot be used to prevent virus infections, what should be used to prevent them?
A. Security baselines
B. Awareness training
C. Traffic filtering
D. Network design

5. Which of the following is not true?
A. Complying with all applicable legal requirements is a key part of sustaining security.
B. It is often possible to disregard legal requirements if complying with regulations would cause a reduction in security.
C. The legal requirements of an industry and of a country should be considered the baseline or foundation upon which the remainder of the security infrastructure must be built.
D. Industry and governments impose legal requirements, restrictions, and regulations on the practices of an organization.

6. Which of the following is not an illegal activity that can be performed over a computer network?
A. Theft
B. Destruction of assets
C. Waste of resources
D. Espionage

7. Who does not need to be informed when records about their activities on a network are being recorded and retained?
A. Administrators
B. Normal users
C. Temporary guest visitors
D. No one

8. What is the best form of antivirus protection?
A. Multiple solutions on each system
B. A single solution throughout the organization
C. Concentric circles of different solutions
D. One-hundred-percent content filtering at all border gateways

9. Which of the following is an effective means of preventing and detecting the installation of unapproved software?
A. Workstation change
B.
Separation of duties
C. Discretionary access control
D. Job responsibility restrictions

10. What is the requirement to have access to, knowledge about, or possession of data or a resource in order to perform specific work tasks commonly known as?
A. Principle of least privilege
B. Prudent man theory
C. Need-to-know
D. Role-based access control

11. Which of the following are activities that require special access to be performed within a secured IT environment?
A. Privileged operations functions
B. Logging and auditing
C. Maintenance responsibilities
D. User account management

12. Which of the following requires that archives of audit logs be kept for long periods of time?
A. Data remanence
B. Record retention
C. Data diddling
D. Data mining

13. What is the most important aspect of marking media?
A. Date labeling
B. Content description
C. Electronic labeling
D. Classification

14. Which operation is performed on media so it can be reused in a less-secure environment?
A. Erasing
B. Clearing
C. Purging
D. Overwriting

15. Sanitization can be unreliable due to which of the following?
A. No media can be fully swept clean of all data remnants.
B. Even fully incinerated media can offer extractable data.
C. The process can be performed improperly.
D. Stored data is physically etched into the media.

16. Which security tool is used to guide the security implementation of an organization?
A. Directive control
B. Preventive control
C. Detective control
D. Corrective control

17. Which security mechanism is used to verify whether the directive and preventative controls have been successful?
A. Directive control
B. Preventive control
C. Detective control
D. Corrective control

18. When possible, operations controls should be ________________.
A. Simple
B. Administrative
C. Preventative
D. Transparent

19.
What is the primary goal of change management?
A. Personnel safety
B. Allowing rollback of changes
C. Ensuring that changes do not reduce security
D. Auditing privilege access

20. What type of trusted recovery process requires the intervention of an administrator?
A. Restricted
B. Manual
C. Automated
D. Controlled

Answers to Review Questions

1. A. Personnel management is a form of administrative control. Administrative controls also include separation of duties and responsibilities, rotation of duties, least privilege, and so on.

2. B. E-mail is the most common distribution method for viruses.

3. C. As more software is installed, more vulnerabilities are added to the system, thus adding more avenues of attack for viruses.

4. B. In areas where technical controls cannot prevent virus infections, users should be trained on how to prevent them.

5. B. Laws and regulations must be obeyed, and security concerns must be adjusted accordingly.

6. C. Although wasting resources is considered inappropriate activity, it is not actually a crime in most cases.

7. D. Everyone should be informed when records about their activities on a network are being recorded and retained.

8. C. Concentric circles of different solutions is the best form of antivirus protection.

9. A. Workstation change is an effective means of preventing and detecting the presence of unapproved software.

10. C. Need-to-know is the requirement to have access to, knowledge about, or possession of data or a resource to perform specific work tasks.

11. A. Privileged operations functions are activities that require special access to perform within a secured IT environment. They may include auditing, maintenance, and user account management.

12. B. To use record retention properly, archives of audit logs must be kept for long periods of time.

13. D.
Classification is the most important aspect of marking media because it determines the precautions necessary to ensure the security of the hosted content.

14. C. Purging is erasing media so it can be reused in a less-secure environment. The purging process may need to be repeated numerous times depending on the classification of the data and the security of the environment.

15. C. Sanitization can be unreliable because the purging, degaussing, or other processes can be performed improperly.

16. A. A directive control is a security tool used to guide the security implementation of an organization.

17. C. A detective control is a security mechanism used to verify whether the directive and preventative controls have been successful.

18. D. When possible, operations controls should be invisible, or transparent, to users. This keeps users from feeling hampered by security and reduces their knowledge of the overall security scheme, thus further reducing the likelihood that users will violate system security deliberately.

19. C. The goal of change management is to ensure that any change does not lead to reduced or compromised security.

20. B. A manual trusted recovery process requires the intervention of an administrator.

Chapter 14: Auditing and Monitoring

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:

- Auditing and Audit Trails
- Monitoring
- Penetration Testing
- Inappropriate Activities
- Indistinct Threats and Countermeasures

The Operations Security domain of the Common Body of Knowledge (CBK) for the CISSP certification exam deals with the activities and efforts directed at maintaining operational security and includes the primary concerns of auditing and monitoring.
Auditing and monitoring prompt IT departments to make efforts at detecting intrusions and unauthorized activities. Vigilant administrators must sort through a selection of countermeasures and perform penetration testing that helps to limit, restrict, and prevent inappropriate activities, crimes, and other threats.

We discussed the Operations Security domain in some detail in Chapter 13, "Administrative Management," and we will finish coverage of this domain in this chapter. Be sure to read and study the material from both chapters to ensure complete coverage of the essential operations security material for the CISSP certification exam.

Auditing

Auditing is a methodical examination or review of an environment to ensure compliance with regulations and to detect abnormalities, unauthorized occurrences, or outright crimes. Secure IT environments rely heavily on auditing. Overall, auditing serves as the primary type of detective control used in a secure environment.

Auditing Basics

Auditing encompasses a wide variety of activities, including the recording of event/occurrence data, examination of data, data reduction, the use of event/occurrence alarm triggers, and log analysis. These activities are also known as logging, monitoring, examining alerts, analysis, and even intrusion detection. Logging is the activity of recording information about events or occurrences to a log file or database. Monitoring is the activity of manually or programmatically reviewing logged information looking for something specific. Alarm triggers are notifications sent to administrators when a specific event occurs. Log analysis is a more detailed and systematic form of monitoring in which the logged information is analyzed in detail for trends and patterns as well as abnormal, unauthorized, illegal, and policy-violating activities.
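The distinction between logging, monitoring, and alarm triggers can be sketched in a few lines of code. This is a minimal, hypothetical illustration only; the event names and threshold are invented for the example, not taken from any real product:

```python
from collections import Counter

# Logging: record information about events as structured entries.
log = []

def record_event(user, event_type):
    """Logging: append a record about an occurrence to the log."""
    log.append({"user": user, "event": event_type})

def failed_logins(entries):
    """Monitoring: review the logged information for something specific."""
    return Counter(e["user"] for e in entries if e["event"] == "LOGIN_FAILURE")

ALARM_THRESHOLD = 3  # illustrative value, not a recommended setting

def check_alarms(entries):
    """Alarm trigger: report users whose failure count reaches the threshold."""
    return [u for u, n in failed_logins(entries).items() if n >= ALARM_THRESHOLD]

record_event("alice", "LOGIN_SUCCESS")
for _ in range(3):
    record_event("mallory", "LOGIN_FAILURE")
print(check_alarms(log))  # a real system would notify an administrator instead
```

In practice these roles are filled by the operating system's native auditing facilities and dedicated monitoring tools rather than hand-written code, but the division of labor is the same.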
Intrusion detection is a specific form of monitoring both recorded information and real-time events to detect unwanted system access.

Accountability

Auditing and monitoring are required factors for sustaining and enforcing accountability. Monitoring is the programmatic means by which subjects are held accountable for their actions while authenticated on a system. Without an electronic account of a subject's actions, it is not possible to correlate IT activities, events, and occurrences with subjects. Monitoring is also the process by which unauthorized or abnormal activities are detected on a system. It is needed to detect malicious actions by subjects, attempted intrusions, and system failures and to reconstruct events, provide evidence for prosecution, and produce problem reports and analysis. Auditing and logging are usually native features of an operating system and most applications and services. Thus, configuring the system to record information about specific types of events is fairly straightforward.

Auditing is also used to monitor the health and performance of a system by recording the activities of subjects and objects as well as the core system functions that maintain the operating environment and the security mechanisms. The audit trails created by recording system events to logs can be used to evaluate the health and performance of a system. System crashes can indicate faulty programs, corrupt drivers, or intrusion attempts. The event logs leading up to a crash can often be used to discover the reason the system failed. Log files provide an audit trail for re-creating, step by step, the history of an event, intrusion, or system failure.

In most cases, when sufficient logging and auditing is enabled to monitor a system, so much data is collected that the important details get lost in the bulk.
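One simple way to trim a large log down to the entries that matter is to filter on severity. A toy sketch, with invented log lines and an invented severity scheme:

```python
# Hypothetical raw log lines in "SEVERITY component message" form.
raw_log = [
    "INFO  scheduler job 141 completed",
    "WARN  auth password expiring for user rhodes",
    "ALERT auth repeated login failures for user mallory",
    "INFO  backup nightly backup finished",
    "ALERT fs unexpected change to /etc/passwd",
]

def reduce_log(lines, keep=("ALERT",)):
    """Data reduction: discard routine entries, keep the severities of interest."""
    return [line for line in lines if line.split()[0] in keep]

for line in reduce_log(raw_log):
    print(line)
```

Real log formats and severity schemes vary by platform, but the principle of filtering out the routine bulk before a human reviews the remainder is the same.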
The art of data reduction is crucial when working with large volumes of monitoring data. There are numerous tools to search through log files for specific events or ID codes. However, for true automation and even real-time analysis of events, an intrusion detection system (IDS) is required. IDS solutions are discussed in Chapter 2, "Attacks and Monitoring."

Compliance

Auditing is also commonly used for compliance testing, or compliance checking. Verification that a system complies with laws, regulations, baselines, guidelines, standards, and policies is an important part of maintaining security in any environment. Compliance testing ensures that all of the necessary and required elements of a security solution are properly deployed and functioning as expected. Compliance checks can take many forms, such as vulnerability scans and penetration testing. They can also be performed using log analysis tools to determine whether any vulnerabilities for which countermeasures have been deployed have been realized on the system.

Audits can be performed from one of two perspectives: internal or external. Internal audits are performed by organizational employees from inside the IT environment who are aware of the implemented security solutions. External audits are performed by independent auditors from outside the IT environment who are not familiar with the implemented security solutions. Insurance agencies, accounting firms, or even the organization itself hire external auditors to test the validity of security claims. The goal of both internal and external auditing is to measure the effectiveness of the deployed security solution.

Audit Time Frames

The frequency of an IT infrastructure security audit or security review is based on risk.
When performing risk analysis, you must determine whether sufficient risk exists to warrant the expense of, and the interruption caused by, a security audit on a more or less frequent basis. In any case, the frequency of audit reviews should be clearly defined in the security guidelines or standards of the organization, and once defined in the formalized security infrastructure, it should be adhered to. Without regular assessments of the state of security of an IT infrastructure, there is no way to know how secure the environment is until an attack is either successful or thwarted. Waiting until the battle to find out whether you will succeed is a very poor business strategy.

As with many other aspects of deploying and maintaining security, security audits and effectiveness reviews are often viewed as key elements in displaying due care. If senior management fails to enforce compliance with regular periodic security reviews, they will be held accountable and liable for any asset losses that occur because of security breaches or policy violations.

Audit Trails

Audit trails are the records created by recording information about events and occurrences into a database or log file. They are used to reconstruct an event, to extract information about an incident, to prove or disprove culpability, and much more. They allow events to be examined or traced in forward or reverse order. This flexibility is useful when tracking down problems, coding errors, performance issues, attacks, intrusions, security breaches, and other security policy violations. Using audit trails is a passive form of detective security control. They serve as a deterrent in the same manner that closed-circuit television (CCTV) or security guards do: if attackers know they are being watched and their activities recorded, they are less likely to perform the illegal, unauthorized, or malicious activity.
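A single audit trail record is just a structured entry capturing who did what, to what, when, and with what result. One hypothetical shape for such a record (the field names are illustrative, not a standard):

```python
import json
from datetime import datetime, timezone

def audit_record(subject, action, obj, outcome):
    """Build one audit trail entry; the field names are illustrative only."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "subject": subject,    # who performed the action
        "action": action,      # what was attempted
        "object": obj,         # what it was attempted on
        "outcome": outcome,    # e.g., GRANTED or DENIED
    }

# Appending such records as JSON lines produces a trail that can be
# examined or traced in forward or reverse order later.
entry = audit_record("jsmith", "READ", "/payroll/q3.xlsx", "DENIED")
print(json.dumps(entry))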
Audit trails are also essential as evidence in the prosecution of criminals. They can often be used to produce a before-and-after picture of the state of resources, systems, and assets. This in turn helps to identify whether a change or alteration is the result of an action by a user, an action by the OS or software, or some other source (such as a hardware failure).

Accountability is maintained for individual subjects through the use of audit trails. When the activities of users, and the events caused by their actions while online, are recorded, individuals can be held accountable for their actions. This directly promotes good user behavior and compliance with the organization's security policy. Users who are aware that their IT activities are being recorded are less likely to attempt to circumvent security controls or to perform unauthorized or restricted activities.

Audit trails give system administrators the ability to reconstruct events long after they have passed. When a security violation is detected, the conditions and system state leading up to the event, during the event, and after the event can be reconstructed through a close examination of the audit trail.

Audit trails offer details about recorded events. A wide range of information can be recorded in log files, including time, date, system, user, process, and type of error/event. Log files can even capture the memory state or the contents of memory. This information can help pinpoint the cause of the event. Using log files for this purpose is often labeled problem identification. Once a problem is identified, performing problem resolution is little more than following up on the disclosed information. Audit trails record system failures, OS bugs, and software errors as well as abuses of access, violations of privileges, attempted intrusions, and many forms of attacks.
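Reconstructing the lead-up to an event usually means pulling every record in a time window around it. A toy sketch of that idea, with an invented trail and timestamps:

```python
from datetime import datetime, timedelta

# A tiny invented audit trail: (timestamp, event description) pairs.
trail = [
    (datetime(2005, 3, 1, 2, 58), "service restart"),
    (datetime(2005, 3, 1, 3, 1), "privilege escalation attempt"),
    (datetime(2005, 3, 1, 3, 2), "system crash"),
    (datetime(2005, 3, 1, 9, 0), "normal logon"),
]

def window(records, incident_time, minutes=5):
    """Return every record within +/- `minutes` of the incident, in order."""
    delta = timedelta(minutes=minutes)
    return [r for r in records if abs(r[0] - incident_time) <= delta]

crash = datetime(2005, 3, 1, 3, 2)
for when, what in window(trail, crash):
    print(when.isoformat(), what)
```

Note that this kind of reconstruction only works if every system's clock is trustworthy, which is why time synchronization matters so much for auditing.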
Intrusion detection is a specialized form of problem identification through the use of audit trails.

Note: If auditing records or logs are transmitted across a network from a sentry agent to a collector warehouse, the transaction should be encrypted. Log and audit information should never be allowed on the network in cleartext.

Once a security policy violation or a breach occurs, the source of that violation should be determined. If it is possible to track down the individual who perpetrated the activity, they should be reprimanded or terminated (if an employee) or prosecuted (if an external intruder). In every case where a true security policy violation or breach has occurred (especially if a loss can be pinpointed), you should report the incident to your local authorities, possibly the FBI, and, if the violation occurred online, to one or more Internet incident tracking organizations.

Note: You should time-synchronize all systems against a centralized or trusted public time server. This ensures that all audit logs are in sync so you can perform dependable and secure logging activities.

Reporting Concepts

The actual formats used by an organization to produce reports from audit trails vary greatly. However, all reports should address a few central concepts: the purpose of the audit, the scope of the audit, and the results discovered or revealed by the audit. In addition to these foundational concepts, audit reports often include many details specific to the environment, such as time, date, and the specific systems involved. Audit reports can include a wide range of content that focuses on problems/events/conditions, standards/criteria/baselines, causes/reasons, impact/effect, or solutions/recommendations/safeguards.

Reporting Format

Audit reports should have a structure or design that is clear, concise, and objective.
It is common for the auditor to include opinions or recommendations for responding to the content of a report, but the actual findings of the audit report should be based on fact and evidence from audit trails. Audit reports include sensitive information and should be assigned a classification label and handled appropriately. Within the hierarchy of the organization, only those people with sufficient privilege should have access to audit reports. An audit report may also be prepared in various forms according to the hierarchy of the organization; each version should provide only the details relevant to the position of the staff members who will have access to it. For example, senior management does not need to know all of the minute details of an audit report. Therefore, the audit report for senior management is much more concise and offers more of an overview or summary of the findings. An audit report for the IT manager or the security administrator should be very detailed and include all available information on the events contained in it.

Reporting Time Frames

The frequency of producing audit reports is based on the value of the assets and the level of risk. The more valuable the asset and the higher the risk, the more often an audit report should be produced. Once an audit report is completed, it should be submitted to the assigned recipient (as defined in the security policy documentation) and a signed confirmation of receipt should be filed. When an audit report contains information about serious security violations or performance issues, the report should be escalated to higher levels of management for review, notification, and assignment of a response. Keep in mind that, in a formalized security infrastructure, only the higher levels of management have any decision-making power.
All entities at the lower end of the structure must follow prescribed procedures and follow instructions.

Sampling

Sampling, or data extraction, is the process of extracting elements from a large body of data in order to construct a meaningful representation or summary of the whole. In other words, sampling is a form of data reduction that allows an auditor to quickly determine the important issues or events from an audit trail. There are two forms of sampling: statistical and nonstatistical. Statistical sampling is performed by an auditing tool that uses precise mathematical functions to extract meaningful information from a large volume of data. There is always a risk that sampled data is not an accurate representation of the whole body of data and that it may mislead auditors and managers; statistical sampling can be used to measure that risk.

Clipping, a form of sampling, selects only those error events that cross a threshold known as the clipping level. Clipping levels are widely used in the process of auditing events to establish a baseline of system or user activity that is considered routine. If this baseline is exceeded, an unusual-event alarm is triggered. This works especially well for detecting when individuals exceed their authority, when there are too many people with unrestricted access, and for serious intrusion patterns.

Clipping levels are often associated with a form of mainframe auditing known as violation analysis. In violation analysis, an older form of auditing, the environment is monitored for occurrences of errors. A baseline of errors is expected and known, and this level of common errors is labeled the clipping level. Any errors that exceed the clipping level threshold trigger a violation, and details about such events are recorded into a violation record for later analysis.

Nonstatistical sampling can be described as random sampling or sampling at the auditor's discretion.
It offers neither assurance of an accurate representation of the whole body of data nor a gauge of the sampling risk. Nonstatistical sampling is less expensive, requires less training, and does not require computer facilities.

Both statistical and nonstatistical sampling are accepted as valid mechanisms for creating summaries or overviews of large bodies of audit data. However, statistical sampling is more reliable.

Record Retention

As the term implies, record retention involves retaining and maintaining important information. An organization should have a policy that defines what information is maintained and for how long. As it applies to the security infrastructure, in most cases the records in question are audit trails of user activity, which may include file and resource access, logon patterns, e-mail, and the use of privileges.

Retention Time Frames

Depending upon your industry and your relationship with the government, you may need to retain records for three years, seven years, or indefinitely. In most cases, a separate backup mechanism is used to create archived copies of sensitive audit trails and accountability information. This allows the main data backup system to periodically reuse its media without violating the requirement to retain audit trails and the like.

If data about individuals is being retained by your organization, the employees and customers need to be made aware of it (such as in a conditional employment agreement or a use agreement). In many cases, the notification requirement is a legal issue, whereas in others it is simply a courtesy. In either case, it is a good idea to discuss the issue with a lawyer.

Media, Destruction, and Security

The media used to store or retain audit trails must be properly maintained. This includes taking secure measures for the marking, handling, storage, and destruction of media.
For details on handling sensitive media, please see the section titled "Sensitive Information and Media" in Chapter 13, "Administrative Management."

Retained records should be protected against unauthorized and untimely destruction, against alteration, and against hindrances to availability. Many of the same security controls used to protect online resources and assets can be imposed to protect audit logs, audit trails, audit reports, and backup media containing audit information.

Access to audit information should be strictly controlled. Audit information can be used in inference attacks to discover information about higher classifications of data; thus, audit logs containing records about highly confidential assets should be handled in the same secure manner as the actual assets. Another way of stating this is that when an audit log is created, you are creating another asset entity with the same security needs as the original audited asset.

As the value of assets and audit data goes up and risk increases, so does the need for increased security and more frequent backups of the audit information. Audit data should be treated with the same security precautions as all other high-classification data within an IT environment. It should be protected by physical and logical security controls, it should be audited, it should be regularly backed up, and the backup media should be stored off site in a controlled facility. The backup media hosting audit data should be protected from loss, destruction, alteration, and unauthorized physical and logical access. The integrity of audit data must be maintained and protected at all times. If audit data is not accurate, it is useless.

External Auditors

It is often necessary to test or verify the security mechanisms deployed in an environment.
The test process is designed to ensure that the requirements dictated by the security policy are followed and that no significant holes or weaknesses exist in the deployed security solution. Many organizations conduct independent audits by hiring outside or external security auditors to check the security of their environment. External audits provide a level of objectivity that an internal audit cannot.

An external auditor is given access to the company's security policy and the authorization to inspect every aspect of the IT and physical environment. Thus, the auditor must be a trusted entity. The goal of the audit activity is to obtain a final report that details any findings and suggests countermeasures when appropriate. However, an audit of this type can take a considerable amount of time (weeks or months, in fact) to complete. During the course of the audit, the auditor may issue interim reports. An interim report is a written or verbal report given to the organization about a discovered security weakness that needs immediate attention. Interim reports are issued whenever a problem or issue is too severe to wait until the final audit report is issued.

Once the auditor completes the investigation, an exit conference is held. During the exit conference, the auditor presents the findings and discusses their resolution with the affected parties. However, only after the exit conference is over and the auditor has left the premises does the auditor write and submit the final audit report to the organization. This allows the final audit report to be as unaffected as possible by office politics and coercion. After the final audit report is received, the internal auditors should verify whether or not the recommendations in the report are carried out.
However, it is the responsibility of senior management to select which recommendations to follow and to delegate the implementation to the security team.

Monitoring

Monitoring is a form of auditing that focuses on the active review of the audited information or the audited asset. For example, you would audit the activity of failed logons, but you would monitor CPU performance. Monitoring is most often used in conjunction with performance, but it can be used in a security context as well. Monitoring can focus on events, subsystems, users, hardware, software, or any other object within the IT environment.

A common implementation of monitoring is known as illegal software monitoring. This type of monitoring is used to watch for attempted or successful installation of unapproved software, use of unauthorized software, or unauthorized use of approved software (i.e., attempts to bypass the restrictions of the security classification hierarchy). Monitoring in this fashion reduces the likelihood of a virus or Trojan horse being installed or of software circumventing the security controls imposed.

Monitoring Tools and Techniques

The actual tools and techniques used to perform monitoring vary greatly between environments and system platforms. However, there are several common forms found in most environments. These include warning banners, keystroke monitoring, traffic analysis and trend analysis, and other monitoring tools.

Warning Banners

Warning banners are used to inform would-be intruders, or those who attempt to violate the security policy, that their intended activities are restricted and that any further activities will be audited and monitored. A warning banner is basically an electronic equivalent of a no trespassing sign. In most situations, the wording of the banners is important from a legal standpoint.
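As a concrete illustration, many Unix-like systems display such a banner before login; OpenSSH, for instance, displays the file named by the `Banner` directive in `sshd_config`. The wording below is illustrative only and is not vetted legal language:

```
WARNING: This system is for authorized use only. By continuing, you
consent to having all of your activity on this system monitored and
recorded. Unauthorized access is prohibited, and evidence of such
access may be provided to law enforcement.
```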
Be sure to consult with your attorneys about the proper wording for your banners. Only through valid warnings (i.e., clear explanations that unauthorized access is prohibited and that any such activity will be monitored and recorded) can most intrusions and attacks be prosecuted. Both authorized and unauthorized users should be informed when their activities are being logged. Most authorized users should assume such, and often their employment agreements will include specific statements indicating that any and all activity on the IT infrastructure may be recorded.

Keystroke Monitoring

Keystroke monitoring is the act of recording the key presses a user performs on a physical keyboard. The act of recording can be visual (such as with a video recorder) or logical/technical (such as with a capturing hardware device or a software program). In most cases, keystroke monitoring is used for malicious purposes. Only in extreme circumstances and highly secured environments is keystroke monitoring actually employed as a means to audit and analyze the activity of users at the keyboard. Keystroke monitoring can be extremely useful to track the keystroke-by-keystroke activities of physical intruders in order to learn the kinds of attacks and methods used to infiltrate a system.

Keystroke monitoring is often compared to wiretapping. There is some debate about whether keystroke monitoring should be restricted and controlled in the same manner as telephone wiretaps. Because there is no legal precedent set yet, many organizations that employ keystroke monitoring notify authorized and unauthorized users of such monitoring through employment agreements, security policies, and warning banners.

Traffic Analysis and Trend Analysis

Traffic analysis and trend analysis are forms of monitoring that examine the flow of packets rather than the actual content of packets.
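A small sketch shows how much flow metadata alone can reveal. The addresses and byte counts below are invented; no packet payload is ever inspected:

```python
from collections import Counter

# Hypothetical flow records: (source, destination, bytes transferred).
flows = [
    ("10.0.0.5", "10.0.0.20", 120_000),
    ("10.0.0.7", "10.0.0.20", 95_000),
    ("10.0.0.9", "10.0.0.20", 88_000),
    ("10.0.0.5", "10.0.0.31", 4_000),
]

# Tally bytes per destination; a host that receives heavy traffic from
# many sources is likely one of the primary servers.
per_dest = Counter()
for src, dst, nbytes in flows:
    per_dest[dst] += nbytes

print(per_dest.most_common(1))  # [('10.0.0.20', 303000)]
```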
Traffic and trend analysis can be used to infer a large amount of information, such as primary communication routes, sources of encrypted traffic, location of primary servers, primary and backup communication pathways, amount of traffic supported by the network, typical direction of traffic flow, frequency of communications, and much more.

Other Monitoring Tools

There is a wide range of available tools to perform monitoring. Many are automated and perform the monitoring activities in real time. Some monitoring tools are developed in-house and are ad hoc implementations focusing on a single type of observation. Most monitoring tools are passive. This means they cause no effect on the monitored activity, event, or traffic and make no original transmissions of their own.

A common example of a tool for monitoring physical access is closed-circuit television (CCTV). CCTV can be configured to automatically record the viewed events onto tape for later review, or it can be watched in real time by personnel looking for unwanted, unauthorized, or illegal activities.

Failure recognition and response is an important part of monitoring and auditing; otherwise, what is the point of performing the monitoring and auditing activities? On systems that use manual review, failure recognition is the responsibility of the observer or auditor. In order to recognize a failure, one must understand what is normal and expected. When the monitored or audited events stray from this standard baseline, a failure, breach, intrusion, error, or problem has occurred and a response must be initiated.

Automated monitoring and auditing systems are usually programmed to recognize failures. Failure recognition can be based on signatures or be knowledge based.
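The two recognition styles can be sketched side by side. The signature strings, log lines, and baseline numbers below are invented for illustration:

```python
# Signature-based recognition: match events against known-bad patterns.
SIGNATURES = ("Failed password for root", "segfault", "audit log cleared")

def matches_signature(log_line: str) -> bool:
    """Return True if the line contains any known attack signature."""
    return any(sig in log_line for sig in SIGNATURES)

# Knowledge-based recognition: flag activity that strays too far from a
# learned baseline of normal, expected behavior.
BASELINE_EVENTS_PER_MIN = 40.0
TOLERANCE = 15.0

def exceeds_baseline(events_per_min: float) -> bool:
    """Return True if the observed rate is outside the routine baseline."""
    return events_per_min > BASELINE_EVENTS_PER_MIN + TOLERANCE

print(matches_signature("sshd[411]: Failed password for root"))  # True
print(exceeds_baseline(45.0))  # False
```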
For a discussion of these two mechanisms, please see the intrusion detection discussion in Chapter 2.

Whether recognition is manual or automated, the first step in a response is to notify the authority responsible for sustaining security and handling the problem or breach. Often this is the local administrator, the local manager, or the local security professional. The notification usually takes the form of an alarm or warning message. Once notification is performed, the responsible personnel (i.e., the administrator, manager, or security professional) or the automated tool can perform a response. When a person is responsible for the response, they can adapt the response to the specific condition and situation. For this reason, personnel-controlled responses are often the most effective. Automated tool responses are typically predefined response scripts that are usually much broader in scope than necessary. Automated tools are excellent for quick and efficient lockdown, but often the countermeasure or response imposed by a tool will significantly affect the ability of the system to continue to support and perform productive work. Whenever an automated tool response is deployed, personnel should be notified so the response can be fine-tuned and the network can be returned to normal as soon as possible.

Penetration Testing Techniques

In security terms, a penetration occurs when an attack is successful and an intruder is able to breach the perimeter of your environment. The breach can be as small as reading a few bits of data from your network or as big as logging in as a user with unrestricted privileges. One of the primary goals of security is to prevent penetrations.

One common method to test the strength of your security measures is to perform penetration testing.
Penetration testing is a vigorous attempt to break into a protected network using any means necessary. It is common for organizations to hire external consultants to perform the penetration testing so the testers are not privy to confidential elements of the security configuration, network design, and other internal secrets.

Planning Penetration Testing

Penetration testing is the art and science of evaluating implemented safeguards. It is just another name for launching intrusion attempts and attacks against a network. The activity in either case is exactly the same, but penetration testing is performed with the approval and knowledge of senior management, by security professionals, in a controlled and monitored environment. Intrusion attacks, by contrast, are performed by malicious users intent on violating the security of your IT environment. If an internal user performs a test against a security measure without authorization, it will be viewed as an attack rather than as a penetration test.

Penetration testing will typically include social engineering attacks, network and system configuration review, and environment vulnerability assessment. Vulnerability analysis, or vulnerability assessment, is an element or phase within penetration testing in which networks or hosts are evaluated or tested to determine whether or not they are vulnerable to known attacks.

Penetration testing can be performed using automated attack tools or manually. Automated attack tools range from professional vulnerability scanners to wild, underground cracker/hacker tools discovered on the Internet.
Manual attacks often employ tools as well, such as penetration suites like ISS, Ballista, and SATAN, but much more onus is placed on the attacker to know the details involved in perpetrating an attack.

It is generally considered unethical and a poor business practice to hire ex-hackers, especially those with a criminal record, for any security activity, including security assessment, penetration testing, or ethical hacking.

Penetration testing should be performed only with the consent and knowledge of the management staff. Performing unapproved security testing could result in productivity loss, trigger emergency response teams, or even cost you your job. However, even with the full consent of senior management, your security assessment activities should fall short of actual damage to the target systems. Subversion or target destruction is never a valid or ethical activity of a penetration test. Furthermore, demonstration of the effect of flaws, weaknesses, and vulnerabilities should not be included as part of a penetration test. If such evidence is required, it should be produced only on a dedicated and isolated lab system created for the sole purpose of exploit demonstration.

Regularly staged penetration attempts are a good way to accurately judge the security mechanisms deployed by an organization. Penetration testing may also reveal areas where patches or security settings are insufficient and where new vulnerabilities have developed.

Penetration Testing Teams

Penetration testing teams can have various levels of knowledge about the environment to be evaluated. The three commonly recognized knowledge levels are zero, partial, and full. Zero knowledge teams know nothing about the site except for basic information, such as the domain name and company address.
An attack by a zero knowledge team most closely resembles a real external hacker attack because all information about the environment must be obtained from scratch. A partial knowledge team is given an inventory of the hardware and software used at the site and possibly network design and configuration details. The team is then able to focus its efforts on attacks and vulnerabilities specific to the actual hardware and software in use at the site. A full knowledge team is completely aware of every aspect of the environment, down to the patches and upgrades installed and the exact security configurations. The normal security administration staff can be considered a full knowledge team. Unfortunately, a full knowledge team is the least preferred type of penetration testing team because its members are often biased and may have blind spots. A full knowledge team knows what has been secured, so it may fail to properly test every possibility.

The TCSEC has several suggestions on how to conduct penetration testing with teams. In the NCSC/DOD/NIST Orange Book, the TCSEC recommends that appropriate personnel be well versed in the Flaw Hypothesis Methodology of penetration testing. With flaw hypothesis, general-purpose operating systems are assessed using an open box testing technique. Team members are required to document and analyze potential flaws in the system; essentially, they hypothesize any flaws that may exist. Using a system of probability, team members then prioritize the list of potential flaws based on the likelihood that each flaw exists, its vulnerability and exploitability if it does, and the amount of control or compromise it could inflict on the system. This prioritized list becomes the basis for the team's testing initiative.

Ethical Hacking

Ethical hacking is often used as another name for penetration testing. However, ethical hacking is not exactly the same as penetration testing.
Ethical hacking is a security assessment process that employs hacking techniques and tools. When an ethical hacker is engaged as part of your assessment tactics, it is important to ensure that the person does not have a conflict of interest, such as also being a provider, reseller, or consultant for security products or value-added services. An ethical hacker should not exploit discovered vulnerabilities. Writing to, altering, or damaging a target of evaluation is a violation of the concept of ethical hacking and bleeds into the realm of unethical (and often criminal) hacking.

War Dialing

War dialing is the act of using a modem to search for a system that will accept inbound connection attempts. A war dialer can be a typical computer with a modem attached running a war dialer program, or it can be a stand-alone device. In either case, it is used to systematically dial phone numbers and listen for a computer carrier tone. When a computer carrier tone is detected, the war dialer adds the number to the report it generates at the end of the search process. A war dialer can be used to search any range of numbers, such as all 10,000 numbers within a specific prefix or all 10,000,000 within a specific area code.

War dialing is often used to locate unauthorized modems that have been installed on client systems within an otherwise secured network and have been inadvertently configured to answer inbound calls. An attacker can narrow the range of phone numbers to scan by learning one or more of the phone numbers used by the organization. In most cases, the prefix is the same for all numbers within the organization if it is located within the same building or a small geographic area. Thus, the war dialing search could be limited to 10,000 numbers.
If several of the organization's phone numbers are sequentially close, the attacker may focus the war dialing search on a group of only a few hundred numbers.

War dialing as a penetration test is a useful tool to ensure that no unauthorized answering modems are present within your organization. In most cases, you will have a definitive list of the phone numbers controlled by or assigned to your organization. Such a list provides a focused plan of testing for war dialing.

Countermeasures against malicious war dialing include imposing strong remote access security (primarily in the arena of authentication), ensuring that no unauthorized modems are present, and using callback security, protocol restriction, and call logging.

Sniffing and Eavesdropping

Sniffing is a form of network traffic monitoring. Sniffing often involves the capture or duplication of network traffic for examination, re-creation, and extraction. It can be used both as a penetration test mechanism and as a malicious attack method. Sniffing is often an effective tool in capturing or extracting data from nonencrypted network traffic streams. Passwords, usernames, IP addresses, message contents, and much more can be captured using software- or hardware-based sniffers.

Sniffers can capture either only the traffic directed to their host system's IP address or all traffic passing over the local network segment. To capture all traffic on a local network segment, the sniffer's NIC must be placed into promiscuous mode. Placing a NIC into promiscuous mode grants the operator the ability to obtain a complete statistical understanding of network activity.

There are many commercial, freeware, and hacker-ware sniffers available. These include Etherpeek, WinDump, Ethereal, sniffit, and Snmpsniff.

The primary countermeasure to sniffing attacks is to use encrypted traffic.
Sniffing can also be thwarted by preventing unwanted software from being installed, by locking down all unused ports, and by using an IDS or a vulnerability scanner that is able to detect the telltale signs of a sniffer product.

Eavesdropping is just another term for sniffing. However, eavesdropping can include more than just capturing and recording network traffic. Eavesdropping also includes recording or listening to audio communications, faxes, radio signals, and so on. In other words, eavesdropping is listening in on, recording, capturing, or otherwise becoming aware of the contents of any form of communication.

Radiation Monitoring

Radiation monitoring is a specific form of sniffing or eavesdropping that involves the detection, capture, and recording of radio frequency signals and other radiated communication methods, including sound and light. Radiation monitoring can be as simple as using a hidden microphone in a room to record voices or as sophisticated as using a camera to record the light reflections in a room in order to reconstruct the contents of a visual computer display that is otherwise hidden from direct viewing. Radiation monitoring also includes the tapping of radio frequencies often used by cell phones, wireless network interfaces, two-way radios, radio and television broadcasts, short-wave radios, and CBs. In addition, it includes the tapping of a wide range of electrical signal variations that may not directly offer information but can be used in inference attacks. These include the changes in electrical usage by an entire computer system, a hard drive, a modem, a network interface, a switch, or a router.
Depending on the device, the electromagnetic signals produced by hardware can be captured and used to re-create the data, or at least metadata about the data and the communication session.

TEMPEST is a standard that defines the study and control of electronic signals produced by various types of electronic hardware, such as computers, televisions, and phones. Its primary goal is to prevent electromagnetic interference (EMI) and radio frequency (RF) radiation from leaving a strictly defined area so as to eliminate the possibility of external radiation monitoring, eavesdropping, and signal sniffing. TEMPEST defines control zones, which generally consist of rooms or facilities that are enclosed with copper or some other kind of shielding to prevent EMI/RF from either leaving or entering the facility. Such facilities are also equipped with devices that capture, block, mask, or disrupt radiated signals. For example, TEMPEST may use a form of white noise to broadcast an unintelligible, worthless signal to mask the presence of a real signal. TEMPEST countermeasures are designed to protect against undetectable passive monitoring of EMI and RF.

Dumpster Diving

Dumpster diving is the act of digging through the refuse, remains, or leftovers from an organization or operation in order to discover or infer confidential information. Dumpster diving is primarily associated with digging through actual garbage, but it can also include searching, investigating, and reverse-engineering an organization's website, commercial products, and publicly accessible literature (such as financial statements, brochures, product information, shareholder reports, etc.).

Scavenging is a form of dumpster diving performed electronically. Online scavenging searches for useful information in the remnants of data left over after processes or tasks are completed.
This could include audit trails, log files, memory dumps, variable settings, port mappings, and cached data.

Dumpster diving and scavenging can be employed as a penetration test to discover how much information about your organization is carelessly discarded into the garbage or left around after closing a facility. Countermeasures to dumpster diving and scavenging include secure disposal of all garbage, which usually means shredding all documentation. Other safeguards include maintaining physical access control.

Social Engineering

A social engineering attack is an attempt by an attacker to convince an employee to perform an unauthorized activity to subvert the security of an organization. Often the goal of social engineering is to gain access to the IT infrastructure or the physical facility.

Social engineering is a skill by which an unknown person gains the trust of someone inside your organization. Adept individuals can convince employees that they are associated with upper management, technical support, the help desk, and so on. Once this deception is successful, the victim is often encouraged to make a change to their user account on the system, such as resetting their password. Other attacks include instructing the victim to open specific e-mail attachments, launch an application, or connect to a specific URL. Whatever the actual activity is, it is usually directed toward opening a back door that the attacker can use to gain access to the network.

Social engineering attacks do not occur exclusively over the phone; they can happen in person as well. Malicious individuals impersonating repair technicians, upper management, or traveling company managers can intimidate some employees into performing activities that violate security.
Countermeasures to in-person social engineering attacks include verifying the identity of the intruder/visitor via a secured photograph, contacting their source company, or finding a local manager who recognizes the individual.

Social engineering attacks can be used as penetration tests. These sorts of tests will help determine how vulnerable your frontline employees are to individuals adept at lying. For a detailed discussion of social engineering attacks, see Chapter 4, "Communications Security and Countermeasures."

Problem Management

Once auditing, monitoring, and penetration testing have occurred, the next step is problem management. Problem management is exactly what it sounds like: a formalized process or structure for resolving problems. For the most part, problem management is a solution developed in-house to address the various types of issues and problems encountered in your environment. Problem management is typically defined as having three goals or purposes:

- To reduce failures to a manageable level
- To prevent the occurrence or recurrence of a problem
- To mitigate the negative impact of problems on computing services and resources

Inappropriate Activities

Inappropriate activities are actions that may take place on a computer or over the IT infrastructure and that may not be actual crimes but are often grounds for internal punishments or termination. Some types of inappropriate activities include creating or viewing inappropriate content, sexual and racial harassment, waste, and abuse.

Inappropriate content can be defined as anything that is not related to and supportive of the work tasks of an organization. It includes, but is not limited to, pornography, sexually explicit material, entertainment, political data, and violent content.
Inappropriate content can be specified by example (by listing types of information deemed inappropriate) or by exclusion (by listing types of information deemed appropriate). The definition can also be extended to include personal e-mail that is not work related.

Keeping inappropriate content to a minimum requires several steps. First, it must be addressed as an objective in the security policy. Second, staff must receive awareness training in regard to inappropriate content. Third, content filtering tools can be deployed to filter data based on source or word content. It is not possible to programmatically prevent all inappropriate content, but sufficient penalties can be levied against violations, along with regular auditing and monitoring, to keep its level to a minimum.

Sexual and racial harassment is a form of inappropriate content or activity on company equipment. Sexual harassment can take many forms, including distribution of images, videos, audio clips, or text information (such as jokes). Sexual and racial harassment controls include awareness training and content filtering.

Waste of resources can have a direct effect on the profitability of an organization. If storage space, computing power, or networking bandwidth is consumed by inappropriate or non-work-related data, the organization is losing money on non-profit-producing activities. Some of the more common examples of resource waste include operating a personal business over company equipment, accessing and distributing inappropriate data (pornography, entertainment, music, videos, etc.), and aimlessly surfing the Internet. Just as with inappropriate material, resource waste can be reduced but not eliminated.
Some of the primary means to reduce waste include user awareness training, activity monitoring, and content filtering.

Abuse of rights and privileges is the attempt to perform activities or to gain access to resources that are restricted or assigned to a higher classification and access level. When access is gained inappropriately, the confidentiality of data is violated and sensitive information can be disclosed. Countermeasures to abuse include strong implementations of access controls and activity logging.

Indistinct Threats and Countermeasures

Not all problems that an IT infrastructure will face are recognizable threats with definitive countermeasures. There are numerous vulnerabilities for which there is no immediate or distinct threat, and against such threats there are few countermeasures. Many of these vulnerabilities lack direct-effect countermeasures, or the deployment of available countermeasures offers little in the way of risk reduction.

Errors and Omissions

One of the most common vulnerabilities, and the hardest to protect against, is the occurrence of errors and omissions. Errors and omissions occur because humans interact with, program, control, and provide data for IT. There are no direct countermeasures to prevent all errors and omissions. Some safeguards against errors and omissions include input validation and user training. However, these mechanisms offer only a minimal reduction in the overall errors and omissions encountered in an IT environment.

Fraud and Theft

Fraud and theft are criminal activities that can be perpetrated over computers or are made possible by computers. Most of the access controls deployed in a secured environment will reduce fraud and theft, but not every form of these crimes can be predicted and protected against.
Both internal authorized users and external unauthorized intruders can exploit your IT infrastructure to perform various forms of fraud and theft. Maintaining an intensive auditing and monitoring program and prosecuting all criminal incidents will help reduce fraud and theft.
Collusion
Collusion is an agreement among multiple people to perform an unauthorized or illegal action. It is hindered by separation of duties, restricted job responsibilities, audit logging, and job rotation, all of which reduce the likelihood that a coworker will be willing to collaborate on an illegal or abusive scheme because of the higher risk of detection. However, these safeguards are not primarily directed toward collusion prevention; the reduction of collusion is simply a side benefit of these security controls.
Sabotage
Employee sabotage can become an issue if an employee is knowledgeable enough about the IT infrastructure of an organization, has sufficient access to manipulate critical aspects of the environment, and has become disgruntled. Employee sabotage occurs most often when an employee suspects they will be terminated without just cause. This is one important reason terminations should be handled swiftly, including disabling all access to the infrastructure (IT and physical) and escorting the ex-employee off the premises. Safeguards against employee sabotage include intensive auditing, monitoring for abnormal or unauthorized activity, keeping lines of communication open between employees and managers, and properly compensating and recognizing employees for excellence and extra work.
Loss of Physical and Infrastructure Support
The loss of physical and infrastructure support can be caused by power outages, natural disasters, communication interruptions, severe weather, loss of any core utility or service, disruption of transportation, strikes, and national emergencies.
It may result in IT downtime and almost always significantly reduces productivity and profitability for the duration of the event. It is nearly impossible to predict and protect against events that cause physical and infrastructure support loss. Disaster recovery and business continuity planning can provide restoration methods if the loss event is severe. In most cases, you must simply wait until the emergency or condition expires and things return to normal.
Chapter 14: Auditing and Monitoring
Unix Details
For the most part, the CISSP exam is product- and vendor-independent. However, there are a handful of issues specific to Unix that you should be aware of. If you have worked with Unix or even Linux, most of these items will be simple review. If you have never touched a Unix system, then read the following items carefully.
On Unix systems, account information is stored in a password file (/etc/passwd), which must remain readable by all users. Older systems kept the password hashes in this same file, where any user could read and attack them, so modern systems move the hashes into a separate shadow file (/etc/shadow) that is readable only by the root account (or a dedicated group). Note that the shadow file is protected by file permissions, not by being hidden: merely hiding a file, as Windows does with hidden system files, is not a real security mechanism, because a simple modification of the directory command parameters reveals all hidden files.
The most privileged account on a Unix system is known as root. Other powerful accounts with similar levels of access are known as superusers. It is important to restrict access to these types of user accounts to only those people who absolutely need that level of access to perform their work tasks. The root and superuser accounts on Unix are similar to the administrator account(s) on Windows systems.
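One way to inspect how a file such as the password or shadow file is protected is to check its permission bits. The following hedged Python sketch (the helper name is invented for illustration) reports whether group members or other users have read access to a given file:

```python
import os
import stat

def readable_by_non_owner(path):
    """Return True if the file's group or 'other' read permission bits are set."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IRGRP | stat.S_IROTH))
```

On a typical system, /etc/passwd is world-readable, while /etc/shadow is restricted to root (mode 600, or 640 with a dedicated shadow group).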
Whenever possible, root and superuser access should be restricted to the local console so that these accounts cannot be used over a network connection.
The setuid and setgid mechanisms should be closely monitored and their uses logged. They alter the effective user or group identity under which a program runs and are thus used to manipulate access to resources. Their use by a nonadministrator, or by an administrator in an unapproved fashion, can indicate security policy violations.
Another important command to monitor is the mount command, which is used to attach a local or shared network filesystem to a directory (the mount point) in the local filesystem tree. This activity may seem like an efficient method to access network resources. However, it also makes malicious code and intruder attacks easier to implement. When the mount command is used without authorization, it could indicate an intrusion or an attempt to create a security loophole.
You should also consider monitoring the use of the following services and commands: systat, bootp, tftp, sunrpc, snmp, snmp-trap, and nfs.
Finally, Unix systems can be configured to trust other hosts so that authentication is not required: if the /etc/hosts.equiv file is present, users connecting from the hosts it lists can access the system through the Berkeley r-commands (such as rlogin and rsh) without supplying a password, gaining whatever level of access their matching local accounts provide. You can easily determine whether a system has been configured this way by checking for the /etc/hosts.equiv file. Removing this file disables this feature.
Malicious Hackers or Crackers
Malicious hackers or crackers are individuals who actively seek to infiltrate your IT infrastructure, whether for fame, access, or financial gain. These intrusions or attacks are important threats that your security policy and your entire security infrastructure are designed to repel.
Most safeguards and countermeasures protect against one specific threat or another, but it is not possible to protect against every possible threat that a cracker represents. Remaining vigilant about security, tracking activity, and implementing intrusion detection systems can provide a reasonable level of protection.
Espionage
Espionage is the malicious act of gathering proprietary, secret, private, sensitive, or confidential information about an organization for the express purpose of disclosing and often selling that data to a competitor or other interested organization (such as a foreign government). Espionage is sometimes committed by internal employees who have become dissatisfied with their jobs and have become compromised in some way. It can also be committed by a mole or plant placed in your organization to steal information for their primary secret employer. Countermeasures against espionage are to strictly control access to all nonpublic data, thoroughly screen new employee candidates, and efficiently track the activities of all employees.
Malicious Code
Malicious code is any script or program that performs an unwanted, unauthorized, or unknown activity on a computer system. Malicious code can take many forms, including viruses, worms, Trojan horses, documents with destructive macros, and logic bombs. Some form of malicious code exists for every type of computer or computing device. Monitoring and filtering the traffic that enters and travels within a secured environment is the only effective countermeasure to malicious code.
Traffic and Trend Analysis
The ongoing activities of a network and even a business environment may produce recognizable patterns. These patterns are known as trends or traffic patterns. A specific type of attack called traffic and trend analysis examines these patterns for what they reveal.
What is interesting about these types of examinations or attacks is that they reveal only the patterns of traffic, not the actual content of the traffic. Patterns and trends can reveal operations that occur on a regular basis or that are somehow considered important. For example, suppose an attacker watches your T1 line and notices that from 3 PM to approximately 4:30 PM every Friday your organization consumes nearly 80 percent of the capacity of the T1 line. The attacker can infer that the noticeable pattern is a file or data transfer activity that is important because it always occurs at the same time every week. Thus, the attacker can schedule an attack for 2:45 PM to take out the T1 or otherwise cause a denial of service to prevent the legitimate activity from occurring. Traffic and trend analysis can be used against both encrypted and nonencrypted traffic because patterns of traffic rather than contents are examined. Traffic and trend analysis can be used against physical environments and people as well. For example, a security guard can be watched to discover that it takes 12 minutes for him to walk the perimeter of a building and that for 8 of those minutes he is unable to see a section of fence where an intruder could easily climb.
Countermeasures to traffic and trend analysis include performing traffic and trend analysis on your own environment to see what types of information you are inadvertently revealing if anyone happens to be watching. You can alter your common and mission-critical activities so that they do not produce easily recognizable patterns. Other countermeasures to traffic and trend analysis are traffic padding, noise, and the use of covert channels.
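The padding countermeasure can be modeled as a constant-rate channel: every time slot carries one fixed-size cell, holding real data when any is queued and random filler otherwise, so an observer watching sizes and timing learns nothing from either. This toy sketch (invented names, no real networking) illustrates the idea:

```python
import os

def constant_rate_cells(pending, slots, cell_size=64):
    """Produce one fixed-size cell per time slot: queued data when available,
    random filler when the channel would otherwise be idle."""
    queue = list(pending)
    cells = []
    for _ in range(slots):
        if queue:
            # Truncate/zero-pad real data to the uniform cell size.
            payload = queue.pop(0)[:cell_size].ljust(cell_size, b"\x00")
            cells.append(("data", payload))
        else:
            cells.append(("pad", os.urandom(cell_size)))
    return cells
```

In practice the cells would also be encrypted, making data and filler indistinguishable to an outside observer.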
You can pad your communication channels by using traffic generation tools or by broadcasting noise whenever legitimate traffic is not occurring.
Initial Program Load Vulnerabilities
There is a period of time between the moment when a device is powered off and the moment when it is fully booted and operational during which the system is not fully protected by its security mechanisms. This time period is known as the initial program load (IPL), and it has numerous vulnerabilities. Without physical security, there are no countermeasures for IPL vulnerabilities. Anyone with physical access to a device can easily exploit its weaknesses during its bootup process. Some IPL vulnerabilities are accessing alternate boot menus, booting a portable operating system from a CD or floppy, and accessing CMOS to alter configuration settings, such as enabling or disabling devices.
Linux Details
Just as there are a few Unix issues to take notice of, there are a few Linux items as well:
Salts are added to Linux passwords to increase randomness and ensure uniqueness of the stored hash. Think of a salt as a random number appended to the password before hashing.
Low Water-Mark Mandatory Access Control (LOMAC) is a loadable kernel module for Linux designed to protect the integrity of processes and data. It is an OS security architecture extension or enhancement that provides flexible support for security policies.
Flask is a security architecture for operating systems, prototyped in the Fluke research OS, that includes flexible support for security policies. Some features of the Fluke prototype were ported into the OSKit (a programmer's toolkit for writing OSes). Many of the Flask architecture features were then incorporated into SE Linux (Security-Enhanced Linux), since it was built using the OSKit.
In short, the Fluke OS hosted the Flask prototype, Fluke features fed into the OSKit, and the OSKit was used to write SE Linux, which incorporates Flask features.
Summary
Maintaining operations security requires directed efforts in auditing and monitoring. These efforts give rise to detecting attacks and intrusions. This in turn guides the selection of countermeasures, encourages penetration testing, and helps to limit, restrict, and prevent inappropriate activities, crimes, and other threats.
Auditing is a methodical examination or review of an environment to ensure compliance with regulations and to detect abnormalities, unauthorized occurrences, or outright crimes. Secure IT environments rely heavily on auditing. Overall, auditing serves as the primary type of detective control used by a secure environment.
Audit trails are the records created by recording information about events and occurrences into a database or log file, and they can be used to, for example, reconstruct an event, extract information about an incident, and prove or disprove culpability. Audit trails provide a passive form of detective security control and serve as a deterrent in the same manner as CCTV or security guards do. In addition, they can be essential as evidence in the prosecution of criminals.
Record retention is the organizational policy that defines what information is maintained and for how long. In most cases, the records in question are audit trails of user activity, including file and resource access, logon patterns, e-mail, and the use of privileges.
Monitoring is a form of auditing that focuses more on the active review of the audited information or the audited asset. It is most often used in conjunction with performance, but it can be used in a security context as well.
The actual tools and techniques used to perform monitoring vary greatly between environments and system platforms, but there are several common forms found in most environments: warning banners, keystroke monitoring, traffic analysis and trend analysis, and other monitoring tools.
Penetration testing is a vigorous attempt to break into your protected network using any means necessary, and it is a common method for testing the strength of your security measures. Organizations often hire external consultants to perform the penetration testing so the testers are not privy to confidential elements of the security configuration, network design, and other internal secrets. Penetration testing methods can include war dialing, sniffing, eavesdropping, radiation monitoring, dumpster diving, and social engineering.
Inappropriate activities may take place on a computer or over the IT infrastructure and may not be actual crimes, but they are often grounds for internal punishments or termination. Inappropriate activities include creating or viewing inappropriate content, sexual and racial harassment, waste, and abuse.
An IT infrastructure can include numerous vulnerabilities for which there is no immediate or distinct threat, and against such threats there are few countermeasures. These types of threats include errors, omissions, fraud, theft, collusion, sabotage, loss of physical and infrastructure support, crackers, espionage, and malicious code. There are, however, steps you can take to lessen the impact of most of these.
Exam Essentials
Understand auditing.
Auditing is a methodical examination or review of an environment to ensure compliance with regulations and to detect abnormalities, unauthorized occurrences, or outright crimes. Secure IT environments rely heavily on auditing.
Overall, auditing serves as the primary type of detective control used by a secure environment.
Know the types or forms of auditing.
Auditing encompasses a wide variety of different activities, including the recording of event/occurrence data, examination of data, data reduction, the use of event/occurrence alarm triggers, log analysis, and response (some other names for these activities are logging, monitoring, examining alerts, analysis, and even intrusion detection). Be able to explain what each type of auditing activity involves.
Understand compliance checking.
Compliance checking (or compliance testing) ensures that all of the necessary and required elements of a security solution are properly deployed and functioning as expected. Compliance checks can take many forms, such as vulnerability scans and penetration testing. They can also involve auditing and be performed using log analysis tools to determine if any vulnerabilities for which countermeasures have been deployed have been realized on the system.
Understand the need for frequent security audits.
The frequency of an IT infrastructure security audit or security review is based on risk. You must determine whether sufficient risk exists to warrant the expense and interruption of a security audit on a more or less frequent basis. The frequency of audit reviews should be clearly defined and adhered to.
Understand that auditing is an aspect of due care.
Security audits and effectiveness reviews are key elements in displaying due care. Senior management must enforce compliance with regular periodic security reviews or they will be held accountable and liable for any asset losses that occur as a result.
Understand audit trails.
Audit trails are the records created by recording information about events and occurrences into a database or log file.
They are used to reconstruct an event, to extract information about an incident, and to prove or disprove culpability. Using audit trails is a passive form of detective security control, and audit trails are essential evidence in the prosecution of criminals.
Understand how accountability is maintained.
Accountability is maintained for individual subjects through the use of audit trails. Activities of users and events caused by the actions of users while online can be recorded so users can be held accountable for their actions. This directly promotes good user behavior and compliance with the organization's security policy.
Know the basic elements of an audit report.
Audit reports should all address a few basic or central concepts: the purpose of the audit, the scope of the audit, and the results discovered or revealed by the audit. They often include many other details specific to the environment, such as time, date, and specific systems. Audit reports can include a wide range of content that focuses on problems/events/conditions, standards/criteria/baselines, causes/reasons, impact/effect, or solutions/recommendations/safeguards.
Understand the need to control access to audit reports.
Audit reports include sensitive information and should be assigned a classification label and handled appropriately. Only people with sufficient privilege should have access to them. An audit report should also be prepared in various versions according to the hierarchy of the organization, providing only the details relevant to the position of the staff members for whom they are prepared.
Understand sampling.
Sampling, or data extraction, is the process of extracting elements of data from a large body of data in order to construct a meaningful representation or summary of the whole. There are two forms of sampling: statistical and nonstatistical.
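To make the statistical form concrete, a simple random sample of audit records might look like the following sketch (the function name is invented for illustration; real audit tools add stratification and confidence calculations):

```python
import random

def simple_random_sample(records, n, seed=None):
    """Draw n distinct records uniformly at random from a large log.
    A fixed seed makes the draw repeatable for review purposes."""
    return random.Random(seed).sample(records, n)
```

Because the draw is uniform, standard statistical formulas can then quantify how well the sample represents the whole body of data.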
Statistical sampling uses precise mathematical functions to extract meaningful information from a large volume of data; an auditing tool that applies such functions is performing statistical sampling. Statistical sampling can also be used to measure the risk associated with the sampling process itself.
Understand record retention.
Record retention is the act of retaining and maintaining important information. There should be an organizational policy that defines what information is maintained and for how long. The records in question are usually audit trails of user activity, including file and resource access, logon patterns, e-mail, and the use of privileges. Depending upon your industry and your relationship with the government, you may need to retain records for three years, seven years, or indefinitely.
Understand monitoring and the uses of monitoring tools.
Monitoring is a form of auditing that focuses more on the active review of the audited information or the audited asset. It's most often used in conjunction with performance, but it can be used in a security context as well. Monitoring can focus on events, subsystems, users, hardware, software, or any other object within the IT environment. Although the actual tools and techniques used to perform monitoring vary greatly between environments and system platforms, there are several common forms found in most environments: warning banners, keystroke monitoring, traffic analysis and trend analysis, and other monitoring tools. Be able to list the various monitoring tools and know when and how to use each tool.
Understand failure recognition and response.
On systems that use manual review, failure recognition is the responsibility of the observer or auditor. In order to recognize a failure, one must understand what is normal and expected.
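As a toy illustration of comparing observations against a baseline of "normal" behavior, the sketch below flags any metric that strays too far from the baseline mean (the threshold k, the single-metric baseline, and the function name are all simplifying assumptions for the example):

```python
import statistics

def strays_from_baseline(baseline, observation, k=3.0):
    """Flag an observation more than k standard deviations from the baseline mean."""
    mean = statistics.fmean(baseline)
    spread = statistics.stdev(baseline)
    return abs(observation - mean) > k * spread
```

Real monitoring baselines track many metrics at once and are updated over time, but the underlying comparison is the same.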
When the monitored or audited events stray from this standard baseline, then a failure, breach, intrusion, error, or problem has occurred and a response must be initiated.
Understand what penetration testing is and be able to explain the methods used.
Organizations use penetration testing to evaluate the strength of their security infrastructure. Know that it involves launching intrusion attacks on your network and be able to explain the methods used: war dialing, sniffing and eavesdropping, radiation monitoring, dumpster diving, and social engineering.
Know what TEMPEST is.
TEMPEST is a standard for the study and control of electronic signals produced by various types of electronic hardware, such as computers, televisions, phones, and so on. Its primary goal is to prevent EMI and RF radiation from leaving a strictly defined area so as to eliminate the possibility of external radiation monitoring, eavesdropping, and signal sniffing.
Know what dumpster diving and scavenging are.
Dumpster diving and scavenging involve digging through the refuse, remains, or leftovers from an organization or operation in order to discover or infer confidential information. Countermeasures to dumpster diving and scavenging include secure disposal of all garbage. This usually means shredding all documentation and incinerating all shredded material and other waste. Other safeguards include maintaining physical access control and monitoring privilege activity use online.
Understand social engineering.
A social engineering attack is an attempt by an attacker to convince an employee to perform an unauthorized activity to subvert the security of an organization. Often the goal of social engineering is to gain access to the IT infrastructure or the physical facility.
\nThe only way to protect against social engineering attacks is to thoroughly train users how to \nrespond and interact with communications as well as with unknown personnel.\nKnow what inappropriate activities are.\nInappropriate activities are actions that may take \nplace on a computer or over the IT infrastructure and that may not be actual crimes but are often \ngrounds for internal punishments or termination. Some types of inappropriate activities include \ncreating or viewing inappropriate content, sexual and racial harassment, waste, and abuse.\nKnow that errors and omissions can cause security problems.\nOne of the most common vul-\nnerabilities and hardest to protect against are errors and omissions. Errors and omissions occur \nbecause humans interact with, program, control, and provide data for IT. There are no direct \ncountermeasures to prevent all errors and omissions. Some safeguards against errors and omis-\nsions include input validators and user training. However, these mechanisms offer only a min-\nimal reduction in overall errors and omissions encountered in an IT environment.\nUnderstand fraud and theft.\nFraud and theft are criminal activities that can be perpetrated \nover computers or made possible by computers. Most of the access controls deployed in a \nsecured environment will reduce fraud and theft, but not every form of these crimes can be pre-\ndicted and protected against. Both internal authorized users and external unauthorized intrud-\ners can exploit your IT infrastructure to perform various forms of fraud and theft. Maintaining \nan intensive auditing and monitoring program and prosecuting all criminal incidents will help \nreduce fraud and theft.\nKnow what collusion is.\nCollusion is an agreement among multiple people to perform an \nunauthorized or illegal action. 
It is hindered by separation of duties, restricted job responsibilities, audits, and job rotation, all of which reduce the likelihood that a coworker will be willing to collaborate on an illegal or abusive scheme because of the higher risk of detection.
Understand employee sabotage.
Employee sabotage can become an issue if an employee is knowledgeable enough about the IT infrastructure of an organization, has sufficient access to manipulate critical aspects of the environment, and has become disgruntled. Safeguards against employee sabotage are intensive auditing, monitoring for abnormal or unauthorized activity, keeping lines of communication open between employees and managers, and properly compensating and recognizing employees for excellence and extra work.
Know how loss of physical and infrastructure support can cause security problems.
The loss of physical and infrastructure support can be caused by power outages, natural disasters, communication interruptions, severe weather, loss of any core utility or service, disruption of transportation, strikes, and national emergencies. It is nearly impossible to predict and protect against events of physical and infrastructure support loss. Disaster recovery and business continuity planning can provide restoration methods if the loss event is severe. In most cases, you must simply wait until the emergency or condition subsides and things return to normal.
Understand espionage.
Espionage is the malicious act, often by an internal employee, of gathering proprietary, secret, private, sensitive, or confidential information about an organization for the express purpose of disclosing and often selling that data to a competitor or other interested organization (such as a foreign government).
Countermeasures against espionage are to strictly control access to all nonpublic data, thoroughly screen new employee candidates, and efficiently track the activities of all employees.
Review Questions
1. What is a methodical examination or review of an environment to ensure compliance with regulations and to detect abnormalities, unauthorized occurrences, or outright crimes?
A. Penetration testing
B. Auditing
C. Risk analysis
D. Entrapment
2. Which of the following is not considered a type of auditing activity?
A. Recording of event data
B. Data reduction
C. Log analysis
D. Deployment of countermeasures
3. Monitoring can be used to perform all but which of the following?
A. Detect availability of new software patches
B. Detect malicious actions by subjects
C. Detect attempted intrusions
D. Detect system failures
4. What provides data for re-creating step-by-step the history of an event, intrusion, or system failure?
A. Security policies
B. Log files
C. Audit reports
D. Business continuity planning
5. What is the frequency of an IT infrastructure security audit or security review based on?
A. Asset value
B. Management discretion
C. Risk
D. Level of realized threats
6. Failure to perform which of the following can result in the perception that due care is not being maintained?
A. Periodic security audits
B. Deployment of all available safeguards
C. Performance reviews
D. Creating audit reports for shareholders
7. Audit trails are considered to be what type of security control?
A. Administrative
B. Passive
C. Corrective
D. Physical
8. Which essential element of an audit report is not considered to be a basic concept of the audit?
A. Purpose of the audit
B. Recommendations of the auditor
C. Scope of the audit
D.
Results of the audit
9. Why should access to audit reports be controlled and restricted?
A. They contain copies of confidential data stored on the network.
B. They contain information about the vulnerabilities of the system.
C. They are useful only to upper management.
D. They include the details about the configuration of security controls.
10. What are used to inform would-be intruders or those who attempt to violate security policy that their intended activities are restricted and that any further activities will be audited and monitored?
A. Security policies
B. Interoffice memos
C. Warning banners
D. Honey pots
11. Which of the following focuses more on the patterns and trends of data rather than the actual content?
A. Keystroke monitoring
B. Traffic analysis
C. Event logging
D. Security auditing
12. Which of the following activities is not considered a valid form of penetration testing?
A. Denial of service attacks
B. Port scanning
C. Distribution of malicious code
D. Packet sniffing
13. The act of searching for unauthorized modems is known as ___________________.
A. Scavenging
B. Espionage
C. System auditing
D. War dialing
14. Which of the following is not a useful countermeasure to war dialing?
A. Restricted and monitored Internet access
B. Imposing strong remote access security
C. Callback security
D. Call logging
15. The standard for study and control of electronic signals produced by various types of electronic hardware is known as ___________________.
A. Eavesdropping
B. TEMPEST
C. SESAME
D. Wiretapping
16. Searching through the refuse, remains, or leftovers from an organization or operation to discover or infer confidential information is known as ___________________.
A. Impersonation
B. Dumpster diving
C. Social engineering
D.
Inference
17. Which of the following is not an effective countermeasure against inappropriate content being hosted or distributed over a secured network?
A. Activity logging
B. Content filtering
C. Intrusion detection system
D. Penalties and termination for violations
18. One of the most common vulnerabilities of an IT infrastructure, and one of the hardest to protect against, is the occurrence of ___________________.
A. Errors and omissions
B. Inference
C. Data destruction by malicious code
D. Data scavenging
19. The willful destruction of assets or elements within the IT infrastructure as a form of revenge or justification for perceived wrongdoing is known as ___________________.
A. Espionage
B. Entrapment
C. Sabotage
D. Permutation
20. What is the most common reaction to the loss of physical and infrastructure support?
A. Deploying OS updates
B. Vulnerability scanning
C. Waiting for the event to expire
D. Tightening of access controls
Answers to Review Questions
1. B. Auditing is a methodical examination or review of an environment to ensure compliance with regulations and to detect abnormalities, unauthorized occurrences, or outright crimes.
2. D. Deployment of countermeasures is not considered a type of auditing activity; rather, it's an active attempt to prevent security problems.
3. A. Monitoring is not used to detect the availability of new software patches.
4. B. Log files provide an audit trail for re-creating step-by-step the history of an event, intrusion, or system failure. An audit trail is used to reconstruct an event, to extract information about an incident, to prove or disprove culpability, and much more.
5. C. The frequency of an IT infrastructure security audit or security review is based on risk.
You must establish the existence of sufficient risk to warrant the expense of and interruption caused by a security audit on a more or less frequent basis.

6. A. Failing to perform periodic security audits can result in the perception that due care is not being maintained. Such audits alert personnel that senior management is practicing due diligence in maintaining system security.

7. B. Audit trails are a passive form of detective security control. Administrative, corrective, and physical security controls are active ways to maintain security.

8. B. Recommendations of the auditor are not considered basic and essential concepts to be included in an audit report. Key elements of an audit report include the purpose, scope, and results of the audit.

9. B. Audit reports should be secured because they contain information about the vulnerabilities of the system. Disclosure of such vulnerabilities to the wrong person could lead to security breaches.

10. C. Warning banners are used to inform would-be intruders or those who attempt to violate the security policy that their intended activities are restricted and that any further activities will be audited and monitored.

11. B. Traffic analysis focuses more on the patterns and trends of data rather than the actual content. Such an analysis offers insight into primary communication routes, sources of encrypted traffic, location of primary servers, primary and backup communication pathways, amount of traffic supported by the network, typical direction of traffic flow, frequency of communications, and much more.

12. C. Distribution of malicious code will almost always result in damage or loss of assets. Thus, it is not an element of penetration testing under any circumstance, even if it's done with the approval of upper management.

13. D. War dialing is the act of searching for unauthorized modems that will accept inbound calls on an otherwise secure network in an attempt to gain access.

14. A. Users often install unauthorized modems because of restricted and monitored Internet access. Because war dialing is often used to locate unauthorized modems, restricting and monitoring Internet access wouldn't be an effective countermeasure.

15. B. TEMPEST is the standard that defines the study and control of electronic signals produced by various types of electronic hardware.

16. B. Dumpster diving is the act of searching through the refuse, remains, or leftovers from an organization or operation to discover or infer confidential information.

17. C. An IDS is not a countermeasure against inappropriate content.

18. A. One of the most common vulnerabilities and hardest to protect against is the occurrence of errors and omissions.

19. C. The willful destruction of assets or elements within the IT infrastructure as a form of revenge or justification for perceived wrongdoing is known as sabotage.

20. C. In most cases, you must simply wait until the emergency or condition expires and things return to normal.

Chapter 15: Business Continuity Planning

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
- Business Continuity Planning
- Project Scope and Planning
- Business Impact Assessment
- Continuity Strategy

Despite our best wishes, disasters of one form or another eventually strike every organization. Whether it's a natural disaster like a hurricane or earthquake or a man-made disaster like a riot or explosion, every organization will encounter events that threaten its very existence.
Strong organizations have plans and procedures in place to help mitigate the effects a disaster has on their continuing operations and to speed the return to normal operations. Recognizing the importance of planning for business continuity and disaster recovery, (ISC)2 designated these two processes as the eighth domain of the Common Body of Knowledge for the CISSP program. Knowledge of these fundamental topics will help you prepare for the exam and help you prepare your organization for the unexpected.

In this chapter, we'll explore the concepts behind Business Continuity Planning. Chapter 16, "Disaster Recovery Planning," will continue our discussion.

Business Continuity Planning

Business Continuity Planning (BCP) involves the assessment of a variety of risks to organizational processes and the creation of policies, plans, and procedures to minimize the impact those risks might have on the organization if they were to occur. BCP is used to restore operations back to normal in the event of a minor disaster. A minor disaster is any event that does not fully interrupt business processes but is not handled automatically by the deployed security mechanisms. Thus, a BCP event is less disastrous than a Disaster Recovery Planning (DRP) event but more disastrous than a simple security violation. BCP focuses on maintaining business operations with reduced or restricted infrastructure capabilities or resources. As long as the continuity of the organization's ability to perform its mission-critical work tasks is maintained, BCP can be used to manage and restore the environment. If that continuity is broken, business processes have stopped and the organization is in disaster mode; at that point, DRP takes over.

The top priority of BCP and DRP is always people.
The primary concern is to get people out of harm's way; then you can address IT recovery and restoration issues.

The overall goal of BCP is to reduce the risk of financial loss and to enhance a company's ability to recover from a disruptive event promptly. The BCP process, as defined by (ISC)2, has four main steps:
- Project Scope and Planning
- Business Impact Assessment
- Continuity Planning
- Approval and Implementation

The next three sections of this chapter cover each of these phases in detail. The last portion of this chapter will introduce some of the critical elements you should take under consideration when compiling documentation of your organization's business continuity plan.

Project Scope and Planning

As with any formalized business process, the development of a strong business continuity plan requires the use of a proven methodology. This requires a structured analysis of the business's organization from a crisis planning point of view, the creation of a BCP team with the approval of senior management, an assessment of the resources available to participate in business continuity activities, and an analysis of the legal and regulatory landscape that governs an organization's response to a catastrophic event.

Business Organization Analysis

One of the first responsibilities of the individuals responsible for business continuity planning is to perform an analysis of the business organization to identify all departments and individuals who have a stake in the Business Continuity Planning process.
Some areas to consider are included in the following list:
- Operational departments that are responsible for the core services the business provides to its clients
- Critical support services, such as the information technology department, plant maintenance department, and other groups responsible for the upkeep of systems that support the operational departments
- Senior executives and other key individuals essential for the ongoing viability of the organization

This identification process is critical for two reasons. First, it provides the groundwork necessary to help identify potential members of the Business Continuity Planning team (see the next section). Second, it provides the foundation for the remainder of the BCP process.

Normally, the business organization analysis is performed by the one or two individuals spearheading the BCP effort. This is acceptable, given the fact that they normally use the output of the analysis to assist with the selection of the remaining BCP team members. However, a thorough review of this analysis should be one of the first tasks assigned to the full BCP team when it is convened. This step is critical because the individuals performing the original analysis may have overlooked critical business functions known to BCP team members that represent other parts of the organization. If the team were to continue without revising the organizational analysis, the entire BCP process may become corrupted and result in the development of a plan that does not fully address the emergency response needs of the organization as a whole.

Each location of an organization should have its own distinct plan.
A single plan should not cover multiple geographic locations.

BCP Team Selection

In many organizations, the IT and/or security departments are given sole responsibility for Business Continuity Planning. Operational and other support departments are given no input in the development of the plan and may not even know of its existence until disaster strikes or is imminent. This is a critical flaw! The independent development of a business continuity plan can spell disaster in two ways. First, the plan itself may not take into account knowledge possessed only by the individuals responsible for the day-to-day operation of the business. Second, it keeps operational elements "in the dark" about plan specifics until implementation becomes necessary. This reduces the possibility that operational elements will agree with the provisions of the plan and work effectively to implement it. It also denies organizations the benefits achieved by a structured training and testing program for the plan.

To prevent these events from adversely impacting the Business Continuity Planning process, the individuals responsible for the effort should take special care when selecting the BCP team. The team should include, as a minimum, the following individuals:
- Representatives from each of the organization's departments responsible for the core services performed by the business
- Representatives from the key support departments identified by the organizational analysis
- IT representatives with technical expertise in areas covered by the BCP
- Security representatives with knowledge of the BCP process
- Legal representatives familiar with corporate legal, regulatory, and contractual responsibilities
- Representatives from senior management

Select your team carefully! You need to strike a balance between representing different points of view and creating a team with explosive personality differences.
Your goal should be to create a group that is as diverse as possible and still operates in harmony.

Each one of the individuals mentioned in the preceding list brings a unique perspective to the BCP process and will have individual biases. For example, the representatives from each of the operational departments will often consider their department the most critical to the organization's continued viability. Although these biases may at first seem divisive, the leader of the BCP effort should embrace them and harness them in a productive manner. If used effectively, the biases will help achieve a healthy balance in the final plan as each representative advocates the needs of their department. On the other hand, if proper leadership isn't provided, these biases may devolve into destructive turf battles that derail the BCP effort and harm the organization as a whole.

Resource Requirements

After the team validates the business organization analysis, they should turn to an assessment of the resources required by the BCP effort. This involves the resources required by three distinct BCP phases:

BCP development: The BCP team will require some resources to perform the four elements of the BCP process (Project Scope and Planning, Business Impact Assessment, Continuity Planning, and Approval and Implementation).
It's more than likely that the major resource consumed by this BCP phase will be manpower expended by members of the BCP team and the support staff they call upon to assist in the development of the plan.

BCP testing, training, and maintenance: The testing, training, and maintenance phases of BCP will require some hardware and software commitments, but once again, the major commitment in this phase will be manpower on the part of the employees involved in those activities.

BCP implementation: When a disaster strikes and the BCP team deems it necessary to conduct a full-scale implementation of the business continuity plan, significant resources will be required. This includes a large amount of manpower (BCP will likely become the focus of a large part, if not all, of the organization) and the utilization of "hard" resources. For this reason, it's important that the team uses its BCP implementation powers judiciously, yet decisively.

An effective business continuity plan requires the expenditure of a large amount of corporate resources, ranging all the way from the purchase and deployment of redundant computing facilities to the pencils and paper used by team members scratching out the first drafts of the plan.

Senior Management and BCP

The role of senior management in the BCP process varies widely from organization to organization and depends upon the internal culture of the business, interest in the plan from above, and the legal and regulatory environment in which the business operates. It's very important that you, as the BCP team leader, seek and obtain as active a role as possible from a senior executive. This conveys the importance of the BCP process to the entire organization and fosters the active participation of individuals who might otherwise write BCP off as a waste of time better spent on operational activities.
Furthermore, laws and regulations might require the active participation of those senior leaders in the planning process. If you work for a publicly traded company, you may wish to remind executives that the officers and directors of the firm might be found personally liable if a disaster cripples the business and they are found not to have exercised due diligence in their contingency planning. You may also have to convince management that BCP and DRP spending should not be viewed as a discretionary expense. Management's fiduciary responsibilities to the organization's shareholders and board of directors require them to at least ensure that adequate BCP measures are in place, even if they don't take an active role in their development.

However, as you saw earlier, one of the most significant resources consumed by the BCP process is personnel. Many security professionals overlook the importance of accounting for labor. You can rest assured, however, that senior management will not. Business leaders are keenly aware of the effect that time-consuming side activities have on the operational productivity of their organizations and the real cost of personnel in terms of salary, benefits, and lost opportunities. These concerns become especially paramount when you are requesting the time of senior executives. You should expect that leaders responsible for resource utilization management will put your BCP proposal under a microscope, and you should be prepared to defend the necessity of your plan with coherent, logical arguments that address the business case for BCP.

Legal and Regulatory Requirements

Many industries may find themselves bound by federal, state, and local laws or regulations that require them to implement various degrees of Business Continuity Planning.
We've already discussed one example in this chapter: the officers and directors of publicly traded firms have a fiduciary responsibility to exercise due diligence in the execution of their business continuity duties. In other circumstances, the requirements (and consequences of failure) might be more severe. Emergency services, such as police, fire, and emergency medical operations, have a responsibility to the community to continue operations in the event of a disaster. Indeed, their services become even more critical in an emergency when the public safety is threatened. Failure on their part to implement a solid BCP could result in the loss of life and/or property and the decreased confidence of the population in their government.

In many countries, financial institutions, such as banks, brokerages, and the firms that process their data, are governed by strict government and international banking and securities regulations designed to facilitate their continued operation to ensure the viability of the national economy. When pharmaceutical manufacturers must produce products in less-than-optimal circumstances following a disaster, they are required to certify the purity of their products to government regulators. There are countless other examples of industries that are required to continue operating in the event of an emergency by various laws and regulations.

Explaining the Benefits of BCP

One of the most common arguments against committing resources to BCP is the planned use of "seat of the pants" continuity planning, or the attitude that the business has always survived and the key leaders will figure something out in the event of a disaster. If you encounter this objection, you might want to point out to management the costs that will be incurred by the business (both direct costs and the indirect cost of lost opportunities) for each day that the business is down.
Then ask them to consider how long a "seat of the pants" recovery might take when compared to an orderly, planned continuity of operations.

Even if you're not bound by any of these considerations, you might have contractual obligations to your clients that require you to implement sound BCP practices. If your contracts include some type of service level agreement (SLA), you might find yourself in breach of those contracts if a disaster interrupts your ability to service your clients. Many clients may feel sorry for you and want to continue using your products/services, but their own business requirements might force them to sever the relationship and find new suppliers.

On the flip side of the coin, developing a strong, documented business continuity plan can help your organization win new clients and additional business from existing clients. If you can show your customers the sound procedures you have in place to continue serving them in the event of a disaster, they'll place greater confidence in your firm and might be more likely to choose you as their preferred vendor. Not a bad position to be in!

All of these concerns point to one conclusion: it's essential to include your organization's legal counsel in the Business Continuity Planning process. They are intimately familiar with the legal, regulatory, and contractual obligations that apply to your organization and can help your team implement a plan that meets those requirements while ensuring the continued viability of the organization to the benefit of all: employees, shareholders, suppliers, and customers alike.

Laws regarding computing systems, business practices, and disaster management change frequently and vary from jurisdiction to jurisdiction. Be sure to keep your attorneys involved throughout the lifetime of your BCP, including the testing and maintenance phases.
If you restrict their involvement to a pre-implementation review of the plan, you may not become aware of the impact that changing laws and regulations have on your corporate responsibilities.

Business Impact Assessment

Once your BCP team completes the four stages of preparing to create a business continuity plan, it's time to dive into the heart of the work: the Business Impact Assessment (BIA). The BIA identifies the resources that are critical to an organization's ongoing viability and the threats posed to those resources. It also assesses the likelihood that each threat will actually occur and the impact those occurrences will have on the business. The results of the BIA provide you with quantitative measures that can help you prioritize the commitment of business continuity resources to the various risks your organization faces.

It's important to realize that there are two different types of analyses that business planners use when facing a decision:

Quantitative decision making: Quantitative decision making involves the use of numbers and formulas to reach a decision. This type of data often expresses options in terms of the dollar value to the business.

Qualitative decision making: Qualitative decision making takes nonnumerical factors, such as emotions, investor/customer confidence, workforce stability, and other concerns, into account. This type of data often results in categories of prioritization (such as high, medium, and low).

Quantitative analysis and qualitative analysis both play an important role in the Business Continuity Planning process. However, most people tend to favor one type of analysis over the other. When selecting the individual members of the BCP team, try to achieve a balance between people who prefer each strategy.
This will result in the development of a well-rounded BCP and benefit the organization in the long run.

The BIA process described in this chapter approaches the problem from both quantitative and qualitative points of view. However, it's very tempting for a BCP team to "go with the numbers" and perform a quantitative assessment while neglecting the somewhat more difficult qualitative assessment. It's important that the BCP team perform a qualitative analysis of the factors affecting your BCP process. For example, if your business is highly dependent upon a few very important clients, your management team is probably willing to suffer significant short-term financial loss in order to retain those clients in the long term. The BCP team must sit down and discuss (preferably with the involvement of senior management) qualitative concerns to develop a comprehensive approach that satisfies all stakeholders.

Identify Priorities

The first BIA task facing the Business Continuity Planning team is the identification of business priorities. Depending upon your line of business, there will be certain activities that are most essential to your day-to-day operations when disaster strikes. The priority identification task, or criticality prioritization, involves creating a comprehensive list of business processes and ranking them in order of importance. Although this task may seem somewhat daunting, it's not as hard as it seems. A great way to divide the workload of this process among the team members is to assign each participant responsibility for drawing up a prioritized list that covers the business functions that their department is responsible for. When the entire BCP team convenes, team members can use those prioritized lists to create a master prioritized list for the entire organization.

This process helps identify business priorities from a qualitative point of view.
Recall that we're describing an attempt to simultaneously develop both qualitative and quantitative BIAs. To begin the quantitative assessment, the BCP team should sit down and draw up a list of organization assets and then assign an asset value (AV) in monetary terms to each asset. These numbers will be used in the remaining BIA steps to develop a financially based BIA. The second quantitative measure that the team must develop is the maximum tolerable downtime (MTD), or recovery time objective (RTO), for each business function. This is the maximum length of time a business function can be inoperable without causing irreparable harm to the business. The MTD provides valuable information when performing both BCP and DRP planning.

Risk Identification

The next phase of the Business Impact Assessment is the identification of risks posed to your organization. Some elements of this organization-specific list may come to mind immediately. The identification of other, more obscure risks might take a little creativity on the part of the BCP team.

Risks come in two forms: natural risks and man-made risks. The following list includes some events that pose natural threats:
- Violent storms/hurricanes/tornadoes/blizzards
- Earthquakes
- Mudslides/avalanches
- Volcanic eruptions

Man-made threats include the following events:
- Terrorist acts/wars/civil unrest
- Theft/vandalism
- Fires/explosions
- Prolonged power outages
- Building collapses
- Transportation failures

Remember, these are by no means all-inclusive lists. They merely identify some common risks that many organizations face.
You may wish to use them as a starting point, but a full listing of risks facing your organization will require input from all members of the BCP team.

The risk identification portion of the process is purely qualitative in nature. At this point in the process, the BCP team should not be concerned about the likelihood that each type of risk will actually materialize or the amount of damage such an occurrence would inflict upon the continued operation of the business. The results of this analysis will drive both the qualitative and quantitative portions of the remaining BIA tasks.

Likelihood Assessment

The preceding step consisted of the BCP team drawing up a comprehensive list of the events that can be a threat to an organization. You probably recognized that some events are much more likely to happen than others. For example, a business in Southern California is much more likely to face the risk of an earthquake than that posed by a volcanic eruption. A business based in Hawaii might have the exact opposite likelihood that each risk would occur.

To account for these differences, the next phase of the Business Impact Assessment identifies the likelihood that each risk will occur. To keep calculations consistent, this assessment is usually expressed in terms of an annualized rate of occurrence (ARO) that reflects the number of times a business expects to experience a given disaster each year.

The BCP team should sit down and determine an ARO for each risk identified in the previous section.
These numbers should be based upon corporate history, professional experience of team members, and advice from experts, such as meteorologists, seismologists, fire prevention professionals, and other consultants, as needed.

Impact Assessment

As you may have surmised based upon its name, the impact assessment is one of the most critical portions of the Business Impact Assessment. In this phase, you analyze the data gathered during risk identification and likelihood assessment and attempt to determine what impact each one of the identified risks would have upon the business if it were to occur.

From a quantitative point of view, there are three specific metrics we will examine: the exposure factor, the single loss expectancy, and the annualized loss expectancy. Each one of these values is computed for each specific risk/asset combination evaluated during the previous phases.

The exposure factor (EF) is the amount of damage that the risk poses to the asset, expressed as a percentage of the asset's value. For example, if the BCP team consults with fire experts and determines that a building fire would cause 70 percent of the building to be destroyed, the exposure factor of the building to fire is 70 percent.

The single loss expectancy (SLE) is the monetary loss that is expected each time the risk materializes. It is computed as the product of the exposure factor (EF) and the asset value (AV). Continuing with the preceding example, if the building is worth $500,000, the single loss expectancy would be 70 percent of $500,000, or $350,000. You can interpret this figure to mean that a single fire in the building would be expected to cause $350,000 worth of damage.

The annualized loss expectancy (ALE) is the monetary loss that the business expects to occur as a result of the risk harming the asset over the course of a year.
It is computed as the product of the single loss expectancy (SLE) and the annualized rate of occurrence (ARO from the previous section). Returning once again to our building example, if fire experts predict that a fire will occur in the building once every 30 years, the ARO is 1/30, or approximately 0.033. The ALE is then one-thirtieth of the $350,000 SLE, or $11,667. You can interpret this figure to mean that the business should expect to lose $11,667 each year due to a fire in the building. Obviously, a fire will not occur each year; this figure represents the average cost over the 30 years between fires. It's not especially useful for budgeting considerations but proves invaluable when attempting to prioritize the assignment of BCP resources to a given risk. These concepts were also covered in Chapter 6, "Asset Value, Policies, and Roles."

Be certain you're familiar with the quantitative formulas contained in this chapter and the concepts of asset value (AV), exposure factor (EF), annualized rate of occurrence (ARO), single loss expectancy (SLE), and annualized loss expectancy (ALE). Know the formulas and be able to work through a scenario. The formula for figuring the single loss expectancy is SLE = AV * EF. The formula for figuring the annualized loss expectancy is ALE = SLE * ARO.

From a qualitative point of view, you must consider the nonmonetary impact that interruptions might have on your business. For example, you might want to consider the following:
- Loss of goodwill among your client base
- Loss of employees after prolonged downtime
- Social/ethical responsibilities to the community
- Negative publicity

It's difficult to put dollar values on items like these in order to include them in the quantitative portion of the impact assessment, but they are equally important.
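The building-fire arithmetic can be checked with a short Python sketch. The function and variable names here are our own illustration, not anything defined by the exam; the formulas are simply the SLE = AV * EF and ALE = SLE * ARO identities.

```python
# A sketch of the building-fire example using the two exam formulas
# SLE = AV * EF and ALE = SLE * ARO. Names are illustrative only.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    """SLE: expected monetary loss from a single occurrence of the risk."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle: float, aro: float) -> float:
    """ALE: expected monetary loss per year."""
    return sle * aro

av = 500_000   # asset value: the building is worth $500,000
ef = 0.70      # exposure factor: a fire destroys 70 percent of the building
aro = 1 / 30   # annualized rate of occurrence: one fire every 30 years

sle = single_loss_expectancy(av, ef)
ale = annualized_loss_expectancy(sle, aro)

print(f"SLE = ${sle:,.0f}")  # SLE = $350,000
print(f"ALE = ${ale:,.0f}")  # ALE = $11,667
```

The qualitative factors listed above, by contrast, resist this kind of arithmetic.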
After all, if you decimate your client base, you won't have a business to return to when you're ready to resume operations!

Resource Prioritization

The final step of the BIA is to prioritize the allocation of business continuity resources to the various risks that you identified and assessed in the preceding tasks of the BIA.

From a quantitative point of view, this process is relatively straightforward. You simply create a list of all of the risks you analyzed during the BIA process and sort them in descending order by the ALE computed during the impact assessment phase. This provides you with a prioritized list of the risks that you should address. Simply select as many items as you're willing and able to address simultaneously from the top of the list and work your way down, adding another item to your plate as you are satisfied that you are prepared to address an existing item. Eventually, you'll reach a point at which you've exhausted either the list of risks (unlikely!) or all of your available resources (much more likely!).

Recall from the previous section that we also stressed the importance of addressing qualitatively important concerns as well. In previous sections about the BIA, we treated quantitative and qualitative analysis as mainly separate functions with some overlap in the analysis. Now it's time to merge the two prioritized lists, which is more of an art than a science. You must sit down with the BCP team and (hopefully) representatives from the senior management team and combine the two lists into a single prioritized list. Qualitative concerns may justify elevating or lowering the priority of risks that already exist on the ALE-sorted quantitative list.
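The descending-ALE sort is mechanical enough to sketch in a few lines of Python; the risk names and ALE figures below are invented purely for illustration.

```python
# Hypothetical BIA output: each entry pairs a risk with its computed ALE.
risks = [
    ("Building fire", 11_667),
    ("Hurricane", 95_000),
    ("Theft/vandalism", 4_200),
    ("Prolonged power outage", 27_500),
]

# Sort in descending order of ALE to produce the quantitative priority list.
prioritized = sorted(risks, key=lambda risk: risk[1], reverse=True)

for name, ale in prioritized:
    print(f"{name}: ${ale:,}/year")
# Hurricane: $95,000/year
# Prolonged power outage: $27,500/year
# Building fire: $11,667/year
# Theft/vandalism: $4,200/year
```

The qualitative merge that follows is the part with no formula.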
For example, if \nyou run a fire suppression company, your number one priority might be the prevention of a fire \nin your principal place of business, despite the fact that an earthquake might cause more phys-\nical damage. The potential loss of face within the business community resulting from the \ndestruction of a fire suppression company by fire might be too difficult to overcome and result \nin the eventual collapse of the business, justifying the increased priority.\nContinuity Strategy\nThe first two phases of the BCP process (Project Scope and Planning and the Business Impact \nAssessment) are focused on determining how the BCP process will work and the prioritization \nof the business assets that must be protected against interruption. The next phase of BCP devel-\nopment, Continuity Planning, focuses on the development and implementation of a continuity \nstrategy to minimize the impact realized risks might have on protected assets.\nStrategy Development\nThe strategy development phase of continuity planning bridges the gap between the Business \nImpact Assessment and the Continuity Planning phases of BCP development. The BCP team \n" }, { "page_number": 565, "text": "520\nChapter 15\n\u0002 Business Continuity Planning\nmust now take the prioritized list of concerns raised by the quantitative and qualitative resource \nprioritization exercises and determine which risks will be addressed by the business continuity \nplan. Fully addressing all of the contingencies would require the implementation of provisions \nand processes that maintain a zero-downtime posture in the face of each and every possible risk. \nFor obvious reasons, implementing a policy this comprehensive is simply impossible.\nThe BCP team should look back to the maximum tolerable downtime (MTD) estimates cre-\nated during the early stages of the BIA and determine which risks are deemed acceptable and \nwhich must be mitigated by BCP continuity provisions. 
Some of these decisions are obvious—\nthe risk of a blizzard striking an operations facility in Egypt is negligible and would be deemed \nan acceptable risk. The risk of a monsoon in New Delhi is serious enough that it must be mit-\nigated by BCP provisions.\nKeep in mind that there are four possible responses to a risk: reduce, assign, accept, \nand reject. Each may be an acceptable response based upon the circumstances.\nOnce the BCP team determines which risks require mitigation and the level of resources that \nwill be committed to each mitigation task, they are ready to move on to the provisions and pro-\ncesses phase of continuity planning.\nProvisions and Processes\nThe provisions and processes phase of continuity planning is the meat of the entire business con-\ntinuity plan. In this task, the BCP team designs the specific procedures and mechanisms that will \nmitigate the risks deemed unacceptable during the strategy development stage. There are three \ncategories of assets that must be protected through BCP provisions and processes: people, build-\nings/facilities, and infrastructure. In the next three sections, we’ll explore some of the techniques \nyou can use to safeguard each of these categories.\nPeople\nFirst and foremost, you must ensure that the people within your organization are safe before, \nduring, and after an emergency. Once you’ve achieved that goal, you must make provisions to \nallow your employees to conduct both their BCP and operational tasks in as normal a manner \nas possible given the circumstances.\nDon’t lose sight of the fact that people are truly your most valuable asset. In \nalmost every line of business, the safety of people must always come before \nthe organization’s business goals. 
Make sure that your business continuity \nplan makes adequate provisions for the security of your employees, custom-\ners, suppliers, and any other individuals who may be affected!\nPeople should be provided with all of the resources they need to complete their assigned \ntasks. At the same time, if circumstances dictate that people be present in the workplace for \n" }, { "page_number": 566, "text": "Continuity Strategy\n521\nextended periods of time, arrangements must be made for shelter and food. Any continuity plan \nthat requires these provisions should include detailed instructions for the BCP team in the event \nof a disaster. Stockpiles of provisions sufficient to feed the operational and support teams for \nan extended period of time should be maintained in an accessible location and rotated period-\nically to prevent spoilage.\nBuildings/Facilities\nMany businesses require specialized facilities in order to carry out their critical operations. \nThese might include standard office facilities, manufacturing plants, operations centers, ware-\nhouses, distribution/logistics centers, and repair/maintenance depots, among others. When you \nperform your BIA, you will identify those facilities that play a critical role in your organization’s \ncontinued viability. Your continuity plan should address two areas for each critical facility:\nHardening provisions\nYour BCP should outline mechanisms and procedures that can be put \ninto place to protect your existing facilities against the risks defined in the strategy development \nphase. 
This might include steps as simple as patching a leaky roof or as complex as installing \nreinforced hurricane shutters and fireproof walls.\nAlternate sites\nIn the event that it’s not possible to harden a facility against a risk, your BCP \nshould identify alternate sites where business activities can resume immediately (or at least in a \nperiod of time that’s shorter than the maximum tolerable downtime for all affected critical busi-\nness functions). The next chapter, “Disaster Recovery Planning,” describes a few of the facility \ntypes that might be useful in this stage.\nInfrastructure\nEvery business depends upon some sort of infrastructure for its critical processes. For many \nbusinesses, a critical part of this infrastructure is an IT backbone of communications and com-\nputer systems that process orders, manage the supply chain, handle customer interaction, and \nperform other business functions. This backbone comprises a number of servers, workstations, \nand critical communications links between sites. The BCP must address how these systems will \nbe protected against risks identified during the strategy development phase. 
As with buildings \nand facilities, there are two main methods of providing this protection:\nHardening systems\nYou can protect systems against the risks by introducing protective mea-\nsures such as computer-safe fire suppression systems and uninterruptible power supplies.\nAlternative systems\nYou can also protect business functions by introducing redundancy \n(either redundant components or completely redundant systems/communications links that rely \non different facilities).\nThese same principles apply to whatever infrastructure components serve your critical busi-\nness processes—transportation systems, electrical power grids, banking and financial systems, \nwater supplies, and so on.\n" }, { "page_number": 567, "text": "522\nChapter 15\n\u0002 Business Continuity Planning\nPlan Approval\nOnce the BCP team completes the design phase of the BCP document, it’s time to gain top-level \nmanagement endorsement of the plan. If you were fortunate enough to have senior management \ninvolvement throughout the development phases of the plan, this should be a relatively straight-\nforward process. On the other hand, if this is your first time approaching management with the \nBCP document, you should be prepared to provide a lengthy explanation of the plan’s purpose \nand specific provisions.\nSenior management approval and buy-in is essential to the success of the over-\nall BCP effort.\nIf possible, you should attempt to have the plan endorsed by the top executive in your busi-\nness—the chief executive officer, chairman, president, or similar business leader. This move \ndemonstrates the importance of the plan to the entire organization and showcases the business \nleader’s commitment to business continuity. 
The signature of such an individual on the plan also \ngives it much greater weight and credibility in the eyes of other senior managers, who might oth-\nerwise brush it off as a necessary but trivial IT initiative.\nPlan Implementation\nOnce you’ve received approval from senior management, it’s time to dive in and start imple-\nmenting your plan. The BCP team should get together and develop an implementation schedule \nthat utilizes the resources dedicated to the program to achieve the stated process and provision \ngoals in as prompt a manner as possible given the scope of the modifications and the organiza-\ntional climate.\nAfter all of the resources are fully deployed, the BCP team should supervise the conduct of \nan appropriate BCP maintenance program to ensure that the plan remains responsive to evolv-\ning business needs.\nTraining and Education\nTraining and education are essential elements of the BCP implementation. All personnel who \nwill be involved in the plan (either directly or indirectly) should receive some sort of training on \nthe overall plan and their individual responsibilities. Everyone in the organization should \nreceive at least a plan overview briefing to provide them with the confidence that business lead-\ners have considered the possible risks posed to continued operation of the business and have put \na plan in place to mitigate the impact on the organization should business be disrupted. People \nwith direct BCP responsibilities should be trained and evaluated on their specific BCP tasks to \nensure that they are able to complete them efficiently when disaster strikes. Furthermore, at \nleast one backup person should be trained for every BCP task to ensure redundancy in the event \npersonnel are injured or cannot reach the workplace during an emergency.\n" }, { "page_number": 568, "text": "BCP Documentation\n523\nTraining and education are important parts of any security-related plan and the \nBCP process is no exception. 
Ensure that personnel within your organization \nare fully aware of their BCP responsibilities before disaster strikes!\nBCP Documentation\nDocumentation is a critical step in the Business Continuity Planning process. Committing your \nBCP methodology to paper provides several important benefits:\n\u0002\nIt ensures that BCP personnel have a written continuity document to reference in the event \nof an emergency, even if senior BCP team members are not present to guide the effort.\n\u0002\nIt provides an historical record of the BCP process that will be useful to future personnel \nseeking to both understand the reasoning behind various procedures and implement nec-\nessary changes in the plan.\n\u0002\nIt forces the team members to commit their thoughts to paper—a process that often facil-\nitates the identification of flaws in the plan. Having the plan on paper also allows draft doc-\numents to be distributed to individuals not on the BCP team for a “sanity check.”\nIn the following sections, we’ll explore some of the important components of the written \nbusiness continuity plan.\nContinuity Planning Goals\nFirst and foremost, the plan should describe the goals of continuity planning as set forth by the \nBCP team and senior management. These goals should be decided upon at or before the first BCP \nteam meeting and will most likely remain unchanged throughout the life of the BCP.\nThe most common goal of the BCP is quite simple: to ensure the continuous operation of the \nbusiness in the face of an emergency situation. Other goals may also be inserted in this section \nof the document to meet organizational needs.\nStatement of Importance\nThe statement of importance reflects the criticality of the BCP to the organization’s continued \nviability. 
This document commonly takes the form of a letter to the organization’s employees \nstating the reason that the organization devoted significant resources to the BCP development \nprocess and requesting the cooperation of all personnel in the BCP implementation phase. \nHere’s where the importance of senior executive buy-in comes into play. If you can put out this \nletter under the signature of the CEO or an officer at a similar level, the plan itself will carry tre-\nmendous weight as you attempt to implement changes throughout the organization. If you have \n" }, { "page_number": 569, "text": "524\nChapter 15\n\u0002 Business Continuity Planning\nthe signature of a lower-level manager, you may encounter resistance as you attempt to work \nwith portions of the organization outside of that individual’s direct control.\nStatement of Priorities\nThe statement of priorities flows directly from the identify priorities phase of the Business \nImpact Assessment. It simply involves listing the functions considered critical to continued \nbusiness operations in a prioritized order. When listing these priorities, you should also \ninclude a statement that they were developed as part of the BCP process and reflect the impor-\ntance of the functions to continued business operations in the event of an emergency and \nnothing more. Otherwise, the list of priorities could be used for unintended purposes and \nresult in a political turf battle between competing organizations to the detriment of the busi-\nness continuity plan.\nStatement of Organizational Responsibility\nThe statement of organizational responsibility also comes from a senior-level executive and can \nbe incorporated into the same letter as the statement of importance. 
It basically echoes the sen-\ntiment that “Business Continuity Is Everyone’s Responsibility!” The statement of organiza-\ntional responsibility restates the organization’s commitment to Business Continuity Planning \nand informs the organization’s employees, vendors, and affiliates that they are individually \nexpected to do everything they can to assist with the BCP process.\nStatement of Urgency and Timing\nThe statement of urgency and timing expresses the criticality of implementing the BCP and out-\nlines the implementation timetable decided upon by the BCP team and agreed to by upper man-\nagement. The wording of this statement will depend upon the actual urgency assigned to the \nBCP process by the organization’s leadership. If the statement itself is included in the same letter \nas the statement of priorities and statement of organizational responsibility, the timetable \nshould be included as a separate document. Otherwise, the timetable and this statement can \nbe put into the same document.\nRisk Assessment\nThe risk assessment portion of the BCP documentation essentially recaps the decision-making \nprocess undertaken during the Business Impact Assessment. It should include a discussion of all \nof the risks considered during the BIA as well as the quantitative and qualitative analyses per-\nformed to assess these risks. For the quantitative analysis, the actual AV, EF, ARO, SLE, and \nALE figures should be included. For the qualitative analysis, the thought process behind the risk \nanalysis should be provided to the reader.\n" }, { "page_number": 570, "text": "BCP Documentation\n525\nRisk Acceptance/Mitigation\nThe risk acceptance/mitigation section of the BCP documentation contains the outcome of the \nstrategy development portion of the BCP process. 
It should cover each risk identified in the risk \nanalysis portion of the document and outline one of two thought processes:\n\u0002\nFor risks that were deemed acceptable, it should outline the reasons the risk was considered \nacceptable as well as potential future events that might warrant reconsideration of this \ndetermination.\n\u0002\nFor risks that were deemed unacceptable, it should outline the risk mitigation provisions \nand processes put into place to reduce the risk to the organization’s continued viability.\nVital Records Program\nThe BCP documentation should also outline a vital records program for the organization. This \ndocument states where critical business records will be stored and the procedures for making \nand storing backup copies of those records. This is also a critical portion of the disaster recovery \nplan and is discussed in Chapter 16’s coverage of that topic.\nEmergency Response Guidelines\nThe emergency response guidelines outline the organizational and individual responsibilities for \nimmediate response to an emergency situation. This document provides the first employees to \ndetect an emergency with the steps that should be taken to activate provisions of the BCP that \ndo not automatically activate. These guidelines should include the following:\n\u0002\nImmediate response procedures (security procedures, fire suppression procedures, notifica-\ntion of appropriate emergency response agencies, etc.)\n\u0002\nWhom to notify (executives, BCP team members, etc.)\n\u0002\nSecondary response procedures to take while waiting for the BCP team to assemble\nMaintenance\nThe BCP documentation and the plan itself must be living documents. Every organization \nencounters nearly constant change, and this dynamic nature ensures that the business’s conti-\nnuity requirements will also evolve. 
The BCP team should not be disbanded after the plan is \ndeveloped but should still meet periodically to discuss the plan and review the results of plan \ntests to ensure that it continues to meet organizational needs. Obviously, minor changes to the \nplan do not require conducting the full BCP development process from scratch; they can simply \nbe made at an informal meeting of the BCP team by unanimous consent. However, keep in mind \nthat drastic changes in an organization’s mission or resources may require going back to the \nBCP drawing board and beginning again. All older versions of the BCP should be physically \ndestroyed and replaced by the most current version so that there is never any confusion as to the \ncorrect implementation of the BCP. It is also a good practice to incorporate BCP components into \njob descriptions to ensure that the BCP remains fresh and correctly performed.\n" }, { "page_number": 571, "text": "526\nChapter 15\n\u0002 Business Continuity Planning\nTesting\nThe BCP documentation should also outline a formalized testing program to ensure that the \nplan remains current and that all personnel are adequately trained to perform their duties in the \nevent of an actual disaster. The testing process is actually quite similar to that used for the disas-\nter recovery plan, so discussion of the specific test types will be reserved for Chapter 16.\nSummary\nEvery organization dependent upon technological resources for its survival should have a com-\nprehensive business continuity plan in place to ensure the sustained viability of the organization \nwhen unforeseen emergencies take place. There are a number of important concepts that \nunderlie solid Business Continuity Planning (BCP) practices, including Project Scope and Plan-\nning, Business Impact Assessment, Continuity Planning, and Approval and Implementation. 
\nEvery organization must have plans and procedures in place to help mitigate the effects a disas-\nter has on continuing operations and to speed the return to normal operations. To determine the \nrisks that your business faces and that require mitigation, you must conduct a Business Impact \nAssessment from both quantitative and qualitative points of view. You must take the appropri-\nate steps in developing a continuity strategy for your organization and know what to do to \nweather future disasters.\nFinally, you must create the documentation required to ensure that your plan is effectively \ncommunicated to present and future BCP team participants. Such documentation must include \ncontinuity planning guidelines. The business continuity plan must also contain statements of \nimportance, priorities, organizational responsibility, and urgency and timing. In addition, the \ndocumentation should include plans for risk assessment, acceptance, and mitigation, a vital \nrecords program, emergency response guidelines, and plans for maintenance and testing.\nThe next chapter will take this planning to the next step—developing and implementing a \ndisaster recovery plan. The disaster recovery plan kicks in where the business continuity plan \nleaves off. When an emergency occurs that interrupts your business in spite of the BCP mea-\nsures, the disaster recovery plan guides the recovery efforts necessary to restore your business \nto normal operations as quickly as possible.\nExam Essentials\nUnderstand the four steps of the Business Continuity Planning process.\nBusiness Continuity \nPlanning (BCP) involves four distinct phases: Project Scope and Planning, Business Impact \nAssessment, Continuity Planning, and Approval and Implementation. 
Each task contributes to \nthe overall goal of ensuring that business operations continue uninterrupted in the face of an \nemergency situation.\n" }, { "page_number": 572, "text": "Exam Essentials\n527\nDescribe how to perform the business organization analysis.\nIn the business organization \nanalysis, the individuals responsible for leading the BCP process determine which departments \nand individuals have a stake in the business continuity plan. This analysis is used as the foun-\ndation for BCP team selection and, after validation by the BCP team, is used to guide the next \nstages of BCP development.\nList the necessary members of the Business Continuity Planning team.\nThe BCP team should \ncontain, as a minimum, representatives from each of the operational and support departments; \ntechnical experts from the IT department; security personnel with BCP skills; legal representa-\ntives familiar with corporate legal, regulatory, and contractual responsibilities; and representatives \nfrom senior management. Additional team members depend upon the structure and nature of \nthe organization.\nKnow the legal and regulatory requirements that face business continuity planners.\nBusi-\nness leaders must exercise due diligence to ensure that shareholders’ interests are protected in \nthe event disaster strikes. Some industries are also subject to federal, state, and local regulations \nthat mandate specific BCP procedures. Many businesses also have contractual obligations to \ntheir clients that must be met, before and after a disaster.\nExplain the steps of the Business Impact Assessment process.\nThe five steps of the Business \nImpact Assessment process are identification of priorities, risk identification, likelihood assess-\nment, impact assessment, and resource prioritization.\nDescribe the process used to develop a continuity strategy.\nDuring the strategy development \nphase, the BCP team determines which risks will be mitigated. 
In the provisions and processes phase, \nmechanisms and procedures that will actually mitigate the risks are designed. The plan must \nthen be approved by senior management and implemented. Personnel must also receive training \non their roles in the BCP process.\nExplain the importance of fully documenting an organization’s business continuity plan.\nCommitting the plan to writing provides the organization with a written record of the proce-\ndures to follow when disaster strikes. It prevents the “it’s in my head” syndrome and ensures \nthe orderly progress of events in an emergency.\n" }, { "page_number": 573, "text": "528\nChapter 15\n\u0002 Business Continuity Planning\nReview Questions\n1.\nWhat is the first step that individuals responsible for the development of a business continuity \nplan should perform?\nA. BCP team selection\nB. Business organization analysis\nC. Resource requirements analysis\nD. Legal and regulatory assessment\n2.\nOnce the BCP team is selected, what should be the first item placed on the team’s agenda?\nA. Business Impact Assessment\nB. Business organization analysis\nC. Resource requirements analysis\nD. Legal and regulatory assessment\n3.\nWhat is the term used to describe the responsibility of a firm’s officers and directors to ensure \nthat adequate measures are in place to minimize the effect of a disaster on the organization’s con-\ntinued viability?\nA. Corporate responsibility\nB. Disaster requirement\nC. Due diligence\nD. Going concern responsibility\n4.\nWhat will be the major resource consumed by the BCP process during the BCP phase?\nA. Hardware\nB. Software\nC. Processing time\nD. Personnel\n5.\nWhat unit of measurement should be used to assign quantitative values to assets in the priority \nidentification phase of the Business Impact Assessment?\nA. Monetary\nB. Utility\nC. Importance\nD. 
Time\n" }, { "page_number": 574, "text": "Review Questions\n529\n6.\nWhich one of the following BIA terms identifies the amount of money a business expects to lose \nto a given risk each year?\nA. ARO\nB. SLE\nC. ALE\nD. EF\n7.\nWhat BIA metric can be used to express the longest time a business function can be unavailable \nwithout causing irreparable harm to the organization?\nA. SLE\nB. EF\nC. MTD\nD. ARO\n8.\nYou are concerned about the risk that an avalanche poses to your $3 million shipping facility. \nBased upon expert opinion, you determine that there is a 5 percent chance that an avalanche will \noccur each year. Experts advise you that an avalanche would completely destroy your building \nand require you to rebuild on the same land. Ninety percent of the $3 million value of the facility \nis attributed to the building and 10 percent is attributed to the land itself. What is the single loss \nexpectancy of your shipping facility to avalanches?\nA. $3,000,000\nB. $2,700,000\nC. $270,000\nD. $135,000\n9.\nReferring to the scenario in question 8, what is the annualized loss expectancy?\nA. $3,000,000\nB. $2,700,000\nC. $270,000\nD. $135,000\n10. Your manager is concerned that the Business Impact Assessment recently completed by the BCP \nteam doesn’t adequately take into account the loss of goodwill among customers that might \nresult from a particular type of disaster. Where should items like this be addressed?\nA. Continuity strategy\nB. Quantitative analysis\nC. Likelihood assessment\nD. Qualitative analysis\n" }, { "page_number": 575, "text": "530\nChapter 15\n\u0002 Business Continuity Planning\n11. Which task of BCP bridges the gap between the Business Impact Assessment and the Continuity \nPlanning phases?\nA. Resource prioritization\nB. Likelihood assessment\nC. Strategy development\nD. Provisions and processes\n12. Which resource should you protect first when designing continuity plan provisions and processes?\nA. Physical plant\nB. Infrastructure\nC. 
Financial\nD. People\n13. Which one of the following concerns is not suitable for quantitative measurement during the \nBusiness Impact Assessment?\nA. Loss of a plant\nB. Damage to a vehicle\nC. Negative publicity\nD. Power outage\n14. Lighter Than Air Industries expects that it would lose $10 million if a tornado struck its aircraft \noperations facility. It expects that a tornado might strike the facility once every 100 years. What \nis the single loss expectancy for this scenario?\nA. 0.01\nB. $10,000,000\nC. $100,000\nD. 0.10\n15. Referring to the scenario in question 14, what is the annualized loss expectancy?\nA. 0.01\nB. $10,000,000\nC. $100,000\nD. 0.10\n16. In which Business Continuity Planning task would you actually design procedures and mecha-\nnisms to mitigate risks deemed unacceptable by the BCP team?\nA. Strategy development\nB. Business Impact Assessment\nC. Provisions and processes\nD. Resource prioritization\n" }, { "page_number": 576, "text": "Review Questions\n531\n17.\nWhat type of mitigation provision is utilized when redundant communications links are \ninstalled?\nA. Hardening systems\nB. Defining systems\nC. Reducing systems\nD. Alternative systems\n18. What type of plan outlines the procedures to follow when a disaster interrupts the normal oper-\nations of a business?\nA. Business continuity plan\nB. Business Impact Assessment\nC. Disaster recovery plan\nD. Vulnerability assessment\n19. What is the formula used to compute the single loss expectancy for a risk scenario?\nA. SLE=AV*EF\nB. SLE=ARO*EF\nC. SLE=AV*ARO\nD. SLE=EF*ARO\n20. When computing an annualized loss expectancy, what is the scope of the output number?\nA. All occurrences of a risk across an organization during the life of the organization\nB. All occurrences of a risk across an organization during the next year\nC. All occurrences of a risk affecting a single organizational asset during the life of the asset\nD. 
All occurrences of a risk affecting a single organizational asset during the next year\n" }, { "page_number": 577, "text": "532\nChapter 15\n\u0002 Business Continuity Planning\nAnswers to Review Questions\n1.\nB. The business organization analysis helps the initial planners select appropriate BCP team \nmembers and then guides the overall BCP process.\n2.\nB. The first task of the BCP team should be the review and validation of the business organiza-\ntion analysis initially performed by those individuals responsible for spearheading the BCP \neffort. This ensures that the initial effort, undertaken by a small group of individuals, reflects the \nbeliefs of the entire BCP team.\n3.\nC. A firm’s officers and directors are legally bound to exercise due diligence in conducting their \nactivities. This concept creates a fiduciary responsibility on their part to ensure that adequate \nbusiness continuity plans are in place.\n4.\nD. During the planning phase, the most significant resource utilization will be the time dedicated \nby members of the BCP team to the planning process itself. This represents a significant use of \nbusiness resources and is another reason that buy-in from senior management is essential.\n5.\nA. The quantitative portion of the priority identification should assign asset values in monetary units.\n6.\nC. The annualized loss expectancy (ALE) represents the amount of money a business expects to \nlose to a given risk each year. This figure is quite useful when performing a quantitative prior-\nitization of business continuity resource allocation.\n7.\nC. The maximum tolerable downtime (MTD) represents the longest period a business function \ncan be unavailable before causing irreparable harm to the business. This figure is very useful when \ndetermining the level of business continuity resources to assign to a particular function.\n8.\nB. The SLE is the product of the AV and the EF. 
From the scenario, you know that the AV is \n$3,000,000 and the EF is 90 percent, based upon the fact that the same land can be used to \nrebuild the facility. This yields an SLE of $2,700,000.\n9.\nD. This problem requires you to compute the ALE, which is the product of the SLE and the \nARO. From the scenario, you know that the ARO is 0.05 (or 5 percent). From question 8, you \nknow that the SLE is $2,700,000. This yields an ALE of $135,000.\n10. D. The qualitative analysis portion of the BIA allows you to introduce intangible concerns, such \nas loss of customer goodwill, into the BIA planning process.\n11. C. The strategy development task bridges the gap between Business Impact Assessment and \nContinuity Planning by analyzing the prioritized list of risks developed during the BIA and deter-\nmining which risks will be addressed by the BCP.\n12. D. The safety of human life must always be the paramount concern in Business Continuity Plan-\nning. Be sure that your plan reflects this priority, especially in the written documentation that is \ndisseminated to your organization’s employees!\n13. C. It is very difficult to put a dollar figure on the business lost due to negative publicity. There-\nfore, this type of concern is better evaluated through a qualitative analysis.\n" }, { "page_number": 578, "text": "Answers to Review Questions\n533\n14. B. The single loss expectancy (SLE) is the amount of damage that would be caused by a single \noccurrence of the risk. In this case, the SLE is $10 million, the expected damage from one tor-\nnado. The fact that a tornado occurs only once every 100 years is not reflected in the SLE but \nwould be reflected in the annualized loss expectancy (ALE).\n15. C. The annualized loss expectancy (ALE) is computed by taking the product of the single loss \nexpectancy (SLE), which was $10 million in this scenario, and the annualized rate of occurrence \n(ARO), which was 0.01 in this example. These figures yield an ALE of $100,000.\n16. 
C. In the provisions and processes phase, the BCP team actually designs the procedures and mechanisms to mitigate risks that were deemed unacceptable during the strategy development phase.

17. D. Redundant communications links are a type of alternative system put in place to provide backup circuits in the event a primary communications link fails.

18. C. Disaster recovery plans pick up where business continuity plans leave off. After a disaster strikes and the business is interrupted, the disaster recovery plan guides response teams in their efforts to quickly restore business operations to normal levels.

19. A. The single loss expectancy (SLE) is computed as the product of the asset value (AV) and the exposure factor (EF). The other formulas displayed here do not accurately reflect this calculation.

20. D. The annualized loss expectancy, as its name implies, covers the expected loss due to a risk during a single year. ALE numbers are computed individually for each asset within an organization.

Chapter 16
Disaster Recovery Planning

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
- Recovery Strategy
- Recovery Plan Development
- Implementation
- Work Group Recovery
- Training/Testing/Maintenance
- BCP/DRP Events

In the previous chapter, you learned the essential elements of Business Continuity Planning (BCP)—the art of helping your organization avoid being interrupted by the devastating effects of an emergency.
Recall that one of the main BCP principles was risk management—you must assess the likelihood that a vulnerability will be exploited and use that likelihood to determine the appropriate allocation of resources to combat the threat.

Because of this risk management principle, business continuity plans are not intended to prevent every possible disaster from affecting an organization—this would be an impossible goal. On the contrary, they are designed to limit the effects of commonly occurring disasters. Naturally, this leaves an organization vulnerable to interruption from a number of threats—those that were judged to be not worthy of mitigation or those that were unforeseen.

Disaster Recovery Planning (DRP) steps in where BCP leaves off. When a disaster strikes and the business continuity plan fails to prevent interruption of the business, the disaster recovery plan kicks into effect and guides the actions of emergency response personnel until the end goal is reached—the business is restored to full operating capacity in its primary operations facilities.

While reading this chapter, you may notice many areas of overlap between the BCP and DRP processes. Indeed, our discussion of specific disasters provides information on how to handle them from both BCP and DRP points of view. This serves to illustrate the close linkage between the two processes. In fact, although the (ISC)2 CISSP curriculum draws a distinction between the two, most organizations simply have a single team/plan that addresses both business continuity and disaster recovery concerns in an effort to consolidate responsibilities.

Disaster Recovery Planning

Disaster recovery planning brings order to the chaotic events surrounding the interruption of an organization’s normal activities. By its very nature, the disaster recovery plan is implemented only when tension is high and cooler heads might not naturally prevail.
Picture the circumstances in which you might find it necessary to implement DRP measures—a hurricane just destroyed your main operations facility, a fire devastated your main processing center, terrorist activity closed off access to a major metropolitan area. Any event that stops, prevents, or interrupts your organization’s ability to perform its work tasks is considered a disaster. The moment you are unable to support your mission-critical processes is the moment DRP is needed to manage the restoration and recovery procedures.

The disaster recovery plan should be set up in a manner such that it can almost run on autopilot. The DRP should be designed to eliminate decision-making activities during a disaster as much as possible. Essential personnel should be well trained in their duties and responsibilities in the wake of a disaster and also know the steps they need to take to get the organization up and running as soon as possible. We’ll begin by analyzing some of the possible disasters that might strike your organization and the particular threats that they pose. Many of these were mentioned in the previous chapter, but we will now explore them in further detail.

Natural Disasters

Natural disasters represent the fury of our habitat—violent occurrences that take place due to changes in the earth’s surface or atmosphere that are beyond the control of mankind. In some cases, such as hurricanes, scientists have developed sophisticated prediction techniques that provide ample warning before a disaster strikes. Others, such as earthquakes, can bring unpredictable destruction at a moment’s notice.
Your disaster recovery plan should provide mechanisms for responding to both types of disasters, either with a gradual buildup of response forces or as an immediate reaction to a rapidly emerging crisis.

Earthquakes

Earthquakes are caused by the shifting of seismic plates and can occur almost anywhere in the world without warning. However, they are much more likely to occur along the known fault lines that exist in many areas of the world. A well-known example is the San Andreas fault, which poses a significant risk to portions of the western United States. If you live in a region along a fault line where earthquakes are likely, your DRP should address the procedures your business will implement if a seismic event interrupts your normal activities.

You might be surprised by some of the regions of the world where earthquakes are considered possible. Table 16.1 shows the parts of the United States that the Federal Emergency Management Agency (FEMA) considers moderate, high, or very high seismic hazards. Note that the states in the table comprise 80 percent of the 50 states, meaning that the majority of the country has at least a moderate risk of seismic activity.

Floods

Flooding can occur almost anywhere in the world at any time of the year. Some flooding results from the gradual accumulation of rainwater in rivers, lakes, and other bodies of water that then overflow their banks and flood the community. Other floods, known as flash floods, strike when a sudden severe storm dumps more rainwater on an area than the ground can absorb in a short period of time. Floods can also occur when dams are breached. Large waves caused by seismic activity, or tsunamis, combine the awesome power and weight of water with flooding, as we saw during the December 2004 tsunami disaster.
The tsunamis obviously demonstrated the enormous destructive capabilities of water and the impact it can have on various businesses and economies.

According to government statistics, flooding is responsible for over $1 billion (that’s billion with a b!) of damage to businesses and homes each year in the United States. It’s important that your DRP make appropriate response plans for the eventuality that a flood may strike your facilities.

Table 16.1: Seismic Hazard Level by State

Moderate Seismic Hazard: Alabama, Colorado, Connecticut, Delaware, Georgia, Maine, Maryland, Massachusetts, Mississippi, New Hampshire, New Jersey, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, Rhode Island, Texas, Vermont, Virginia, West Virginia

High Seismic Hazard: American Samoa, Arizona, Arkansas, Illinois, Indiana, Kentucky, Missouri, New Mexico, South Carolina, Tennessee, Utah

Very High Seismic Hazard: Alaska, California, Guam, Hawaii, Idaho, Montana, Nevada, Oregon, Puerto Rico, Virgin Islands, Washington, Wyoming

When you evaluate your firm’s risk of damage from flooding to develop your business continuity and disaster recovery plans, it’s also a good idea to check with responsible individuals and ensure that your organization has sufficient insurance in place to protect it from the financial impact of a flood. In the United States, most general business policies do not cover flood damage, and you should investigate obtaining specialized government-backed flood insurance under FEMA’s National Flood Insurance Program.

Although flooding is theoretically possible in almost any region of the world, it is much more likely to occur in certain areas.
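Flood exposure like this can be quantified with the risk formulas used in the previous chapter’s review questions: SLE = AV × EF and ALE = SLE × ARO. The sketch below is illustrative only; the asset value and exposure factor echo the review-question scenario, while the flood ARO of 0.01 is an assumed figure, not one from the text.

```python
# Worked example of the quantitative risk formulas (SLE = AV * EF, ALE = SLE * ARO).
# All figures are hypothetical illustrations.

asset_value = 3_000_000      # AV: value of the facility
exposure_factor = 0.90       # EF: portion of value lost in one flood
annualized_rate = 0.01       # ARO: one expected occurrence per 100 years (assumed)

sle = asset_value * exposure_factor   # single loss expectancy
ale = sle * annualized_rate           # annualized loss expectancy

print(f"SLE = ${sle:,.0f}")   # SLE = $2,700,000
print(f"ALE = ${ale:,.0f}")   # ALE = $27,000
```

An ALE computed this way gives the planning team a rough yardstick for how much it is worth spending each year on flood mitigation for that facility.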
FEMA’s National Flood Insurance Program is responsible for completing a flood risk assessment for the entire United States and providing this data to citizens in graphical form. You can view flood maps online at www.esri.com/hazards/. This site also provides valuable information on historic earthquakes, hurricanes, wind storms, hail storms, and other natural disasters to help you in preparing your organization’s risk assessment. When viewing the flood maps, like the one shown in Figure 16.1, you’ll find that the two risks often assigned to an area are the “100-year flood plain” and the “500-year flood plain.” These designations mean that the government expects such areas to have a 1 percent and a 0.2 percent chance, respectively, of flooding in any given year. For a more detailed tutorial on reading flood maps, visit www.fema.gov/mit/tsd/ot_firmr.htm.

Storms

Storms come in many forms and pose diverse risks to a business. Prolonged periods of intense rainfall bring the risk of flash flooding described in the previous section. Hurricanes and tornadoes come with the threat of severe winds exceeding 100 miles per hour that threaten the structural integrity of buildings and turn everyday objects like trees, lawn furniture, and even vehicles into deadly missiles. Hail storms bring a rapid onslaught of destructive ice chunks falling from the sky. Many storms also bring the risk of lightning, which can cause severe damage to sensitive electronic components. For this reason, your business continuity plan should detail appropriate mechanisms to protect against lightning-induced damage and your disaster recovery plan should provide adequate provisions for the power outages and equipment damage that might result from a lightning strike.
Never underestimate the magnitude of damage that a single storm can bring.

If you live in an area susceptible to a certain type of severe storm, it’s important that you regularly monitor weather forecasts from the responsible government agencies. For example, disaster recovery specialists in hurricane-prone areas should periodically check the website of the National Weather Service’s Tropical Prediction Center (www.nhc.noaa.gov) during the hurricane season. This website allows you to monitor Atlantic and Pacific storms that may pose a risk to your region before word of them hits the local news. This allows you to begin a gradual response to the storm before time runs out.

Figure 16.1: Flood hazard map for Miami-Dade County, Florida

Fires

Fires can start for a variety of reasons, both natural and man-made, but both forms can be equally devastating. During the BCP/DRP process, you should evaluate the risk of fire and implement at least basic measures to mitigate that risk and prepare the business for recovery from a catastrophic fire in a critical facility.

Some regions of the world are susceptible to wildfires during the warm season. These fires, once started, spread in somewhat predictable patterns, and fire experts in conjunction with meteorologists can produce relatively accurate forecasts of a wildfire’s potential path.

As with many other types of large-scale natural disasters, you can obtain valuable information about impending threats on the Web. In the United States, the National Interagency Fire Center posts daily fire updates and forecasts on its website: www.nifc.gov/firemaps.html. Other countries have similar warning systems in place.

Other Regional Events

Some regions of the world are prone to localized types of natural disasters.
During the BCP/DRP process, your assessment team should analyze all of your organization’s operating locations and gauge the impact that these types of events might have on your business. For example, many regions of the world are prone to volcanic eruptions. If you conduct operations in an area in close proximity to an active or dormant volcano, your DRP should probably address this eventuality. Other localized natural occurrences include monsoons in Asia, tsunamis in the South Pacific, avalanches in mountainous regions, and mudslides in the western United States.

If your business is geographically diverse, it would be prudent to include area natives on your planning team. At the very least, make use of local resources like government emergency preparedness teams, civil defense organizations, and insurance claim offices to help guide your efforts. These organizations possess a wealth of knowledge and will usually be more than happy to help you prepare your organization for the unexpected—after all, every organization that successfully weathers a natural disaster is one less organization that requires a portion of their valuable recovery resources after disaster strikes.

Man-Made Disasters

The advanced civilization built by mankind over the centuries has become increasingly dependent upon complex interactions between technological, logistical, and natural systems. The same complex interactions that make our sophisticated society possible also present a number of potential vulnerabilities from both intentional and unintentional man-made disasters. In the following sections, we’ll examine a few of the more common disasters to help you analyze your organization’s vulnerabilities when preparing a business continuity plan and disaster recovery plan.

Fires

In the previous section, we explored how large-scale wildfires spread due to natural causes.
Many smaller-scale fires occur due to man-made causes—be it carelessness, faulty electrical wiring, improper fire protection practices, or other reasons. Studies from the Insurance Information Institute indicate that there are at least 1,000 building fires in the United States every day. If one of those fires struck your organization, would you have the proper preventative measures in place to quickly contain it? If the fire destroyed your facilities, how quickly would your disaster recovery plan allow you to resume operations elsewhere?

Bombings/Explosions

Explosions can result from a variety of man-made occurrences. Gases from leaks might fill a room or building and later ignite, causing a damaging blast. In many areas, bombings are also a cause for concern. From a disaster planning point of view, the effects of bombings and explosions are similar to those caused by a large-scale fire. However, planning to avoid the impact of a bombing is much more difficult and relies upon physical security measures such as those discussed in Chapter 19, “Physical Security Requirements.”

Acts of Terrorism

Since the terrorist attacks on September 11, 2001, businesses are increasingly concerned about the risks posed by a terrorist threat. The attacks on September 11 caused many small businesses to simply fold because they did not have in place business continuity/disaster recovery plans that were adequate to ensure their continued viability. Many larger businesses experienced significant losses that caused severe long-term damage. The Insurance Information Institute issued a study one year after the attacks that estimated the total damage from the attacks in New York City at $40 billion (yes, that’s with a b again!).

Your general business insurance may not properly cover your organization against acts of terrorism.
Prior to the September 11, 2001 terrorist attacks, most policies either covered acts of terrorism or didn’t explicitly mention them. After suffering that catastrophic loss, many insurance companies responded by quickly amending policies to exclude losses from terrorist activity. Policy riders and endorsements are sometimes available, but often at an extremely high cost. If your business continuity or disaster recovery plan includes insurance as a means of financial recovery (as it probably should!), you’d be well advised to check your policies and contact your insurance professional to ensure that you’re still covered.

Terrorist acts pose a unique challenge to DRP teams due to their unpredictable nature. Prior to the September 11, 2001 terrorist attacks in New York and Washington, D.C., few DRP teams considered the threat of an airplane crashing into their corporate headquarters significant enough to merit mitigation. Many companies are now asking themselves a number of new “what if” questions regarding terrorist activities. In general, these types of questions are healthy in that they promote dialog between business elements regarding potential threats. On the other hand, disaster recovery planners must emphasize solid risk-management principles and ensure that resources aren’t overallocated to a terrorist threat to the detriment of those DRP/BCP activities that protect against threats more likely to materialize.

Power Outages

Even the most basic disaster recovery plan contains provisions to deal with the threat of a short power outage. Critical business systems are often protected by uninterruptible power supply (UPS) devices capable of running them at least long enough to shut down or long enough to get emergency generators up and running. However, is your organization capable of operating in the face of a sustained power outage?
After Hurricane Andrew struck South Florida in 1992, many areas were without power for weeks. Does your business continuity plan include provisions to keep your business a viable going concern during such a prolonged period without power? Does your disaster recovery plan make ample preparations for the timely restoration of power even if the commercial power grid remains unavailable?

Check your UPSs regularly! These critical devices are often overlooked until they become necessary. Many UPSs contain self-testing mechanisms that report problems automatically, but it’s still a good idea to subject them to regular testing. Also, be sure to audit the number/type of devices plugged in to each UPS. It’s amazing how many people think it’s OK to add “just one more system” to a UPS, and you don’t want to be surprised when the device can’t handle the load during a real power outage!

Today’s technology-driven organizations are increasingly dependent upon electric power, and your BCP/DRP team should consider the provisioning of alternative power sources capable of running business systems for an indefinite period of time. An adequate backup generator could mean the difference when the survival of your business is at stake.

Other Utility and Infrastructure Failures

When planners consider the impact that utility outages may have on their organizations, they naturally think first about the impact of a power outage. However, keep other utilities in mind also. Do you have critical business systems that rely on water, sewers, natural gas, or other utilities? Also consider regional infrastructure such as highways, airports, and railroads. Any of these systems can suffer failures that might not be related to weather or other conditions described in this chapter. Many businesses depend on one or more of these infrastructure services to move people or materials.
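The UPS-auditing advice above boils down to simple arithmetic. The sketch below is a rough first-pass check with assumed device wattages and UPS ratings (consult your vendor’s documentation for real figures); actual battery runtime varies nonlinearly with load and battery age.

```python
# Rough UPS load audit. All wattages and ratings below are hypothetical.
UPS_CAPACITY_WATTS = 1500    # assumed rated output of the UPS
BATTERY_WATT_HOURS = 900     # assumed usable battery energy

devices = {                  # assumed draw of each device plugged in to the UPS
    "file server": 450,
    "network switch": 90,
    "monitoring workstation": 250,
}

total_load = sum(devices.values())                      # combined draw in watts
overloaded = total_load > UPS_CAPACITY_WATTS            # would it trip on outage?
runtime_minutes = BATTERY_WATT_HOURS / total_load * 60  # crude estimate, ignores
                                                        # conversion losses

print(f"Total load: {total_load} W; overloaded: {overloaded}")
print(f"Rough runtime estimate: {runtime_minutes:.0f} minutes")
```

Rerunning a check like this whenever a device is added is one way to catch the “just one more system” problem before a real outage does.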
A failure can paralyze your business’s ability to continue functioning.

If you quickly answered no when asked if you have critical business systems that rely on water, sewers, natural gas, or other utilities, think a little more carefully. Do you consider people a critical business system? If a major storm knocked out the water supply to your facilities and you needed to keep the facilities up and running, would you be able to supply your employees with adequate drinking water to meet their biological needs?

What about your fire protection systems? If any of them are water based, is there a holding tank system in place that contains ample water to extinguish a serious building fire if the public water system were unavailable? Fires often cause serious damage in areas ravaged by storms, earthquakes, and other disasters that might also interrupt the delivery of water.

Hardware/Software Failures

Like it or not, computer systems fail. Hardware failures are the most common cause of unplanned downtime. Hardware components simply wear out and refuse to continue performing, or they suffer physical damage. Software systems contain bugs or are given improper or unexpected operating instructions. For this reason, BCP/DRP teams must provide adequate redundancy in their systems. If zero downtime is a mandatory requirement, the best solution is to use fully redundant failover servers in separate locations attached to separate communications links and infrastructures. If one server is damaged or destroyed, the other will instantly take over the processing load. For more information on this concept, see the section “Remote Mirroring” later in this chapter.

Due to financial constraints, maintaining fully redundant systems is not always possible. In those circumstances, the BCP/DRP team should address how replacement parts will be quickly obtained and installed.
As many parts as possible should be maintained in a local parts inventory for quick replacement; this is especially true for hard-to-find parts that would otherwise have to be shipped in. After all, how many organizations could do without telephones for three days while a critical PBX component is shipped from an overseas location and installed on site?

Strikes

When designing your business continuity and disaster recovery plans, don’t forget about the importance of the human factor in emergency planning. One form of man-made disaster that is often overlooked is the possibility of a strike or other labor crisis. If a large segment of your employees walked out at the same time, what impact would that have on your business? How long would you be able to sustain operations without the regular full-time employees that staff a certain area? Your BCP and DRP teams should address these concerns, providing alternative plans if a labor crisis occurs.

Theft/Vandalism

In a previous section, we looked at the threat that terrorist activities pose to an organization. Theft and vandalism represent the same kind of activity on a much smaller scale. In most cases, however, there’s a far greater chance that your organization will be affected by theft or vandalism than by a terrorist attack. Insurance provides some financial protection against these events (subject to deductibles and limitations of coverage), but acts of this nature can cause serious damage to your business, on both a short-term and long-term basis. Your business continuity and disaster recovery plans should include adequate preventative measures to control the frequency of these occurrences as well as contingency plans to mitigate the effects theft and vandalism have on your ongoing operations.

NYC Blackout

On August 14, 2003, the lights went out in New York City and large portions of the northeastern and midwestern United States when a series of cascading failures caused the collapse of a major power grid.

Fortunately, security professionals in the New York area were ready. Spurred to action by the September 11, 2001 terrorist attacks, many businesses had updated their disaster recovery plans and taken measures to ensure their continued operations in the wake of another disaster. The blackout served as that test, and many organizations were able to continue operating on alternate power sources or transferred control seamlessly to offsite data processing centers.

There were a few important lessons learned during the blackout that provide insight for BCP/DRP teams around the world:

- Ensure that your alternate processing sites are located sufficiently far away from your main site that they won’t likely be affected by the same disaster.

- Remember that the threats facing your organization are both internal and external. Your next disaster may come from a terrorist attack, building fire, or malicious code running loose on your network. Take steps to ensure that your alternate sites are segregated from the main facility in a manner that protects against all of these threats.

- Disasters don’t usually come with advance warning. If real-time operations are critical to your organization, be sure that your backup sites are ready to assume primary status at a moment’s notice.

Keep the impact that theft may have on your operations in mind when planning your parts inventory.
It would be a good idea to keep an extra inventory of items with a high pilferage rate, such as RAM chips and laptops.

Recovery Strategy

When a disaster interrupts your business, your disaster recovery plan should be able to kick in nearly automatically and begin providing support to recovery operations. The disaster recovery plan should be designed in such a manner that the first employees on the scene can immediately begin the recovery effort in an organized fashion, even if members of the official DRP team have not yet arrived on site. In the following sections, we’ll examine the critical subtasks involved in crafting an effective disaster recovery plan that will guide the rapid restoration of normal business processes and the resumption of activity at the primary business location.

In addition to improving your response capabilities, purchasing insurance can reduce the risk of financial losses. When selecting insurance, be sure to purchase sufficient coverage to enable you to recover from a disaster. Simple value coverage may be insufficient to encompass actual replacement costs. If your property insurance includes an Actual Cash Value (ACV) clause, your damaged property will be compensated based on the value of the items on the date of loss plus 10 percent.

Valuable papers insurance coverage provides protection for inscribed, printed, and written documents, manuscripts, and other printed business records. However, it does not cover damage to paper money and printed security certificates.

Business Unit Priorities

In order to recover your business operations with the greatest possible efficiency, you must engineer your disaster recovery plan so that the business units with the highest priority are recovered first. To achieve this goal, the DRP team must first identify those business units and agree on an order of prioritization. If this process sounds familiar, it should!
This is very similar to the prioritization task the BCP team performed during the Business Impact Assessment, discussed in the previous chapter. In fact, if you have a completed BIA, you should use the resulting documentation as the basis for this prioritization task.

As a minimum requirement, the output from this task should be a simple listing of business units in prioritized order. However, a much more useful deliverable would be a more detailed list broken down into specific business processes listed in order of priority. This business process–oriented list is much more reflective of real-world conditions, but it requires considerable additional effort. It will, however, greatly assist in the recovery effort—after all, not every task performed by your highest-priority business unit will be of the highest priority. You might find that it would be best to restore the highest-priority unit to 50 percent capacity and then move on to lower-priority units to achieve some minimum operating capacity across the organization before attempting a full recovery effort.

Crisis Management

If a disaster strikes your organization, it is likely that panic will set in. The best way to combat this is with an organized disaster recovery plan. The individuals in your business who are most likely to first notice an emergency situation (i.e., security guards, technical personnel, etc.) should be fully trained in disaster recovery procedures and know the proper notification procedures and immediate response mechanisms.

Many things that normally seem like common sense (such as calling 911 in the event of a fire) may slip the minds of panicked employees seeking to flee an emergency. The best way to combat this is with continuous training on disaster recovery responsibilities.
Returning to the fire example, all employees should be trained to activate the fire alarm or contact emergency officials when they spot a fire (after, of course, taking appropriate measures to protect themselves). After all, it’s better that the fire department receives 10 different phone calls reporting a fire at your organization than it is for everyone to assume that someone else already took care of it.

Crisis management is a science and an art form. If your training budget permits, investing in crisis training for your key employees would be a good idea. This will ensure that at least some of your employees know the proper way to handle emergency situations and can provide the all-important “on the scene” leadership to panic-stricken coworkers.

Emergency Communications

When a disaster strikes, it is important that the organization be able to communicate internally as well as with the outside world. A disaster of any significance is easily noticed, and if the organization is unable to keep the outside world informed of its recovery status, the public is apt to fear the worst and assume that the organization is unable to recover. It is also essential that the organization be able to communicate internally during a disaster so that employees know what is expected of them—whether they are to return to work or report to another location, for instance.

In some cases, the circumstances that brought about the disaster to begin with may have also damaged some or all normal means of communication.
A violent storm or an earthquake may have also knocked out telecommunications systems; at that point it’s too late to try to figure out other means of communicating both internally and externally.

Work Group Recovery

When designing your disaster recovery plan, it’s important to keep your goal in mind—the restoration of work groups to the point that they can resume their activities in their usual work locations. It’s very easy to get sidetracked and think of disaster recovery as purely an IT effort focused on restoring systems and processes to working order.

To facilitate this effort, it’s sometimes best to develop separate recovery facilities for different work groups. For example, if you have several subsidiary organizations that are in different locations and that perform tasks similar to the tasks that work groups at your office perform, you may wish to consider temporarily relocating those work groups to the other facility and having them communicate electronically and via telephone with other business units until they’re ready to return to the main operations facility.

Larger organizations may have difficulty finding recovery facilities capable of handling the entire business operation. This is another example of a circumstance in which independent recovery of different work groups is appropriate.

Alternate Processing Sites

One of the most important elements of the disaster recovery plan is the selection of alternate processing sites to be used when the primary sites are unavailable. There are many options available when considering recovery facilities, limited only by the creative minds of disaster recovery planners and service providers.
In the following sections, we'll take a look at the several types of sites commonly used in disaster recovery planning: cold sites, warm sites, hot sites, mobile sites, service bureaus, and multiple sites.

When choosing any type of alternate processing site, be sure to place it far enough away from your primary location that it won't likely be affected by the same disaster that disables your primary site!

Cold Sites

Cold sites are simply standby facilities large enough to handle the processing load of an organization and with appropriate electrical and environmental support systems. They may be large warehouses, empty office buildings, or other similar structures. However, the cold site has no computing facilities (hardware or software) preinstalled and does not have activated broadband communications links. Many cold sites do have at least a few copper telephone lines, and some sites may have standby links that can be activated with minimal notification.

The major advantage of a cold site is its relatively low cost—there is no computing base to maintain and no monthly telecommunications bill when the site is not in use. However, the drawbacks of such a site are obvious—there is a tremendous lag between the time the decision is made to activate the site and the time the site is actually ready to support business operations. Servers and workstations must be brought in and configured. Data must be restored from backup tapes. Communications links must be activated or established. The time to activate a cold site is often measured in weeks, making timely recovery close to impossible and often yielding a false sense of security.

Hot Sites

The hot site is the exact opposite of the cold site.
In this type of configuration, a backup facility is maintained in constant working order, with a full complement of servers, workstations, and communications links ready to assume primary operations responsibilities. The servers and workstations are all preconfigured and loaded with appropriate operating system and application software.

When choosing a facility, make sure it is far enough away from the original site so as not to be affected by the same disaster, yet close enough that it does not take all day to drive to the backup site.

The data on the primary site servers is periodically or continuously replicated to the corresponding servers at the hot site, ensuring that the hot site has up-to-date data. Depending upon the bandwidth available between the two sites, the hot site data may be replicated instantaneously. If that is the case, operators could simply move operations to the hot site at a moment's notice. If it's not the case, disaster recovery managers have three options for activating the hot site:

- If there is sufficient time before the primary site must be shut down, they may force replication between the two sites right before the transition of operational control.
- If this is not possible, they may hand-carry backup tapes of the transaction logs from the primary site to the hot site and manually apply any transactions that took place since the last replication.
- If there aren't any available backups and it wasn't possible to force replication, the disaster recovery team may simply accept the loss of a portion of the data.

The advantages of a hot site are quite obvious—the level of disaster recovery protection provided by this type of site is unsurpassed. However, the cost is extremely high.
Maintaining a hot site essentially doubles the organization's budget for hardware, software, and services and requires the use of additional manpower to maintain the site.

If you use a hot site, never forget that it has copies of your production data. Be sure to provide that site with the same level of technical and physical security controls you provide at your primary site!

If an organization wishes to maintain a hot site but wants to reduce the expense of equipment and maintenance, it might opt to use a shared hot site facility managed by an outside contractor. However, the inherent danger in these facilities is that they may be overtaxed in the event of a widespread disaster and be unable to service all of their clients simultaneously. If your organization considers such an arrangement, be sure to investigate these issues thoroughly, both before signing the contract and periodically during the contract term.

Warm Sites

Warm sites are a middle ground between hot sites and cold sites for disaster recovery specialists. They always contain the equipment and data circuits necessary to rapidly establish operations. As in hot sites, this equipment is usually preconfigured and ready to run appropriate applications to support the organization's operations. Unlike hot sites, however, warm sites do not typically contain copies of the client's data. The main requirement in bringing a warm site to full operational status is the transportation of appropriate backup media to the site and restoration of critical data on the standby servers.

Activation of a warm site typically takes at least 12 hours from the time a disaster is declared. However, warm sites avoid the significant telecommunications and personnel costs inherent in maintaining a near-real-time copy of the operational data environment.
As with hot sites and cold sites, warm sites may also be obtained on a shared facility basis. If you choose this option, be sure that you have a "no lockout" policy written into your contract guaranteeing you the use of an appropriate facility even during a period of high demand. It's a good idea to take this concept one step further and physically inspect the facilities and the contractor's operational plan to reassure yourself that the facility will indeed be able to back up the "no lockout" guarantee when push comes to shove.

Mobile Sites

Mobile sites are non-mainstream alternatives to traditional recovery sites. They typically consist of self-contained trailers or other easily relocated units. These sites come with all of the environmental control systems necessary to maintain a safe computing environment. Larger corporations sometimes maintain these sites on a "fly-away" basis, ready to deploy them to any operating location around the world via air, rail, sea, or surface transportation. Smaller firms might contract with a mobile site vendor in the local area to provide these services on an as-needed basis.

If your disaster recovery plan depends upon a work group recovery strategy, mobile sites can be an excellent way to implement that approach. They are often large enough to accommodate entire (small!) work groups.

Mobile sites are often configured as cold sites or warm sites, depending upon the disaster recovery plan they are designed to support. It is also possible to configure a mobile site as a hot site, but this is not normally done because it is not often known in advance where a mobile site will be deployed.

Service Bureaus

A service bureau is a company that leases computer time. Service bureaus own large server farms and often fields of workstations. Any organization can purchase a contract with a service bureau to consume some portion of their processing capacity. Access can be on site or remote.
A service bureau can usually provide support for all of your IT needs in the event of a disaster, even desktops for workers to use. Your contract with a service bureau will often include testing and backups as well as response time and availability. However, service bureaus regularly oversell their actual capacity, gambling that not all of their contracts will be exercised at the same time. Therefore, there is potential for resource contention in the event of a major disaster. If your company operates in an industry-dense locale, this could be an important concern. You may need to select both a local and a distant service bureau in order to ensure that you can gain access to processing facilities.

Multiple Sites

By splitting or dividing your outfit into several divisions, branches, offices, and so on, you create multiple sites and reduce the impact of a major disaster. In fact, the more sites you employ, the less impact a major disaster on any one site will have. However, for multiple sites to be effective, they must be separated by enough distance that a major disaster cannot affect them all simultaneously. One of the drawbacks of using multiple sites is that it increases the difficulty of managing and administering the entire company when it's spread across a large geographic area in numerous locations.

Mutual Assistance Agreements

Mutual Assistance Agreements (MAAs) are popular in disaster recovery literature but are rarely implemented in real-world practice. In theory, they provide an excellent alternate processing option. Under an MAA, two organizations pledge to assist each other in the event of a disaster by sharing computing facilities or other technological resources.
They appear to be extremely cost effective at first glance—it's not necessary for either organization to maintain expensive alternate processing sites (such as the hot sites, warm sites, cold sites, and mobile processing sites described in the previous sections). Indeed, many MAAs are structured to provide one of the levels of service described. In the case of a cold site, each organization may simply maintain some open space in its processing facilities for the other organization to use in the event of a disaster. In the case of a hot site, the organizations may host fully redundant servers for each other.

However, there are many drawbacks to Mutual Assistance Agreements that prevent their widespread use:

- MAAs are difficult to enforce. The parties are placing trust in each other that the support will materialize in the event of a disaster. However, when push comes to shove, the non-victim might renege on the agreement. The victim may have legal remedies available, but this won't help the immediate disaster recovery effort.
- Cooperating organizations should be located in relatively close proximity to each other to facilitate the transportation of employees between sites. However, this proximity means that both organizations may be vulnerable to the same threats! Your MAA won't do you much good if an earthquake levels your city, destroying the processing sites of both participating organizations!
- Confidentiality concerns often prevent businesses from placing their data in the hands of others. These may be legal concerns (such as in the handling of healthcare or financial data) or business concerns (such as trade secrets or other intellectual property issues).

Despite these concerns, a Mutual Assistance Agreement may be a good disaster recovery solution for your organization—especially if cost is an overriding factor. If you simply can't afford to implement any other type of alternate processing facility, an MAA might provide a degree of valuable protection in the event a localized disaster strikes your business.

Hardware Replacement Locations

One thing to consider when planning mobile sites, and recovery sites in general, is the supply of replacement hardware. There are basically two options for hardware replacement supplies. One option is to employ "in-house" replacement, whereby you warehouse extra and duplicate equipment at a different but nearby location (i.e., a warehouse on the other side of town). (In-house here means you own the equipment already, not that it is necessarily housed under the same roof as your production environment.) If you have a hardware failure or a disaster, you can immediately pull the appropriate equipment from your stash. The other option is an SLA-type agreement with a vendor to provide quick response and delivery time in the event of a disaster. However, even a 4-, 12-, 24-, or 48-hour replacement hardware contract from a vendor does not provide a reliable guarantee that the delivery will actually occur. There are too many uncontrollable variables to rely upon this second option as your sole means of recovery.

Database Recovery

Many organizations rely upon databases to process and track operations, sales, logistics, and other activities vital to their continued viability. For this reason, it's essential that you include database recovery techniques in your disaster recovery plans. It's a wise idea to have a database specialist on the DRP team to provide input as to the technical feasibility of various ideas.
After all, you don't want to allocate several hours to restore a database backup when it's technically impossible to complete the restoration in less than half a day!

In the following sections, we'll take a look at the three main techniques used to create offsite copies of database content: electronic vaulting, remote journaling, and remote mirroring. Each one has specific benefits and drawbacks—you'll need to analyze your organization's computing requirements and available resources to select the option best suited to your firm.

Electronic Vaulting

In an electronic vaulting scenario, database backups are transferred to a remote site in a bulk transfer fashion. The remote location may be a dedicated alternative recovery site (such as a hot site) or simply an offsite location managed within the company or by a contractor for the purpose of maintaining backup data. If you use electronic vaulting, keep in mind that there may be a significant time delay between the time you declare a disaster and the time your database is ready for operation with current data. If you decide to activate a recovery site, technicians will need to retrieve the appropriate backups from the electronic vault and apply them to the soon-to-be production servers at the recovery site.

Be careful when considering vendors for an electronic vaulting contract. Definitions of electronic vaulting vary widely within the industry. Don't settle for a vague promise of "electronic vaulting capability." Insist upon a written definition of the service that will be provided, including the storage capacity, bandwidth of the communications link to the electronic vault, and the time necessary to retrieve vaulted data in the event of a disaster.

As with any type of backup scenario, be certain to periodically test your electronic vaulting setup.
A great method for testing backup solutions is to give disaster recovery personnel a "surprise test," asking them to restore data from a certain day.

Remote Journaling

With remote journaling, data transfers are performed in a more expeditious manner. Data transfers still occur in a bulk transfer fashion, but they occur on a more frequent basis, usually once every hour or less. Unlike electronic vaulting scenarios, where database backup files are transferred, remote journaling setups transfer copies of the database transaction logs containing the transactions that occurred since the previous bulk transfer.

Remote journaling is similar to electronic vaulting in that the transaction logs transferred to the remote site are not applied to a live database server but are maintained in a backup device. When a disaster is declared, technicians retrieve the appropriate transaction logs and apply them to the production database.

Remote Mirroring

Remote mirroring is the most advanced database backup solution. Not surprisingly, it's also the most expensive! Remote mirroring goes beyond the technology used by remote journaling and electronic vaulting; with remote mirroring, a live database server is maintained at the backup site. The remote server receives copies of the database modifications at the same time they are applied to the production server at the primary site. Therefore, the mirrored server is ready to take over an operational role at a moment's notice.

Remote mirroring is a popular database backup strategy for organizations seeking to implement a hot site.
However, when weighing the feasibility of a remote mirroring solution, be sure to take into account the infrastructure and personnel costs required to support the mirrored server as well as the processing overhead that will be added to each database transaction on the mirrored server.

Recovery Plan Development

Once you've established your business unit priorities and gotten a good idea of the appropriate alternative recovery sites for your organization, it's time to put pen to paper and begin drafting a true disaster recovery plan. Don't expect to sit down and write the full plan in one sitting. It's likely that the DRP team will go through many evolutions of draft documents before reaching a final written document that satisfies the operational needs of critical business units and falls within the resource, time, and expense constraints of the disaster recovery budget and available manpower.

In the following sections, we'll explore some of the important items to include in your disaster recovery plan. Depending upon the size of your organization and the number of people involved in the DRP effort, it may be a good idea to maintain several different types of plan documents, intended for different audiences. The following list includes some types of documents to consider:

- Executive summary
- Department-specific plans
- Technical guides for IT personnel responsible for implementing and maintaining critical backup systems
- Checklists for individual members of the disaster recovery team
- Full copies of the plan for critical disaster recovery team members

The use of custom-tailored documents becomes especially important when a disaster occurs or is imminent.
Personnel who need to refresh themselves on the disaster recovery procedures that affect various parts of the organization will be able to refer to their department-specific plans. Critical disaster recovery team members will have checklists to help guide their actions amid the chaotic atmosphere of a disaster. IT personnel will have technical guides helping them get the alternate sites up and running. Finally, managers and public relations personnel will have a simple document that walks them through a high-level picture of the coordinated symphony of an active disaster recovery effort without requiring interpretation from team members busy with tasks directly related to the effort.

Emergency Response

The disaster recovery plan should contain simple yet comprehensive instructions for essential personnel to follow immediately upon recognition that a disaster is in progress or is imminent. These instructions will vary widely depending upon the nature of the disaster, the type of personnel responding to the incident, and the time available before facilities need to be evacuated and/or equipment shut down. For example, the instructions for a large-scale fire will be much more concise than the instructions for how to prepare for a hurricane that is still 48 hours away from a predicted landfall near an operational site. Emergency response plans are often put together in the form of checklists provided to responders. When designing these checklists, keep one essential design principle in mind: arrange the checklist tasks in order of priority, with the most important task first!

It's essential that you keep in mind that these checklists will be executed in the midst of a crisis. It is extremely likely that responders will not be able to complete the entire checklist, especially in the event of a short-notice disaster.
For this reason, you should put the most essential tasks (e.g., "Activate the building alarm") first on the checklist. The lower an item is on the list, the lower the likelihood that it will be completed before an evacuation/shutdown takes place.

Personnel Notification

The disaster recovery plan should also contain a list of personnel to contact in the event of a disaster. Normally, this will include key members of the DRP team as well as those personnel who execute critical disaster recovery tasks throughout the organization. This response checklist should include alternate means of contact (i.e., pager numbers, cell phone numbers, etc.) as well as backup contacts for each role in the event the primary contact cannot be reached or cannot reach the recovery site for one reason or another.

Be sure to consult with the individuals in your organization responsible for privacy before assembling and disseminating a telephone notification checklist. You may need to comply with special policies regarding the use of home telephone numbers and other personal information in the checklist.

The notification checklist should be provided to all personnel who might respond to a disaster. This will enable prompt notification of key personnel. Many firms organize their notification checklists in a "telephone tree" style: each member of the tree contacts the person below them, spreading the notification burden among members of the team instead of relying upon one person to make a number of telephone calls.

If you choose to implement a telephone tree notification scheme, be sure to add a safety net. Have the last person in each chain contact the originator to confirm that their entire chain has been notified.
This lets you rest assured that the disaster recovery team activation is smoothly underway.

Backups and Offsite Storage

Your disaster recovery plan (especially the technical guide) should fully address the backup strategy pursued by your organization. Indeed, this is one of the most important elements of any business continuity plan and disaster recovery plan.

The Power of Checklists

Checklists are an invaluable tool in the face of disaster. They provide a sense of order amidst the chaotic events surrounding a disaster. Take the time to ensure that your response checklists provide first responders with a clear plan that will protect life and property and ensure the continuity of operations.

A checklist for response to a building fire might include the following steps:

1. Activate the building alarm system.
2. Ensure that an orderly evacuation is in progress.
3. After leaving the building, use a cellular telephone to call 911 to ensure that emergency authorities received the alarm notification. Provide additional information on any required emergency response.
4. Ensure that any injured personnel receive appropriate medical treatment.
5. Activate the organization's disaster recovery plan to ensure continuity of operations.

Many system administrators are already familiar with the various types of backups, and you'll benefit by bringing one or more individuals with specific technical expertise in this area onto the BCP/DRP team to provide expert guidance. There are three main types of backups:

Full backups: As the name implies, full backups store a complete copy of the data contained on the protected device. Full backups duplicate every file on the system regardless of the setting of the archive bit.
Once a full backup is complete, the archive bit on every file is reset, turned off, or set to 0.

Incremental backups: Incremental backups store only those files that have been modified since the time of the most recent full or incremental backup. Incremental backups duplicate only files that have the archive bit turned on, enabled, or set to 1. Once an incremental backup is complete, the archive bit on all duplicated files is reset, turned off, or set to 0.

Differential backups: Differential backups store all files that have been modified since the time of the most recent full backup. Differential backups duplicate only files that have the archive bit turned on, enabled, or set to 1. However, unlike full and incremental backups, the archive bit is not changed by the differential backup process.

The most important difference between incremental and differential backups is the time needed to restore data in the event of an emergency. If you use a combination of full and differential backups, you will only need to restore two backups—the most recent full backup and the most recent differential backup. On the other hand, if your strategy combines full backups with incremental backups, you will need to restore the most recent full backup as well as all incremental backups performed since that full backup. The trade-off is the time required to create the backups—differential backups don't take as long to restore, but they take longer to create than incremental backups.

Storage of the backup media is equally critical.
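The archive-bit rules above, and their effect on how many backup sets a restore requires, can be modeled in a short Python sketch. This is a simplified illustration only (the file names and the dictionary-based model are made up for this example, not taken from any vendor's backup engine):

```python
# Toy model of the archive bit: True means "modified since last captured
# by a full or incremental backup." File names are purely illustrative.

def backup(files, kind):
    """Simulate one backup run and return the set of files it copies."""
    if kind == "full":
        copied = set(files)                       # full copies everything
    else:
        copied = {f for f, bit in files.items() if bit}
    if kind in ("full", "incremental"):           # these two clear the bit;
        for f in copied:                          # differential does not
            files[f] = False
    return copied

files = {"payroll.db": True, "orders.db": True, "logo.png": True}
backup(files, "full")                             # Monday: all three copied

files["orders.db"] = True                         # Tuesday's change
print(sorted(backup(files, "differential")))      # ['orders.db']
files["payroll.db"] = True                        # Wednesday's change
print(sorted(backup(files, "differential")))      # ['orders.db', 'payroll.db']
# Differentials grow, but a restore needs only full + latest differential.

files = {"payroll.db": True, "orders.db": True, "logo.png": True}
backup(files, "full")                             # Monday again
files["orders.db"] = True
print(sorted(backup(files, "incremental")))       # ['orders.db']
files["payroll.db"] = True
print(sorted(backup(files, "incremental")))       # ['payroll.db']
# Incrementals stay small, but a restore needs full + every incremental.
```

In this simple model you can also see why mixing incrementals and differentials in one schedule is risky: an incremental clears archive bits that a later differential would otherwise rely on.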
It may be convenient to store backup media in or near the primary operations center to easily fulfill user requests for backup data, but you'll definitely need to keep copies of the media in at least one offsite location to provide redundancy in the event your primary operating location is suddenly destroyed.

Using Backups

In case of a system failure, many companies use one of two common methods to restore data from backups. In the first situation, they run a full backup on Monday night and then run differential backups every other night of the week. If a failure occurs Saturday morning, they restore Monday's full backup and then restore only Friday's differential backup. In the second situation, they run a full backup on Monday night and incremental backups every other night of the week. If a failure occurs Saturday morning, they restore Monday's full backup and then restore each incremental backup in original chronological order (i.e., Wednesday's, then Friday's, etc.).

Most organizations adopt a backup strategy that utilizes more than one of the three backup types along with a media rotation scheme. Both allow backup administrators access to a sufficiently large range of backups to complete user requests and provide fault tolerance while minimizing the amount of money that must be spent on backup media. A common strategy is to perform full backups over the weekend and incremental or differential backups on a nightly basis.

Backup Media Formats

The physical characteristics and the rotation cycle are two factors that a worthwhile backup solution should track and manage. The physical characteristics refer to the type of tape drive in use; this defines the physical wear placed on the media. The rotation cycle is the frequency of backups and the retention length of protected data.
By overseeing these characteristics, you can be assured that valuable data will be retained on serviceable backup media. Backup media has a maximum use limit; perhaps 5, 10, or 20 rewrites may be made before the media begins to lose reliability (statistically speaking). There is a wide variety of backup media formats:

- Digital Audio Tape (DAT)
- Quarter Inch Cartridge (QIC), commonly used in SOHO backups
- 8mm tape, commonly used in helical scan tape drives but since superseded by DLT
- Digital Linear Tape (DLT)
- Write Once, Read Many (WORM), a storage type often used to retain audit trails
- CD-R/W media, which usually offers faster file access than tape and is useful for temporary storage of changeable data

Writable CDs and DVDs as well as Jaz and Zip drives are considered inappropriate for network backup solutions, primarily because of their limited capacity but in some cases because of their speed or buffer underflow problems. Buffer underflow problems occurred before the advent of burn-proof software; underflow occurs when the write buffer of the drive empties during the writing process, causing an error on the media that renders it useless. However, these types of backup media are appropriate for end users performing backups of limited sets of data from specific applications or for personal archiving purposes.

Backup Common Sense

No matter what the backup solution, media, or method, there are several common issues with backups that must be addressed. For instance, backup and restoration activities can be bulky and slow. Such data movement can significantly affect the performance of a network, especially during normal production hours. Thus, backups should be scheduled during off-peak periods (e.g., at night).

The amount of backup data increases over time. This causes the backup (and restoration) processes to take longer each time and to consume more space on the backup media. Thus, you need to build into your backup solution sufficient capacity to handle a reasonable amount of growth over a reasonable amount of time. What is reasonable depends entirely on your environment and budget.

With periodic backups (i.e., backups that are run every 24 hours), there is always the potential for data loss of up to one full period's worth of work. In fact, Murphy's law dictates that a server crash never occurs immediately after a successful backup; instead, it always happens just before the next backup begins. To avoid this window of loss, you need to deploy some form of real-time continuous backup, such as RAID, clustering, or server mirroring.

Tape Rotation

There are several commonly used tape rotation strategies for backups: the Grandfather-Father-Son (GFS) strategy, the Tower of Hanoi strategy, and the Six Cartridge Weekly Backup strategy. These strategies can be fairly complex, especially with large tape sets. They can be implemented manually using a pencil and a calendar or automatically by using either commercial backup software or a fully automated Hierarchical Storage Management (HSM) system. An HSM system is an automated robotic backup jukebox consisting of 32 or 64 optical or tape backup devices.
All of the drive elements within an HSM system are configured as a single drive array (a bit like RAID).

Details about the various tape rotations are beyond the scope of this book, but if you want to learn more about them, search by their names on the Internet.

Software Escrow Arrangements

A software escrow arrangement is a unique tool used to protect a company against the failure of a software developer to provide adequate support for its products or against the possibility that the developer will go out of business and no technical support will be available for the product.

Focus your efforts on negotiating software escrow agreements with those suppliers you fear may go out of business due to their size. It's not likely that you'll be able to negotiate such an agreement with a firm like Microsoft, unless you are responsible for an extremely large corporate account with serious bargaining power. On the other hand, it's equally unlikely that a firm of Microsoft's magnitude will go out of business, leaving end users high and dry.

If your organization depends upon custom-developed software or software products produced by a small firm, you may wish to consider developing this type of arrangement as part of your disaster recovery plan. Under a software escrow agreement, the developer provides copies of the application source code to an independent third-party organization. This third party then maintains updated backup copies of the source code in a secure fashion. The agreement between the end user and the developer specifies "trigger events," such as the failure of the developer to meet terms of a service level agreement (SLA) or the liquidation of the developer's firm. When a trigger event takes place, the third party releases copies of the application source code to the end user.
The end user can then analyze the source code to resolve application issues or implement software updates.

External Communications

During the disaster recovery process, it will be necessary to communicate with various entities outside of your organization. You will need to contact vendors to provide supplies as they are needed to support the disaster recovery effort. Your clients will want to contact you for reassurance that you are still in operation. Public relations officials may need to contact the media or investment firms, and managers may need to speak to governmental authorities. For these reasons, it is essential that your disaster recovery plan include appropriate channels of communication to the outside world in a quantity sufficient to meet your operational needs. Using the CEO as your spokesperson during a disaster is usually not a sound business or recovery practice; a media liaison should be hired, trained, and prepared to take on this responsibility.

Utilities

As discussed in previous sections of this chapter, your organization relies upon several utilities to provide critical elements of your infrastructure—electric power, water, natural gas, sewer service, and so on. Your disaster recovery plan should contain contact information and procedures to troubleshoot these services if problems arise during a disaster.

Logistics and Supplies

The logistical problems surrounding a disaster recovery operation are immense. You will suddenly face the problem of moving large numbers of people, equipment, and supplies to alternate recovery sites. It's also possible that people will actually be living at those sites for an extended period of time, and the disaster recovery team will be responsible for providing them with food, water, shelter, and appropriate facilities.
Your disaster recovery plan should contain provisions for this type of operation if it falls within the scope of your expected operational needs.

Recovery vs. Restoration

It is sometimes useful to separate disaster recovery tasks from disaster restoration tasks. This is especially true when the recovery effort is expected to take a significant amount of time. A disaster recovery team may be assigned to implement and maintain operations at the recovery site while a salvage team is assigned to restore the primary site to operational capacity. These allocations should be made according to the needs of your organization and the types of disasters that you face.

The recovery team has a very short time frame in which to operate. They must put the DRP into action and restore IT capabilities as swiftly as possible. If the recovery team fails to restore business processes within the maximum tolerable downtime (MTD) or recovery time objective (RTO), the company fails.

Once the original site is deemed safe for people, the salvage team begins its work. Its job is to restore the company to its full original capabilities and, if necessary, to the original location. If the original location no longer exists, a new primary site is selected. The salvage team must rebuild or repair the IT infrastructure. Because this activity is essentially the same as building a new IT system, the return from the alternate/recovery site back to the primary/original site is itself a risky activity. Fortunately, the salvage team has more time to work than the recovery team. The salvage team must ensure the reliability of the new IT infrastructure. This is done by returning the least-mission-critical processes to the restored original site first, to stress-test the rebuilt network. As the restored site shows resiliency, more important processes are transferred.
A serious vulnerability exists when mission-critical processes are returned to the original site. The act of returning to the original site could cause a disaster of its own. Therefore, the state of emergency cannot be declared over until full normal operations have returned to the restored original site.

At the conclusion of any disaster recovery effort, the time will come to restore operations at the primary site and terminate any processing sites operating under the disaster recovery agreement. Your DRP should specify the criteria used to determine when it is appropriate to return to the primary site and guide the DRP recovery and salvage teams through an orderly transition.

Training and Documentation

As with the business continuity plan, it is essential that you provide training to all personnel who will be involved in the disaster recovery effort. The level of training required will vary according to an individual's role in the effort and their position within the company. When designing a training plan, you should consider including the following elements:

- Orientation training for all new employees
- Initial training for employees taking on a new disaster recovery role for the first time
- Detailed refresher training for disaster recovery team members
- Brief refresher training for all other employees (can be accomplished as part of other meetings and through a medium like e-mail newsletters sent to all employees)

Loose-leaf binders provide an excellent option for storage of disaster recovery plans. You can distribute single-page changes to the plan without destroying a national forest!

The disaster recovery plan should also be fully documented. Earlier in this chapter, we discussed several of the documentation options available to you. Be sure that you implement the necessary documentation programs and modify the documentation as changes to the plan occur.
Because of the rapidly changing nature of the disaster recovery and business continuity plans, you might consider publication on a secured portion of your organization's intranet.

Your DRP should be treated as an extremely sensitive document and provided to individuals on a compartmentalized, need-to-know basis only. Individuals who participate in the plan should fully understand their roles, but they do not need to know or have access to the entire plan. Of course, it is essential to ensure that key DRP team members and senior management have access to the entire plan and understand the high-level implementation details. You certainly don't want this knowledge to rest in the mind of one individual.

Remember that a disaster may render your intranet unavailable. If you choose to distribute your disaster recovery and business continuity plans through an intranet, be sure that you maintain an adequate number of printed copies of the plan at both the primary and alternate sites and maintain only the most current copy!

Testing and Maintenance

Every disaster recovery plan must be tested on a periodic basis to ensure that the plan's provisions are viable and that it meets the changing needs of the organization. The types of tests that you are able to conduct will depend upon the types of recovery facilities available to you, the culture of your organization, and the availability of disaster recovery team members. The five main test types—checklist tests, structured walk-throughs, simulation tests, parallel tests, and full-interruption tests—are discussed in the remaining sections of this chapter.

Checklist Test

The checklist test is one of the simplest tests to conduct, but it is also one of the most critical.
In this type of test, you simply distribute copies of the disaster recovery checklists to the members of the disaster recovery team for review. This allows you to simultaneously accomplish three goals. First, it ensures that key personnel are aware of their responsibilities and have that knowledge refreshed on a periodic basis. Second, it provides individuals with an opportunity to review the checklists for obsolete information and update any items that require modification due to changes within the organization. Finally, in large organizations, it aids in the identification of situations in which key personnel have left the company and nobody bothered to reassign their disaster recovery responsibilities! This is also a good reason why disaster recovery responsibilities should be included in job descriptions.

Structured Walk-Through

The structured walk-through takes testing one step further. In this type of test, often referred to as a "table-top exercise," members of the disaster recovery team gather in a large conference room and role-play a disaster scenario. Normally, the exact scenario is known only to the test moderator, who presents the details to the team at the meeting. The team members then refer to their copies of the disaster recovery plan and discuss the appropriate responses to that particular type of disaster.

Simulation Test

Simulation tests are similar to structured walk-throughs. In simulation tests, disaster recovery team members are presented with a scenario and asked to develop an appropriate response. Unlike the tests previously discussed, some of these response measures are then tested.
This may involve the interruption of noncritical business activities and the use of some operational personnel.

Parallel Test

Parallel tests represent the next level in testing and involve actually relocating personnel to the alternate recovery site and implementing site activation procedures. The employees relocated to the site perform their disaster recovery responsibilities in the same manner as they would for an actual disaster. The only difference is that operations at the main facility are not interrupted. That site retains full responsibility for conducting the day-to-day business of the organization.

Full-Interruption Test

Full-interruption tests operate in a manner similar to parallel tests, but they involve actually shutting down operations at the primary site and shifting them to the recovery site. For obvious reasons, full-interruption tests are extremely difficult to arrange, and you often encounter resistance from management.

Maintenance

Remember that your disaster recovery plan is a living document. As your organization's needs change, you must adapt the disaster recovery plan to meet those changed needs. You will discover many necessary modifications through the use of a well-organized and coordinated testing plan. Minor changes may often be made through a series of telephone conversations or e-mails, whereas major changes may require one or more meetings of the full disaster recovery team.

Summary

Disaster recovery planning is a critical portion of a comprehensive information security program. No matter how comprehensive your business continuity plan, the day may come when your business is interrupted by a disaster and you have the task of quickly and efficiently restoring operations to the primary site. Keep in mind the old adage that an ounce of prevention is worth a pound of cure.
Spending the time and effort developing a comprehensive disaster recovery plan will greatly ease the process of recovering operations in the midst of a chaotic emergency.

An organization's disaster recovery plan is one of the most important documents under the purview of security professionals. It should provide guidance to the personnel responsible for ensuring the continuity of operations in the face of disaster. The DRP provides an orderly sequence of events designed to activate alternate processing sites while simultaneously restoring the primary site to operational status. Security professionals should ensure that adequate programs are in place so that those team members charged with disaster recovery duties are well trained for their roles under the plan.

Exam Essentials

Know the common types of natural disasters that may threaten an organization. Natural disasters that commonly threaten organizations include earthquakes, floods, storms, fires, tsunamis, and volcanic eruptions.

Know the common types of man-made disasters that may threaten an organization. Explosions, electrical fires, terrorist acts, power outages, other utility failures, infrastructure failures, hardware/software failures, labor difficulties, theft, and vandalism are all common man-made disasters.

Be familiar with the common types of recovery facilities. The common types of recovery facilities are cold sites, warm sites, hot sites, mobile sites, service bureaus, and multiple sites.
It is important that you understand the benefits and drawbacks of each of these facilities.

Explain the potential benefits behind Mutual Assistance Agreements as well as the reasons they are not commonly implemented in businesses today. Mutual Assistance Agreements (MAAs) provide an inexpensive alternative to disaster recovery sites, but they are not commonly used because they are difficult to enforce. Organizations participating in an MAA may also be shut down by the same disaster, and MAAs raise confidentiality concerns.

Know the five types of disaster recovery plan tests and the impact each has on normal business operations. The five types of disaster recovery plan tests are checklist tests, structured walk-throughs, simulation tests, parallel tests, and full-interruption tests. Checklist tests are purely paperwork exercises, whereas structured walk-throughs involve a project team meeting. Neither has an impact on business operations. Simulation tests may shut down noncritical business units. Parallel tests involve relocation of personnel but do not affect day-to-day operations. Full-interruption tests involve shutting down primary systems and shifting responsibility to the recovery facility.

Written Lab

Answer the following questions about disaster recovery planning:

1. What are some of the main concerns businesses have when considering adopting a Mutual Assistance Agreement?

2. List and explain the five types of disaster recovery tests.

3. Explain the differences between the three types of backup strategies discussed in this chapter.

Review Questions

1. What is the end goal of Disaster Recovery Planning?

A. Preventing business interruption
B. Setting up temporary business operations
C. Restoring normal business activity
D.
Minimizing the impact of a disaster

2. Which one of the following is an example of a man-made disaster?

A. Tsunami
B. Earthquake
C. Power outage
D. Lightning strike

3. According to the Federal Emergency Management Agency, approximately what percentage of U.S. states is considered to have at least a moderate risk of seismic activity?

A. 20 percent
B. 40 percent
C. 60 percent
D. 80 percent

4. Which one of the following disaster types is not normally covered by standard business or homeowner's insurance?

A. Earthquake
B. Flood
C. Fire
D. Theft

5. In the wake of the September 11, 2001 terrorist attacks, what industry made drastic changes that directly impact DRP/BCP activities?

A. Tourism
B. Banking
C. Insurance
D. Airline

6. Which one of the following statements about Business Continuity Planning and Disaster Recovery Planning is not correct?

A. Business Continuity Planning is focused on keeping business functions uninterrupted when a disaster strikes.
B. Organizations can choose whether to develop Business Continuity Planning or Disaster Recovery Planning plans.
C. Business Continuity Planning picks up where Disaster Recovery Planning leaves off.
D. Disaster Recovery Planning guides an organization through recovery of normal operations at the primary facility.

7. What does the term "100-year flood plain" mean to emergency preparedness officials?

A. The last flood of any kind to hit the area was more than 100 years ago.
B. A flood is expected to hit the area once every 100 years.
C. The area is expected to be safe from flooding for at least 100 years.
D. The last significant flood to hit the area was more than 100 years ago.

8. In which one of the following database recovery techniques is an exact, up-to-date copy of the database maintained at an alternative location?

A. Transaction logging
B. Remote journaling
C. Electronic vaulting
D.
Remote mirroring

9. What disaster recovery principle best protects your organization against hardware failure?

A. Consistency
B. Efficiency
C. Redundancy
D. Primacy

10. What Business Continuity Planning technique can help you prepare the business unit prioritization task of Disaster Recovery Planning?

A. Vulnerability Analysis
B. Business Impact Assessment
C. Risk Management
D. Continuity Planning

11. Which one of the following alternative processing sites takes the longest time to activate?

A. Hot site
B. Mobile site
C. Cold site
D. Warm site

12. What is the typical time estimate to activate a warm site from the time a disaster is declared?

A. 1 hour
B. 6 hours
C. 12 hours
D. 24 hours

13. Which one of the following items is a characteristic of hot sites but not a characteristic of warm sites?

A. Communications circuits
B. Workstations
C. Servers
D. Current data

14. What type of database backup strategy involves bulk transfers of data to a remote site on a periodic basis but does not involve maintenance of a live backup server at the remote site?

A. Transaction logging
B. Remote journaling
C. Electronic vaulting
D. Remote mirroring

15. What type of document will help public relations specialists and other individuals who need a high-level summary of disaster recovery efforts while they are underway?

A. Executive summary
B. Technical guides
C. Department-specific plans
D. Checklists

16. What Disaster Recovery Planning tool can be used to protect an organization against the failure of a critical software firm to provide appropriate support for their products?

A. Differential backups
B. Business Impact Assessment
C. Incremental backups
D. Software escrow agreement

17. What type of backup involves always storing copies of all files modified since the most recent full backup?

A. Differential backups
B. Partial backup
C.
Incremental backups
D. Database backup

18. What combination of backup strategies provides the fastest backup creation time?

A. Full backups and differential backups
B. Partial backups and incremental backups
C. Full backups and incremental backups
D. Incremental backups and differential backups

19. What combination of backup strategies provides the fastest backup restoration time?

A. Full backups and differential backups
B. Partial backups and incremental backups
C. Full backups and incremental backups
D. Incremental backups and differential backups

20. What type of disaster recovery plan test fully evaluates operations at the backup facility but does not shift primary operations responsibility from the main site?

A. Structured walk-through
B. Parallel test
C. Full-interruption test
D. Simulation test

Answers to Review Questions

1. C. Disaster Recovery Planning picks up where Business Continuity Planning leaves off. Once a disaster interrupts the business operations, the goal of DRP is to restore normal business activity as quickly as possible.

2. C. A power outage is an example of a man-made disaster. The other events listed—tsunamis, earthquakes, and lightning strikes—are all naturally occurring events.

3. D. As shown in Table 16.1, 40 of the 50 U.S. states are considered to have a moderate, high, or very high risk of seismic activity.

4. B. Most general business insurance and homeowner's insurance policies do not provide any protection against the risk of flooding or flash floods. If floods pose a risk to your organization, you should consider purchasing supplemental flood insurance under FEMA's National Flood Insurance Program.

5. C.
Although all of the industries listed in the options made changes to their practices after September 11, 2001, the insurance industry's change toward noncoverage of acts of terrorism most directly impacts the BCP/DRP process.

6. C. The opposite of this statement is true—Disaster Recovery Planning picks up where Business Continuity Planning leaves off. The other three statements are all accurate reflections of the role of Business Continuity Planning and Disaster Recovery Planning.

7. B. The term "100-year flood plain" is used to describe an area where flooding is expected once every 100 years. It can also be said that there is a 1 percent probability of flooding in any given year.

8. D. When you use remote mirroring, an exact copy of the database is maintained at an alternative location. You keep the remote copy up-to-date by executing all transactions on both the primary and remote site at the same time.

9. C. Redundant systems/components provide protection against the failure of one particular piece of hardware.

10. B. During the Business Impact Assessment phase, you must identify the business priorities of your organization to assist with the allocation of BCP resources. This same information can be used to drive the DRP business unit prioritization.

11. C. The cold site contains none of the equipment necessary to restore operations. All of the equipment must be brought in and configured and data must be restored to it before operations can commence. This often takes weeks.

12. C. Warm sites typically take about 12 hours to activate from the time a disaster is declared. This is compared to the relatively instantaneous activation of a hot site and the lengthy (at least a week) time required to bring a cold site to operational status.

13. D.
Warm sites and hot sites both contain the workstations, servers, and communications circuits necessary to achieve operational status. The main difference between the two alternatives is that hot sites contain near real-time copies of the operational data, whereas warm sites require the restoration of data from backup.

14. C. In an electronic vaulting scenario, bulk transfers of data occur between the primary site and the backup location on a periodic basis. These backups are stored at the remote location but are not maintained on a live database server. Once a disaster is declared, technicians retrieve the data from the vault and apply it to production servers.

15. A. The executive summary provides a high-level view of the entire organization's disaster recovery efforts. This document is useful for the managers and leaders of the firm as well as public relations personnel who need a nontechnical perspective on this complex effort.

16. D. Software escrow agreements place the application source code in the hands of an independent third party, thus providing firms with a "safety net" in the event a developer goes out of business or fails to honor the terms of a service agreement.

17. A. Differential backups involve always storing copies of all files modified since the most recent full backup, regardless of any incremental or differential backups created during the intervening time period.

18. C. Any backup strategy must include full backups at some point in the process. Incremental backups are created faster than differential backups because fewer files need to be copied during each backup.

19. A. Any backup strategy must include full backups at some point in the process. If a combination of full and differential backups is used, a maximum of two backups must be restored. If a combination of full and incremental backups is chosen, the number of required restorations may be unlimited.

20. B.
Parallel tests involve moving personnel to the recovery site and gearing up operations, but responsibility for conducting day-to-day operations of the business remains at the primary operations center.

Answers to Written Lab

Following are answers to the questions in this chapter's written lab:

1. There are three main concerns businesses have when considering the adoption of Mutual Assistance Agreements. First, the nature of an MAA often necessitates that the businesses be located in close geographical proximity. However, this requirement also increases the risk that the two businesses will fall victim to the same threat. Second, MAAs are difficult to enforce in the middle of a crisis. If one of the organizations is affected by a disaster and the other isn't, the organization not affected could back out at the last minute and the other organization is out of luck. Finally, confidentiality concerns (both legal and business related) often prevent businesses from trusting others with their sensitive operational data.

2. There are five main types of disaster recovery tests:

- Checklist tests involve the distribution of recovery checklists to disaster recovery personnel for review.
- Structured walk-throughs are "table-top" exercises that involve assembling the disaster recovery team to discuss a disaster scenario.
- Simulation tests are more comprehensive and may impact one or more noncritical business units of the organization.
- Parallel tests involve relocating personnel to the alternate site and commencing operations there.
- Full-interruption tests involve relocating personnel to the alternate site and shutting down operations at the primary site.

3. Full backups create a copy of all data stored on a server.
Incremental backups create copies of all files modified since the last full or incremental backup. Differential backups create copies of all files modified since the last full backup without regard to any previous differential or incremental backups that may have taken place.

Chapter 17
Law and Investigations

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:

- Laws
- Major Categories and Types of Laws
- Investigations

In the early days of computer security, information security professionals were pretty much left on their own to defend their systems against attacks. They didn't have much help from the criminal and civil justice systems. When they did seek assistance from law enforcement, they were met with reluctance by overworked agents who didn't have a basic understanding of how something that involved a computer could actually be a crime. The legislative branch of government hadn't addressed the issue of computer crime, and the executive branch felt that they simply didn't have statutory authority or obligation to pursue those matters.

Fortunately, both our legal system and the men and women of law enforcement have come a long way over the past two decades. The legislative branches of governments around the world have at least attempted to address issues of computer crime. Many law enforcement agencies have full-time, well-trained computer crime investigators with advanced security training. Those that don't usually know where to turn when they require this sort of experience.

In this chapter, we'll take a look at the various types of laws that deal with computer security issues. We'll examine the legal issues surrounding computer crime, privacy, intellectual property, and a number of other related topics.
We'll also cover basic investigative techniques, including the pros and cons of calling in assistance from law enforcement.

Categories of Laws

There are three main categories of laws that play a role in our legal system. Each is used to cover a variety of different circumstances, and the penalties for violating laws in the different categories vary widely. In the following sections, we'll take a look at how criminal law, civil law, and administrative law interact to form the complex web of our justice system.

Criminal Law

Criminal law forms the bedrock of the body of laws that preserve the peace and keep our society safe. Many high-profile court cases involve matters of criminal law; these are the laws that the police and other law enforcement agencies concern themselves with. Criminal law contains prohibitions against acts such as murder, assault, robbery, arson, and similar offenses. Penalties for violating criminal statutes fall in a range that includes mandatory hours of community service, monetary penalties in the form of fines (small and large), deprivation of civil liberties in the form of prison sentences, and, in the most extreme cases, forfeiture of one's life through application of the death penalty.

There are a number of criminal laws that serve to protect society against computer crime. In later sections of this chapter, you'll learn how some laws, like the Computer Fraud and Abuse Act, the Electronic Communications Privacy Act, and the Identity Theft and Assumption Deterrence Act (among others), provide criminal penalties for serious cases of computer crime.
Technically savvy prosecutors teamed with concerned law enforcement agencies have dealt serious blows to the "hacking underground" by using the court system to slap lengthy prison terms on offenders guilty of what used to be considered harmless pranks.

In the United States, legislative bodies at all levels of government establish criminal laws through elected representatives. At the federal level, both the House of Representatives and the Senate must pass criminal law bills by a majority vote (in most cases) in order for the bill to become law. Once passed, these laws then become federal law and apply in all cases where the federal government has jurisdiction (mainly cases that involve interstate commerce, cases that cross state boundaries, or cases that are offenses against the federal government itself). If federal jurisdiction does not apply, state authorities handle the case using laws passed in a similar manner by state legislators.

All federal and state laws must comply with the document that dictates how our system of government works—the U.S. Constitution. All laws are subject to judicial review by regional courts with the right of appeal all the way to the Supreme Court of the United States. If a court finds that a law is unconstitutional, it has the power to strike it down and render it invalid.

Keep in mind that criminal law is a serious matter. If you find yourself involved in a matter in which criminal authorities become involved—either as a witness, defendant, or victim of a computer crime—you'd be well advised to seek advice from an attorney familiar with the criminal justice system and specifically with matters of computer crime. It's not wise to "go it alone" in such a complex system.

Civil Law

Civil laws form the bulk of our body of laws.
They are designed to provide for an orderly society and govern matters that are not crimes but require an impartial arbiter to settle disputes between individuals and organizations. Examples of the types of matters that may be judged under civil law include contract disputes, real estate transactions, employment matters, and estate/probate procedures. Civil laws are also used to create the framework of government that the executive branch uses to carry out its responsibilities. These laws provide budgets for governmental activities and lay out the authority granted to the executive branch to create administrative laws (see the next section).

Civil laws are enacted in the same manner as criminal laws. They must pass through the legislative process before enactment and are subject to the same constitutional parameters and judicial review procedures. At the federal level, both criminal and civil laws are embodied in the United States Code (USC).

The major difference between civil laws and criminal laws is the way that they are enforced. Normally, law enforcement authorities do not become involved in matters of civil law beyond taking action necessary to restore order. In a criminal prosecution, the government, through law enforcement investigators and prosecutors, brings action against a person accused of a crime. In civil matters, it is incumbent upon the person who feels they have been wronged to obtain legal counsel and file a civil lawsuit against the person they feel is responsible for their grievance. The government (unless it is the plaintiff or defendant) does not take sides in the dispute or argue one position or the other.
The only role of the government in civil matters is to provide the judges, juries, and court facilities used to hear civil cases and to play an administrative role in managing the judicial system in accordance with the law.\nAs with criminal law, it is best to obtain legal assistance if you feel that you need to file a civil lawsuit or you fear that a civil lawsuit may be filed against you. Although civil law does not provide for imprisonment, the losing party may face extremely severe financial penalties. One need look no further than the nightly news for examples—multimillion-dollar cases against tobacco companies, major corporations, and wealthy individuals are heard every day.\nAdministrative Law\nThe executive branch of our government charges numerous agencies with wide-ranging responsibilities to ensure that government functions effectively. It is the duty of these agencies to abide by and enforce the criminal and civil laws enacted by the legislative branch. However, as can easily be imagined, criminal and civil law can’t possibly lay out rules and procedures that should be followed in every possible situation. Therefore, executive branch agencies have some leeway to enact administrative law in the form of policies, procedures, and regulations that govern the daily operations of the agency. Administrative law covers topics ranging from matters as mundane as the procedures used within a federal agency to obtain a desk telephone to more substantial issues such as the immigration policies used to enforce the laws passed by Congress. Administrative law is published in the Code of Federal Regulations, often referred to as the CFR.\nAlthough administrative law does not require an act of the legislative branch to gain the force of law, it must comply with all existing civil and criminal law. Government agencies may not implement regulations that directly contradict existing laws passed by the legislature.
Furthermore, administrative law (and the actions of government agencies) must also comply with the U.S. Constitution and is subject to judicial review.\nLaws\nThroughout these sections, we’ll examine a number of laws that relate to information technology. By necessity, this discussion is U.S.-centric, as is the material covered by the CISSP exam. We’ll look at several high-profile foreign laws, such as the European Union’s data privacy act. However, if you operate in an environment that involves foreign jurisdictions, you should retain local legal counsel to guide you through the system.\nEvery information security professional should have a basic understanding of the law as it relates to information technology. However, the most important lesson to be learned is knowing when it’s necessary to call in a legal professional. If you feel that you’re in a legal “gray area,” it’s best to seek professional advice.\n" }, { "page_number": 620, "text": "Computer Crime\nThe first computer security issues addressed by legislators were those involving computer crime. Early computer crime prosecutions were attempted under traditional criminal law, and many were dismissed because judges felt that applying traditional law to this modern type of crime was too much of a stretch. Legislators responded by passing specific statutes that defined computer crime and laid out specific penalties for various crimes. In the following sections, we’ll take a look at several of those statutes.\nThe U.S. laws discussed in this chapter are federal laws. Almost every state in the union has enacted some form of legislation regarding computer security issues. Due to the global reach of the Internet, most computer crimes cross state lines and, therefore, fall under federal jurisdiction and are prosecuted in the federal court system.
However, in some circumstances, state laws can be more restrictive than federal laws and impose harsher penalties.\nComputer Fraud and Abuse Act of 1984\nCongress first enacted the Computer Fraud and Abuse Act (CFAA) in 1984, and it remains in force today, with several amendments. This law was carefully written to cover only computer crimes that crossed state boundaries, to avoid infringing upon states’ rights and treading on thin constitutional ice. The major provisions of the act make it a crime to do any of the following:\n\u0002 Access classified information or financial information in a federal system without authorization or in excess of authorized privileges\n\u0002 Access a computer used exclusively by the federal government without authorization\n\u0002 Use a federal computer to perpetrate a fraud (unless the only object of the fraud was to gain use of the computer itself)\n\u0002 Cause malicious damage to a federal computer system in excess of $1,000\n\u0002 Modify medical records in a computer when doing so impairs or may impair the examination, diagnosis, treatment, or medical care of an individual\n\u0002 Traffic in computer passwords if the trafficking affects interstate commerce or involves a federal computer system\nThe CFAA was amended in 1986 to change the scope of the act. Instead of merely covering federal computers that processed sensitive information, the act was changed to cover all “federal interest” computers. This widened the coverage of the act to include the following:\n\u0002 Any computer used exclusively by the U.S.
government\n\u0002 Any computer used exclusively by a financial institution\n\u0002 Any computer used by the government or a financial institution when the offense impedes the ability of the government or institution to use that system\n\u0002 Any combination of computers used to commit an offense when they are not all located in the same state\n" }, { "page_number": 621, "text": "1994 CFAA Amendments\nIn 1994, Congress recognized that the face of computer security had drastically changed since the CFAA was last amended in 1986 and made a number of sweeping changes to the act. Collectively, these changes are referred to as the Computer Abuse Amendments Act of 1994, and they included the following provisions:\n\u0002 Outlawed the creation of any type of malicious code that might cause damage to a computer system\n\u0002 Modified the CFAA to cover any computer used in interstate commerce rather than just “federal interest” computer systems\n\u0002 Allowed for the imprisonment of offenders, regardless of whether they actually intended to cause damage\n\u0002 Provided legal authority for the victims of computer crime to pursue civil action to gain injunctive relief and compensation for damages\nComputer Security Act of 1987\nAfter amending the CFAA in 1986 to cover a wider variety of computer systems, Congress turned its view inward and examined the current state of computer security in federal government systems. Members of Congress were not satisfied with what they saw and enacted the Computer Security Act (CSA) of 1987 to mandate baseline security requirements for all federal agencies.
In the introduction to the CSA, Congress specified four main purposes of the act:\n\u0002 To give the National Bureau of Standards (now the National Institute of Standards and Technology, or NIST) responsibility for developing standards and guidelines for federal computer systems, drawing on the technical advice and assistance (including work products) of the National Security Agency where appropriate.\n\u0002 To provide for promulgation of such standards and guidelines.\n\u0002 To require establishment of security plans by all operators of federal computer systems that contain sensitive information.\n\u0002 To require mandatory periodic training for all persons involved in management, use, or operation of federal computer systems that contain sensitive information.\nThis act clearly set out a number of requirements that formed the basis of federal computer security policy for many years. It also divided responsibility for computer security between two federal agencies. The National Security Agency (NSA), which formerly had authority over all computer security issues, retained authority over classified systems. NIST gained responsibility for securing all other federal government systems.\n" }, { "page_number": 622, "text": "Federal Sentencing Guidelines\nThe Federal Sentencing Guidelines released in 1991 provided punishment guidelines to help federal judges interpret computer crime laws. There are three major provisions of these guidelines that have had a lasting impact on the information security community:\nThey formalized the prudent man rule, which requires senior executives to take personal responsibility for ensuring the due care that ordinary, prudent individuals would exercise in the same situation.
This rule, developed in the realm of fiscal responsibility, now applies to information security as well.\nThey allowed organizations and executives to minimize punishment for infractions by demonstrating that they used due diligence in the conduct of their information security duties.\nThey outlined three burdens of proof for negligence: First, there must be a legally recognized obligation of the person accused of negligence. Second, the person must have failed to comply with recognized standards. Finally, there must be a causal relationship between the act of negligence and the subsequent damages.\nPaperwork Reduction Act of 1995\nThe Paperwork Reduction Act of 1995 requires that agencies obtain Office of Management and Budget (OMB) approval before requesting most types of information from the public. Information collections include forms, interviews, record-keeping requirements, and a wide variety of other things. This act was amended by the Government Information Security Reform Act (GISRA) of 2000.\nNational Information Infrastructure Protection Act of 1996\nIn 1996, Congress passed yet another set of amendments to the Computer Fraud and Abuse Act designed to further extend the protection it provides. It included the following main new areas of coverage:\n\u0002 Broadens the act to cover computer systems used in international commerce in addition to systems used in interstate commerce\n\u0002 Extends similar protections to portions of the national infrastructure other than computing systems, such as railroads, gas pipelines, electric power grids, and telecommunications circuits\n\u0002 Treats any intentional or reckless act that causes damage to critical portions of the national infrastructure as a felony\nGovernment Information Security Reform Act of 2000\nThe Government Information Security Reform Act of 2000 amends the United States Code to implement additional information security policies and procedures.
In the text of the act, Congress laid out five basic purposes for establishing the GISRA:\n\u0002 To provide a comprehensive framework for establishing and ensuring the effectiveness of controls over information resources that support federal operations and assets\n" }, { "page_number": 623, "text": "\u0002 To recognize the highly networked nature of the federal computing environment, including the need for federal government interoperability, and in the implementation of improved security management measures, to assure that opportunities for interoperability are not adversely affected\n\u0002 To provide effective government-wide management and oversight of the related information security risks, including coordination of information security efforts throughout the civilian, national security, and law enforcement communities\n\u0002 To provide for development and maintenance of minimum controls required to protect federal information and information systems\n\u0002 To provide a mechanism for improved oversight of federal agency information security programs\nThe provisions of the GISRA continue to charge the National Institute of Standards and Technology and the National Security Agency with security oversight responsibilities for unclassified and classified information processing systems, respectively. However, GISRA places the burden of maintaining the security and integrity of government information and information systems squarely on the shoulders of individual agency leaders.\nGISRA also creates a new category of computer system.
Mission-critical systems are those that meet one of the following criteria:\n\u0002 The system is defined as a national security system by other provisions of law.\n\u0002 The system is protected by procedures established for classified information.\n\u0002 The loss, misuse, disclosure, or unauthorized access to or modification of any information the system processes would have a debilitating impact on the mission of an agency.\nThe GISRA provides specific evaluation and auditing authority for mission-critical systems to the secretary of defense and the director of central intelligence. This is an attempt to ensure that all government agencies, even those that do not routinely deal with classified national security information, implement adequate security controls on systems that are absolutely critical to the continued functioning of the agency.\nIntellectual Property\nAmerica’s role in the global economy is shifting away from being a manufacturer of goods and toward being a provider of services. This trend also shows itself in many of the world’s other large industrialized nations. With this shift toward providing services, intellectual property takes on an increasingly important role in many firms. Indeed, it is arguable that the most valuable assets of many large multinational companies are simply the brand names that we’ve all come to recognize; company names like Dell, Procter & Gamble, and Merck bring instant credibility to any product. Publishing companies, movie producers, and artists depend upon their creative output to earn their livelihood. Many products depend upon secret recipes or production techniques—take the legendary secret formula for Coca-Cola or the Colonel’s secret blend of herbs and spices, for example.\n" }, { "page_number": 624, "text": "These intangible assets are collectively referred to as intellectual property, and a whole host of laws exist to protect the rights of their owners.
After all, it simply wouldn’t be fair if a music store bought only one copy of each artist’s CD and burned copies for all of its customers—that would deprive the artist of the benefits of their labor. In the following sections, we’ll explore the laws surrounding the four major types of intellectual property—copyrights, trademarks, patents, and trade secrets. We’ll also discuss how these concepts specifically concern information security professionals. Many countries protect (or fail to protect) these rights in different ways, but the basic concepts ring true throughout the world.\nSome countries are notorious for violating intellectual property rights. The most notable example is China, which is world-renowned for its blatant disregard of copyright and patent law. If you’re planning to do business in this region of the world, you should definitely consult with an attorney who specializes in this area.\nCopyrights\nCopyright law guarantees the creators of “original works of authorship” protection against the unauthorized duplication of their work. There are eight broad categories of works that qualify for copyright protection:\n\u0002 Literary works\n\u0002 Musical works\n\u0002 Dramatic works\n\u0002 Pantomimes and choreographic works\n\u0002 Pictorial, graphical, and sculptural works\n\u0002 Motion pictures and other audiovisual works\n\u0002 Sound recordings\n\u0002 Architectural works\nThere is precedent for copyrighting computer software—it’s done under the scope of literary works. However, it’s important to note that copyright law protects only the expression inherent in computer software—that is, the actual source code. It does not protect the ideas or process behind the software. There has also been some question over whether copyrights can be extended to cover the “look and feel” of a software package’s graphical user interface.
Court decisions have gone in both directions on this matter; if you will be involved in this type of issue, you should consult a qualified intellectual property attorney to determine the current state of legislation and case law.\nThere is a formal procedure for obtaining a copyright that involves sending copies of the protected work along with an appropriate registration fee to the Library of Congress. For more information on this process, visit the Library’s website at www.loc.gov/copyright/. However, it is important to note that officially registering a copyright is not a prerequisite for copyright enforcement.\n" }, { "page_number": 625, "text": "Indeed, the law states that the creator of a work has an automatic copyright from the instant the work is created. If you can prove in court that you were the creator of a work (perhaps by publishing it), you will be protected under copyright law. Official registration merely provides the government’s acknowledgment that it received your work on a specific date.\nCopyright ownership always defaults to the creator of a work. The exception to this policy is works for hire. A work is considered “for hire” when it is made for an employer during the normal course of an employee’s workday. For example, when an employee in a company’s public relations department writes a press release, the press release is considered a work for hire. A work may also be considered a work for hire when it is declared as such in a written contract.\nCurrent copyright law provides for a very lengthy period of protection. Works by one or more authors are protected until 70 years after the death of the last surviving author.
Works for hire and anonymous works are provided protection for the shorter of 95 years from the date of first publication or 120 years from the date of creation.\nDigital Millennium Copyright Act of 1998\nIn 1998, Congress recognized that the rapidly changing digital landscape was stretching the reach of existing copyright law. To help meet this challenge, it enacted the hotly debated Digital Millennium Copyright Act (DMCA). The DMCA also serves to bring United States copyright law into compliance with the terms of two World Intellectual Property Organization (WIPO) treaties.\nThe first major provision of the DMCA is the prohibition of attempts to circumvent copyright protection mechanisms placed on a protected work by the copyright holder. This clause was designed to protect copy-prevention mechanisms placed on digital media such as CDs and DVDs. The DMCA provides for penalties of up to $1,000,000 and 10 years in prison for repeat offenders. Nonprofit institutions such as libraries and schools are exempted from this provision.\nThe DMCA also limits the liability of Internet service providers when their circuits are used by criminals violating the copyright law. The DMCA recognizes that ISPs have a legal status similar to the “common carrier” status of telephone companies and does not hold them liable for the “transitory activities” of their users. In order to qualify for this exemption, the service provider’s activities must meet the following requirements (quoted directly from the Digital Millennium Copyright Act of 1998, U.S.
Copyright Office Summary, December 1998):\n\u0002 The transmission must be initiated by a person other than the provider.\n\u0002 The transmission, routing, provision of connections, or copying must be carried out by an automated technical process without selection of material by the service provider.\n\u0002 The service provider must not determine the recipients of the material.\n\u0002 Any intermediate copies must not ordinarily be accessible to anyone other than anticipated recipients and must not be retained for longer than reasonably necessary.\n\u0002 The material must be transmitted with no modification to its content.\nThe DMCA also exempts activities of service providers related to system caching, search engines, and the storage of information on a network by individual users. However, in those cases, the service provider must take prompt action to remove copyrighted materials upon notification of the infringement.\n" }, { "page_number": 626, "text": "Congress also included provisions in the DMCA that allow the creation of backup copies of computer software and any maintenance, testing, or routine usage activities that require software duplication. These provisions apply only if the software is licensed for use on a particular computer, the usage is in compliance with the license agreement, and any such copies are immediately deleted when no longer required for a permitted activity.\nFinally, the DMCA spells out the application of copyright law principles to the emerging field of webcasting, or broadcasting audio and/or video content to recipients over the Internet. This technology is often referred to as streaming audio or streaming video.
The DMCA states that these uses are to be treated as “eligible nonsubscription transmissions.” The law in this area is still under development, so if you plan to engage in this type of activity, you should contact an attorney to ensure that you are in compliance with current law.\nTrademarks\nCopyright laws are used to protect creative works; there is also protection for trademarks, which are words, slogans, and logos used to identify a company and its products or services. For example, a business might obtain a copyright on its sales brochure to ensure that competitors can’t duplicate its sales materials. That same business might also seek to obtain trademark protection for its company name and the names of specific products and services that it offers to its clients.\nThe main objective of trademark protection is to avoid confusion in the marketplace while protecting the intellectual property rights of people and organizations. As with copyright protection, trademarks do not need to be officially registered to gain protection under the law. If you use a trademark in the course of your public activities, you are automatically protected under any relevant trademark law and may use the ™ symbol to show that you intend to protect words or slogans as trademarks. If you want official recognition of your trademark, you may register it with the United States Patent and Trademark Office (USPTO). This process generally requires an attorney to perform a “due diligence” comprehensive search for existing trademarks that might preclude your registration. The entire registration process can take over a year from start to finish. Once you’ve received your registration certificate from the USPTO, you may denote your mark as a registered trademark with the ® symbol.\nOne major advantage of trademark registration is that you may register a trademark that you intend to use but are not necessarily already using.
This type of application is called an “intent to use” application and conveys trademark protection as of the date of filing, provided that you actually use the trademark in commerce within a certain time period. If you opt not to register your trademark with the USPTO, your protection begins only when you first use the trademark.\nThere are two main requirements for the acceptance of a trademark application in the United States:\n\u0002 The trademark must not be confusingly similar to another trademark—you should determine this during your attorney’s due diligence search. There will be an open opposition period during which other companies may dispute your trademark application.\n\u0002 The trademark should not be descriptive of the goods and services that you will offer. For example, “Mike’s Software Company” would not be a good trademark candidate because it describes the product produced by the company. The USPTO may reject an application if it considers the trademark descriptive.\n" }, { "page_number": 627, "text": "In the United States, trademarks are granted for an initial period of 10 years and may be renewed for successive 10-year periods.\nPatents\nPatents protect the intellectual property rights of inventors. They provide a period of 20 years during which the inventor is granted exclusive rights to use the invention (whether directly or via licensing agreements). At the end of the patent exclusivity period, the invention is in the public domain, available for anyone to use.\nThere are three main requirements for a patent:\n\u0002 The invention must be new. Inventions are patentable only if they are original ideas.\n\u0002 The invention must be useful. It must actually work and accomplish some sort of task.\n\u0002 The invention must be non-obvious. You could not, for example, obtain a patent for your idea to use a drinking cup to collect rainwater.
This is an obvious solution. You might, however, be able to patent a specially designed cup that optimizes the amount of rainwater collected while minimizing evaporation.\nIn the technology field, patents have long been used to protect hardware devices and manufacturing processes. There is plenty of precedent on the side of inventors in those areas. Recent patents have also been issued covering software programs and similar mechanisms, but the jury’s still out on whether these patents will hold up to the scrutiny of the courts.\nOne high-profile case involved Amazon.com’s patent on the “One-Click Shopping” e-commerce methodology. Amazon.com claims that its patent grants the company exclusive rights to use this technique. Arguments against this claim revolve around the novelty and non-obviousness requirements of patent law.\nTrade Secrets\nMany companies have intellectual property that is absolutely critical to their business and would cause significant damage if it were disclosed to competitors and/or the public—in other words, trade secrets. We previously mentioned two examples of this type of information from popular culture—the secret formula for Coca-Cola and Kentucky Fried Chicken’s “secret blend of herbs and spices.” Other examples are plentiful—a manufacturing company may wish to keep secret a certain manufacturing process that only a few key employees fully understand, or a statistical analysis company might wish to safeguard an advanced model developed for in-house use.\nTwo of the previously discussed intellectual property tools—copyrights and patents—could be used to protect this type of information, but with two major disadvantages:\n\u0002 Filing a copyright or patent application requires that you publicly disclose the details of your work or invention.
This automatically removes the “secret” nature of your property and may harm your firm by removing the mystique surrounding a product or by allowing \n" }, { "page_number": 628, "text": "unscrupulous competitors to copy your property in violation of international intellectual property laws.\n\u0002 Copyrights and patents both provide protection for a limited period of time. Once your legal protection expires, other firms are free to use your work at will (and they have all the details from the public disclosure you made during the application process!).\nThere actually isn’t much of an official process regarding trade secrets—by their nature, you don’t register them with anyone. You simply must implement adequate controls within your organization to ensure that only authorized personnel who need to know the secrets have access to them in the course of their duties. You must also ensure that anyone who does have this type of access is bound by a nondisclosure agreement (NDA) or other legal document that prohibits them from sharing the information with others and provides penalties for violating the agreement. It is important to ensure that the agreement lasts for the maximum period permitted by law.\nTrade secret protection is one of the best ways to protect computer software. As discussed in the previous section, patent law does not provide adequate protection for computer software products. Copyright law protects only the actual text of the source code and doesn’t prohibit others from rewriting your code in a different form to accomplish the same objective. Treating your source code as a trade secret keeps it out of the hands of your competitors in the first place.
This is the technique used by large software development companies such as Microsoft to protect their core base of intellectual property.\nEconomic Espionage Act of 1996\nTrade secrets are very often the crown jewels of major corporations, and the United States government recognized the importance of protecting this type of intellectual property when Congress enacted the Economic Espionage Act of 1996. This law has two major provisions:\n\u0002 Anyone found guilty of stealing trade secrets from a U.S. corporation with the intention of benefiting a foreign government or agent may be fined up to $500,000 and imprisoned for up to 15 years.\n\u0002 Anyone found guilty of stealing trade secrets under other circumstances may be fined up to $250,000 and imprisoned for up to 10 years.\nThe terms of the Economic Espionage Act give true teeth to the intellectual property rights of trade secret owners. Enforcement of this law requires that companies take adequate steps to ensure that their trade secrets are well protected and not accidentally placed into the public domain.\n" }, { "page_number": 629, "text": "Licensing\nSecurity professionals should also be familiar with the legal issues surrounding software licensing agreements. There are three common types of license agreements in use today:\n\u0002 Contractual license agreements utilize a written contract between the software vendor and the customer outlining the responsibilities of each. These agreements are commonly found for high-priced and/or highly specialized software packages.\n\u0002 Shrink-wrap license agreements are written on the outside of the software packaging.
They get their name because they commonly include a clause stating that you acknowledge agreement to the terms of the contract simply by breaking the shrink-wrap seal on the package.\n\u0002 Click-wrap license agreements are becoming more commonplace than shrink-wrap agreements. In this type of agreement, the contract terms are either written on the software box or included in the software documentation. During the installation process, you are required to click a button indicating that you have read the terms of the agreement and agree to abide by them. This adds an element of active consent to the process, ensuring that the individual is aware of the agreement’s existence prior to installation.\nTwo important industry groups provide guidance and enforcement activities regarding software licensing. You can get more information from their websites. The Business Software Alliance (BSA) can be found at www.bsa.org, and SPA Anti-Piracy can be found at www.spa.org/piracy/default.asp.\nImport/Export\nThe federal government recognizes that the very same computers and encryption technologies that drive the Internet and e-commerce can also be extremely powerful tools in the hands of a military force. For this reason, during the Cold War, the government developed a complex set of regulations governing the export of sensitive hardware and software products to other nations.\nUniform Computer Information Transactions Act\nThe Uniform Computer Information Transactions Act (UCITA) is a model law designed for adoption by each of the 50 states to provide a common framework for the conduct of computer-related business transactions. UCITA contains provisions that address software licensing. The terms of the UCITA give legal backing to the previously questionable practices of shrink-wrap licensing and click-wrap licensing by giving them status as legally binding contracts.
UCITA also requires that manufacturers provide software users with the option to reject the terms of the license agreement before completing the installation process and receive a full refund of the software's purchase price.

Until recently, it was very difficult to export high-powered computers outside of the United States, except to a select handful of allied nations. The controls on exporting encryption software were even more severe, rendering it virtually impossible to export any encryption technology outside of the country. Recent changes in federal policy have relaxed these restrictions and provided for more open commerce.

Computer Export Controls

Currently, U.S. firms may export high-performance computing systems to virtually any country without receiving prior approval from the government. There are exceptions to this rule for countries designated by the Department of Commerce as Tier 3 countries. This includes countries such as India, Pakistan, Afghanistan, and countries in the Middle East. The export of any computer that is capable of operating in excess of 190,000 MTOPS (million theoretical operations per second) must be preapproved by the Department of Commerce.

A complete list of countries and their corresponding computer export tiers may be found on the Department of Commerce's website at www.bxa.doc.gov/HPCs/ctpchart.htm.

The export of high-performance computers to any country currently on the Tier 4 list is prohibited. These countries include Cuba, Iran, Iraq, Libya, North Korea, Sudan, and Syria.

Encryption Export Controls

The Department of Commerce's Bureau of Industry and Security sets forth regulations on the export of encryption products outside of the United States. Under previous regulations, it was virtually impossible to export even relatively low-grade encryption technology outside of the United States. This placed U.S. software manufacturers at a great competitive disadvantage to foreign firms that faced no similar regulations. After a lengthy lobbying campaign by the software industry, the president directed the Commerce Department to revise its regulations to foster the growth of the American security software industry.

Current regulations now designate the categories of retail and mass market security software. The rules now permit firms to submit these products for review by the Commerce Department, but the review will take no longer than 30 days. After successful completion of this review, companies may freely export these products.

Privacy

The right to privacy has for years been a hotly contested issue in the United States. The main source of this contention is that the Constitution's Bill of Rights does not explicitly provide for a right to privacy. However, this right has been upheld by numerous courts and is vigorously pursued by organizations like the American Civil Liberties Union (ACLU).

Europeans have also long been concerned with their privacy. Indeed, countries like Switzerland are world-renowned for their ability to keep financial secrets. In the second half of this section, we'll examine how the new European Union data privacy laws impact companies and Internet users.

U.S. Privacy Law

Although there is no constitutional guarantee of privacy, there is a myriad of federal laws (many enacted in recent years) designed to protect the private information the government maintains about citizens as well as key portions of the private sector like financial, educational, and healthcare institutions. In this section, we'll examine a number of these federal laws.

Fourth Amendment

The basis for privacy rights is in the Fourth Amendment to the U.S. Constitution.
It reads as follows:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

The direct interpretation of this amendment prohibits government agents from searching private property without a warrant and probable cause. The courts have expanded their interpretation of the Fourth Amendment to include protections against wiretapping and other invasions of privacy.

Privacy Act of 1974

The Privacy Act of 1974 is perhaps the most significant piece of privacy legislation restricting the way the federal government may deal with private information about individual citizens. It severely limits the ability of federal government agencies to disclose private information to other persons or agencies without the prior written consent of the affected individual(s). It does provide for exceptions involving the Census, law enforcement, the National Archives, health and safety, and court orders.

The Privacy Act mandates that agencies only maintain records that are necessary for the conduct of their business and that they destroy those records when they are no longer needed for a legitimate function of government. It provides a formal procedure for individuals to gain access to records the government maintains about them and to request that incorrect records be amended.

Electronic Communications Privacy Act of 1986

The Electronic Communications Privacy Act (ECPA) makes it a crime to invade the electronic privacy of an individual. This act updated the Federal Wiretap Act to apply to the illegal interception of electronic (i.e., computer) communications or to the intentional, unauthorized access of electronically stored data. It prohibits the interception or disclosure of electronic communication and defines those situations in which disclosure is legal. It protects against the monitoring of e-mail and voicemail communications and prevents providers of those services from making unauthorized disclosures of their content.

One of the most notable provisions of the ECPA is the fact that it makes it illegal to monitor cellular telephone conversations. In fact, such monitoring is punishable by a fine of up to $500 and a prison term of up to five years.

Communications Assistance for Law Enforcement Act (CALEA) of 1994

The Communications Assistance for Law Enforcement Act (CALEA) of 1994 amended the Electronic Communications Privacy Act of 1986. CALEA requires all communications carriers to make wiretaps possible for law enforcement with an appropriate court order, regardless of the technology in use.

Economic and Protection of Proprietary Information Act of 1996

The Economic and Protection of Proprietary Information Act of 1996 extends the definition of property to include proprietary economic information so that the theft of this information can be considered industrial or corporate espionage. This changed the legal definition of theft so that it was no longer restricted by physical constraints.

Health Insurance Portability and Accountability Act of 1996

In 1996, Congress passed the Health Insurance Portability and Accountability Act (HIPAA), which made numerous changes to the laws governing health insurance and health maintenance organizations (HMOs). Among the provisions of HIPAA are privacy regulations requiring strict security measures for hospitals, physicians, insurance companies, and other organizations that process or store private medical information about individuals.

The HIPAA privacy regulations are quite complex. You should be familiar with the broad intentions of the act, as described here.
If you work in the healthcare industry, you should consider devoting time to an in-depth study of this law's provisions.

HIPAA also clearly defines the rights of individuals who are the subject of medical records and requires organizations that maintain such records to disclose these rights in writing.

Children's Online Privacy Protection Act of 1998

In April 2000, provisions of the Children's Online Privacy Protection Act (COPPA) became the law of the land in the United States. COPPA makes a series of demands upon websites that cater to children or knowingly collect information from children:

- Websites must have a privacy notice that clearly states the types of information they collect and what it's used for, including whether any information is disclosed to third parties. The privacy notice must also include contact information for the operators of the site.

- Parents must be provided with the opportunity to review any information collected from their children and permanently delete it from the site's records.

- Parents must give verifiable consent to the collection of information about children under the age of 13 prior to any such collection. There are exceptions in the law that allow the site to collect minimal information solely for the purpose of obtaining such parental consent.

Gramm-Leach-Bliley Act of 1999

Until the Gramm-Leach-Bliley Act (GLB) became law in 1999, there were strict governmental barriers between financial institutions. Banks, insurance companies, and credit providers were severely limited in the services they could provide and the information they could share with each other. GLB somewhat relaxed the regulations concerning the services each organization could provide. When Congress passed this law, it realized that this increased latitude could have far-reaching privacy implications. Due to this concern, it included a number of limitations on the types of information that could be exchanged even among subsidiaries of the same corporation and required financial institutions to provide written privacy policies to all of their customers by July 1, 2001.

USA Patriot Act of 2001

Congress passed the USA Patriot Act of 2001 in direct response to the 9/11 terrorist attacks. The Patriot Act greatly broadened the powers of law enforcement organizations and intelligence agencies across a number of areas, including the monitoring of electronic communications.

One of the major changes prompted by the Patriot Act revolves around the way government agencies obtain wiretapping authorizations. Previously, police could obtain warrants for only one circuit at a time, after proving that the circuit was used by someone subject to monitoring. Provisions of the Patriot Act allow authorities to obtain a blanket authorization for a person and then monitor all communications to or from that person under the single warrant.

Another major change is in the way the government deals with Internet service providers (ISPs). Under the terms of the Patriot Act, ISPs may voluntarily provide the government with a large range of information. The Patriot Act also allows the government to obtain detailed information on user activity through the use of a subpoena (as opposed to a wiretap).

Finally, the USA Patriot Act amends the Computer Fraud and Abuse Act (yes, another set of amendments!) to provide more severe penalties for criminal acts.
The Patriot Act provides for jail terms of up to 20 years and once again expands the coverage of the CFAA.

Family Educational Rights and Privacy Act

The Family Educational Rights and Privacy Act (FERPA) is another specialized privacy bill that affects any educational institution that accepts any form of funding from the federal government (the vast majority of schools). It grants certain privacy rights to students over the age of 18 and the parents of minor students. Specific FERPA protections include the following:

- Parents/students have the right to inspect any educational records maintained by the institution on the student.

- Parents/students have the right to request correction of records they feel are erroneous and the right to include a statement in the records contesting anything that is not corrected.

- Schools may not release personal information from student records without written consent, except under certain circumstances.

Identity Theft and Assumption Deterrence Act

In 1998, the president signed the Identity Theft and Assumption Deterrence Act into law. In the past, the only legal victims of identity theft were the creditors who were defrauded. This act makes identity theft a crime against the person whose identity was stolen and provides severe criminal penalties (up to a 15-year prison term and/or a $250,000 fine) for anyone found guilty of violating this law.

European Union Privacy Law

On October 24, 1995, the European Union Parliament passed a sweeping directive outlining privacy measures that must be in place for protecting personal data processed by information systems. The directive went into effect three years later in October 1998. The full text of the agreement (document 95/46/EC) is available on the European Union's website (http://europa.eu.int/).

The directive requires that all processing of personal data meet one of the following criteria:

- Consent
- Contract
- Legal obligation
- Vital interest of the data subject
- Balance between the interests of the data holder and the interests of the data subject

The directive also outlines key rights of individuals about whom data is held and/or processed:

- Right to access the data
- Right to know the data's source

Privacy in the Workplace

As you've read in this chapter, the U.S. court system has long upheld the traditional right to privacy as an extension of basic constitutional rights. However, the courts have maintained that a key element of this right is that privacy should be guaranteed only when there is a "reasonable expectation of privacy." For example, if you mail a letter to someone in a sealed envelope, you may reasonably expect that it will be delivered without being read along the way; you have a reasonable expectation of privacy. On the other hand, if you send your message on a postcard, you do so with the awareness that one or more people might read your note before it arrives at the other end; you do not have a reasonable expectation of privacy.

Recent court rulings have found that employees do not have a reasonable expectation of privacy while using employer-owned communications equipment in the workplace. If you send a message using an employer's computer, Internet connection, telephone, or other communications device, your employer may monitor it as a routine business procedure.

That said, if you're planning to monitor the communications of your employees, you should take reasonable precautions to ensure that there is no implied expectation of privacy.
Here are some common measures to consider:

- Clauses in employment contracts that state the employee has no expectation of privacy while using corporate equipment

- Similar written statements in corporate acceptable use and privacy policies

- Logon banners warning that all communications are subject to monitoring

- Warning labels on computers and telephones warning of monitoring

As with many of the issues discussed in this chapter, it's a good idea to consult with your legal counsel before undertaking any communications monitoring efforts.

- Right to correct inaccurate data
- Right to withhold consent to process data in some situations
- Right of legal action should these rights be violated

American companies doing business in Europe may obtain protection under a treaty between the European Union and the United States that allows the Department of Commerce to certify businesses that comply with regulations and offer them "safe harbor" from prosecution.

In order to qualify for the safe harbor provision, U.S. companies conducting business in Europe must meet seven requirements for the processing of personal information:

Notice: They must inform individuals of what information they collect about them and how the information will be used.

Choice: They must allow individuals to opt out if the information will be used for any other purpose or shared with a third party. For information considered sensitive, an opt-in policy must be used.

Onward Transfer: Organizations may only share data with other organizations that comply with the safe harbor principles.

Access: Individuals must be granted access to any records kept containing their personal information.

Security: Proper mechanisms must be in place to protect data against loss, misuse, and unauthorized disclosure.

Data Integrity: Organizations must take steps to ensure the reliability of the information they maintain.

Enforcement: Organizations must make a dispute resolution process available to individuals and provide certifications to regulatory agencies that they comply with the safe harbor provisions.

For more information on the safe harbor protections available to American companies, visit the Department of Commerce's Safe Harbor website at www.export.gov/safeharbor/sh_overview.html.

Investigations

Every information security professional will, at one time or another, encounter a security incident that requires an investigation. In many cases, this investigation will be a brief, informal determination that the matter is not serious enough to warrant further action or the involvement of law enforcement authorities. However, in some cases, the threat posed or damage done will be severe enough to require a more formal inquiry. When this occurs, investigators must be careful to ensure that proper procedures are followed. Failure to abide by the correct procedures may violate the civil rights of the individual(s) being investigated and could result in a failed prosecution or even legal action against the investigator.

Evidence

In order to successfully prosecute a crime, the prosecuting attorneys must provide sufficient evidence to prove an individual's guilt beyond a reasonable doubt.
In the following sections, we'll look at the requirements that evidence must meet before it is allowed in court, the various types of evidence that may be introduced, and the requirements for handling and documenting evidence.

Admissible Evidence

There are three basic requirements for evidence to be introduced into a court of law. To be considered admissible evidence, it must meet all three of these requirements, as determined by the judge, prior to being discussed in open court:

- The evidence must be relevant to determining a fact.

- The fact that the evidence seeks to determine must be material (i.e., related) to the case.

- The evidence must be competent, meaning that it must have been obtained legally. Evidence that results from an illegal search would be inadmissible because it is not competent.

Types of Evidence

There are four types of evidence that may be used in a court of law: real evidence, documentary evidence, testimonial evidence, and demonstrative evidence. Each has slightly different additional requirements for admissibility.

Real Evidence

Real evidence (also known as object evidence) consists of things that may actually be brought into a court of law. In common criminal proceedings, this may include items like a murder weapon, clothing, or other physical objects. In a computer crime case, real evidence might include seized computer equipment, such as a keyboard with fingerprints on it or a hard drive from a hacker's computer system. Depending upon the circumstances, real evidence may also be conclusive evidence, such as DNA, that is incontrovertible.

Documentary Evidence

Documentary evidence includes any written items brought into court to prove a fact at hand. This type of evidence must also be authenticated. For example, if an attorney wishes to introduce a computer log as evidence, they must bring a witness (e.g., the system administrator) into court to testify that the log was collected as a routine business practice and is indeed the actual log that the system collected.

There are two additional evidence rules that apply specifically to documentary evidence:

- The best evidence rule states that, when a document is used as evidence in a court proceeding, the original document must be introduced. Copies or descriptions of original evidence (known as secondary evidence) will not be accepted as evidence unless certain exceptions to the rule apply.

- The parol evidence rule states that, when an agreement between parties is put into written form, the written document is assumed to contain all of the terms of the agreement and no verbal agreements may modify the written agreement.

If documentary evidence meets the materiality, competency, and relevancy requirements and also complies with the best evidence and parol evidence rules, it may be admitted into court.

Chain of Evidence

Real evidence, like any type of evidence, must meet the relevancy, materiality, and competency requirements before being admitted into court. Additionally, real evidence must be authenticated. This may be done by a witness who can actually identify an object as unique (e.g., "That knife with my name on the handle is the one that the intruder took off the table in my house and stabbed me with").

In many cases, it is not possible for a witness to uniquely identify an object in court. In those cases, a chain of evidence (also known as a chain of custody) must be established. This involves everyone who handles evidence, including the police who originally collect it, the evidence technicians who process it, and the lawyers who use it in court.
The location of the evidence must be fully documented from the moment it was collected to the moment it appears in court to ensure that it is indeed the same item. This requires thorough labeling of evidence and comprehensive logs noting who had access to the evidence at specific times and the reasons they required such access.

When evidence is labeled to preserve the chain of custody, the label should include the following types of information regarding the collection:

- General description of the evidence
- Time, date, and exact location of collection
- Name of the person collecting the evidence
- Relevant circumstances surrounding the collection

Each person who handles the evidence must sign the chain of custody log indicating the time that they took direct responsibility for the evidence and the time that they handed it off to the next person in the chain of custody. The chain must provide an unbroken sequence of events accounting for the evidence from the time it was collected until the time of the trial.

Testimonial Evidence

Testimonial evidence is, quite simply, evidence consisting of the testimony of a witness, either verbal testimony in court or written testimony in a recorded deposition. Witnesses must take an oath agreeing to tell the truth and they must have personal knowledge upon which their testimony is based. Furthermore, witnesses must remember the basis for their testimony (they may consult written notes or records to aid their memory). Witnesses can offer direct evidence: oral testimony that proves or disproves a claim based upon their own direct observation. The testimonial evidence of most witnesses must be strictly limited to direct evidence based upon the witness's factual observations. However, this does not apply if a witness has been accepted by the court as an expert in a certain field. In that case, the witness may offer an expert opinion based upon the other facts presented and their personal knowledge of the field.

Testimonial evidence must not be so-called hearsay evidence. That is, a witness may not testify as to what someone else told them outside of court. Computer log files that are not authenticated by a system administrator may also be considered hearsay evidence.

Investigation Process

When you initiate a computer security investigation, you should first assemble a team of competent analysts to assist with the investigation.

Calling In Law Enforcement

One of the first decisions that must be made in an investigation is whether law enforcement authorities should be called in. This is actually a relatively complicated decision that should involve senior management officials. There are many factors in favor of calling in the experts. For example, the FBI now maintains a National Computer Crime Squad that includes individuals with the following qualifications:

- Degrees in the computer sciences
- Prior work experience in industry and academic institutions
- Basic and advanced commercial training
- Knowledge of basic data and telecommunications networks
- Experience with Unix and other computer operating systems

On the other hand, there are also two major factors that may cause a company to shy away from calling in the authorities. First, the investigation will more than likely become public and may embarrass the company.
Second, law enforcement authorities are bound to conduct an investigation that complies with the Fourth Amendment and other legal requirements that may not apply to a private investigation.

Conducting the Investigation

If you elect not to call in law enforcement, you should still attempt to abide by the principles of a sound investigation to ensure the accuracy and fairness of your inquiry. It is important to remember a few key principles:

- Never conduct your investigation on an actual system that was compromised. Take the system offline, make a backup, and use the backup to investigate the incident.

- Never attempt to "hack back" and avenge a crime. You may inadvertently attack an innocent third party and find yourself liable for computer crime charges.

- If in doubt, call in expert assistance. If you don't wish to call in law enforcement, contact a private investigations firm with specific experience in the field of computer security investigations.

- Normally, it's best to begin the investigation process using informal interviewing techniques. These are used to gather facts and determine the substance of the case. When specific suspects are identified, they should be questioned using interrogation techniques. Again, this is an area best left untouched without specific legal advice.

Search Warrants

Even the most casual viewer of American crime television is familiar with the question "Do you have a warrant?" The Fourth Amendment of the U.S. Constitution outlines the burden placed upon investigators to have a valid search warrant before conducting certain searches and the legal hurdle they must overcome to obtain a warrant:

"The right of the people to be secure in their persons, houses, papers and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

This amendment contains several important provisions that guide the activities of law enforcement personnel:

- Investigators must obtain a warrant before searching a person's private belongings, assuming that there is a reasonable expectation of privacy. There are a number of documented exceptions to this requirement, such as when an individual consents to a search, the evidence of a crime is in plain view, or there is a life-threatening emergency necessitating the search.

- Warrants can be issued only based upon probable cause. There must be some type of evidence that a crime took place and that the search in question will yield evidence relating to that crime. The standard of "probable cause" required to get a warrant is much weaker than the standard of evidence required to secure a conviction. Most warrants are "sworn out" based solely upon the testimony of investigators.

- Warrants must be specific in their scope. The warrant must contain a detailed description of the legal bounds of the search and seizure.

If investigators fail to comply with even the smallest detail of these provisions, they may find their warrant invalidated and the results of the search deemed inadmissible. This leads to another one of those American colloquialisms: "He got off on a technicality."

Summary

Computer security necessarily entails a high degree of involvement from the legal community. In this chapter, you learned about a large number of laws that govern security issues such as computer crime, intellectual property, data privacy, and software licensing.
You also learned about the procedures that must be followed when investigating an incident and collecting evidence that may later be admitted into a court of law during a civil or criminal trial.

Granted, computer security professionals cannot be expected to understand the intricate details of all of the laws that cover computer security. However, the main objective of this chapter is to provide you with the foundations of that knowledge. The best legal skill that a CISSP candidate should have is the ability to identify a legally questionable issue and know when to call in an attorney who specializes in computer/Internet law.

Exam Essentials

Understand the differences between criminal law, civil law, and administrative law. Criminal law protects society against acts that violate the basic principles we believe in. Violations of criminal law are prosecuted by federal and state governments. Civil law provides the framework for the transaction of business between people and organizations. Violations of civil law are brought to the court and argued by the two affected parties. Administrative law is used by government agencies to effectively carry out their day-to-day business.

Be able to explain the basic provisions of the major laws designed to protect society against computer crime. The Computer Fraud and Abuse Act (as amended) protects computers used by the government or in interstate commerce from a variety of abuses. The Computer Security Act outlines steps the government must take to protect its own systems from attack. The Government Information Security Reform Act further develops the federal government information security program.

Know the difference between copyrights, trademarks, patents, and trade secrets. Copyrights protect original works of authorship, such as books, articles, poems, and songs. Trademarks are names, slogans, and logos that identify a company, product, or service. Patents provide protection to the creators of new inventions. Trade secret law protects the operating secrets of a firm.

Be able to explain the basic provisions of the Digital Millennium Copyright Act of 1998. The Digital Millennium Copyright Act prohibits the circumvention of copy protection mechanisms placed in digital media and limits the liability of Internet service providers for the activities of their users.

Know the basic provisions of the Economic Espionage Act of 1996. The Economic Espionage Act provides penalties for individuals found guilty of the theft of trade secrets. Harsher penalties apply when the individual knows that the information will benefit a foreign government.

Understand the various types of software license agreements. Contractual license agreements are written agreements between a software vendor and user. Shrink-wrap agreements are written on software packaging and take effect when a user opens the package. Click-wrap agreements are included in a package but require the user to accept the terms during the software installation process.

Explain the impact of the Uniform Computer Information Transactions Act on software licensing. The Uniform Computer Information Transactions Act provides a framework for the enforcement of shrink-wrap and click-wrap agreements by federal and state governments.

Understand the restrictions placed upon export of high-performance hardware and encryption technology outside of the United States. No high-performance computers or encryption technology may be exported to Tier 4 countries. The export of hardware capable of operating in excess of 190,000 MTOPS to Tier 3 countries must be approved by the Department of Commerce.
New rules permit the easy exporting of “mass market” encryption software.

Understand the major laws that govern privacy of personal information in both the United States and the European Union. The United States has a number of privacy laws that affect the government’s use of information as well as the use of information by specific industries, like financial services companies and healthcare organizations, that handle sensitive information. The European Union has a more comprehensive directive on data privacy that regulates the use and exchange of personal information.

Know the basic requirements for evidence to be admissible in a court of law. To be admissible, evidence must be relevant to a fact at issue in the case, the fact must be material to the case, and the evidence must be competent, or legally collected.

Explain the various types of evidence that may be used in a criminal or civil trial. Real evidence consists of actual objects that may be brought into the courtroom. Documentary evidence consists of written documents that provide insight into the facts. Testimonial evidence consists of verbal or written statements made by witnesses.

Written Lab

Answer the following questions about law and investigations:

1. What are the key rights guaranteed to individuals under the European Union’s directive on data privacy?
2. What are the three basic requirements that evidence must meet in order to be admissible in court?
3. What are some common steps that employers take to notify employees of system monitoring?

Review Questions

1. Which criminal law was the first to implement penalties for the creators of viruses, worms, and other types of malicious code that cause harm to computer system(s)?
A. Computer Security Act
B. National Infrastructure Protection Act
C.
Computer Fraud and Abuse Act
D. Electronic Communications Privacy Act

2. Which law first required operators of federal interest computer systems to undergo periodic training in computer security issues?
A. Computer Security Act
B. National Infrastructure Protection Act
C. Computer Fraud and Abuse Act
D. Electronic Communications Privacy Act

3. What type of law does not require an act of Congress to implement at the federal level but, rather, is enacted by the executive branch in the form of regulations, policies, and procedures?
A. Criminal law
B. Common law
C. Civil law
D. Administrative law

4. Which federal government agency has responsibility for ensuring the security of government computer systems that are not used to process sensitive and/or classified information?
A. National Security Agency
B. Federal Bureau of Investigation
C. National Institute of Standards and Technology
D. Secret Service

5. What is the broadest category of computer systems protected by the Computer Fraud and Abuse Act, as amended?
A. Government-owned systems
B. Federal interest systems
C. Systems used in interstate commerce
D. Systems located in the United States

6. What law protects the right of citizens to privacy by placing restrictions on the authority granted to government agencies to search private residences and facilities?
A. Privacy Act
B. Fourth Amendment
C. Second Amendment
D. Gramm-Leach-Bliley Act

7. Matthew recently authored an innovative algorithm for solving a mathematical problem and he would like to share it with the world. However, prior to publishing the software code in a technical journal, he would like to obtain some sort of intellectual property protection. Which type of protection is best suited to his needs?
A. Copyright
B. Trademark
C. Patent
D. Trade secret

8. Mary is the cofounder of Acme Widgets, a manufacturing firm.
Together with her partner, Joe, she has developed a special oil that will dramatically improve the widget manufacturing process. To keep the formula secret, Mary and Joe plan to make large quantities of the oil by themselves in the plant after the other workers have left. They would like to protect this formula for as long as possible. What type of intellectual property protection best suits their needs?
A. Copyright
B. Trademark
C. Patent
D. Trade secret

9. Richard recently developed a great name for a new product that he plans to begin using immediately. He spoke with his attorney and filed the appropriate application to protect his product name but has not yet received a response from the government regarding his application. He would like to begin using the name immediately. What symbol should he use next to the name to indicate its protected status?
A. ©
B. ®
C. ™
D. †

10. What law prevents government agencies from disclosing personal information that an individual supplies to the government under protected circumstances?
A. Privacy Act
B. Electronic Communications Privacy Act
C. Health Insurance Portability and Accountability Act
D. Gramm-Leach-Bliley Act

11. What law formalizes many licensing arrangements used by the software industry and attempts to standardize their use from state to state?
A. Computer Security Act
B. Uniform Computer Information Transactions Act
C. Digital Millennium Copyright Act
D. Gramm-Leach-Bliley Act

12. The Children’s Online Privacy Protection Act was designed to protect the privacy of children using the Internet. What is the minimum age a child must be before companies may collect personal identifying information from them without parental consent?
A. 13
B. 14
C. 15
D. 16

13.
Which one of the following is not a requirement that Internet service providers must satisfy in order to gain protection under the “transitory activities” clause of the Digital Millennium Copyright Act?
A. The service provider and the originator of the message must be located in different states.
B. The transmission, routing, provision of connections, or copying must be carried out by an automated technical process without selection of material by the service provider.
C. Any intermediate copies must not ordinarily be accessible to anyone other than anticipated recipients and must not be retained for longer than reasonably necessary.
D. The transmission must be originated by a person other than the provider.

14. Which one of the following laws is not designed to protect the privacy rights of consumers and Internet users?
A. Health Insurance Portability and Accountability Act
B. Identity Theft Assumption and Deterrence Act
C. USA Patriot Act
D. Gramm-Leach-Bliley Act

15. Which one of the following types of licensing agreements is most well known because it does not require that the user take action to acknowledge that they have read the agreement prior to executing it?
A. Standard license agreement
B. Shrink-wrap agreement
C. Click-wrap agreement
D. Verbal agreement

16. What industry is most directly impacted by the provisions of the Gramm-Leach-Bliley Act?
A. Healthcare
B. Banking
C. Law enforcement
D. Defense contractors

17. What is the standard duration of patent protection in the United States?
A. 14 years from the application date
B. 14 years from the date the patent is granted
C. 20 years from the application date
D. 20 years from the date the patent is granted

18. Which one of the following is not a valid legal reason for processing information about an individual under the European Union’s data privacy directive?
A. Contract
B. Legal obligation
C.
Marketing needs
D. Consent

19. What type of evidence must be authenticated by a witness who can uniquely identify it or through a documented chain of custody?
A. Documentary evidence
B. Testimonial evidence
C. Real evidence
D. Hearsay evidence

20. What evidentiary principle states that a written contract is assumed to contain all of the terms of an agreement?
A. Material evidence
B. Best evidence
C. Parol evidence
D. Relevant evidence

Answers to Review Questions

1. C. The Computer Fraud and Abuse Act, as amended, provides criminal and civil penalties for those individuals convicted of using viruses, worms, Trojan horses, and other types of malicious code to cause damage to computer system(s).

2. A. The Computer Security Act requires mandatory periodic training for all persons involved in the management, use, or operation of federal computer systems that contain sensitive information.

3. D. Administrative laws do not require an act of the legislative branch to implement at the federal level. Administrative laws consist of the policies, procedures, and regulations promulgated by agencies of the executive branch of government. Although they do not require an act of Congress, these laws are subject to judicial review and must comply with criminal and civil laws enacted by the legislative branch.

4. C. The National Institute of Standards and Technology (NIST) is charged with the security management of all federal government computer systems that are not used to process sensitive national security information. The National Security Agency (part of the Department of Defense) is responsible for managing those systems that do process classified and/or sensitive information.

5. C. The original Computer Fraud and Abuse Act of 1984 covered only systems used by the government and financial institutions.
The act was broadened in 1986 to include all federal interest systems. The Computer Abuse Amendments Act of 1994 further amended the CFAA to cover all systems that are used in interstate commerce, covering a large portion (but not all) of the computer systems in the United States.

6. B. The Fourth Amendment to the U.S. Constitution sets the “probable cause” standard that law enforcement officers must follow when conducting searches and/or seizures of private property. It also states that those officers must obtain a warrant before gaining involuntary access to such property.

7. A. Copyright law is the only type of intellectual property protection available to Matthew. It covers only the specific software code that Matthew used. It does not cover the process or ideas behind the software. Trademark protection is not appropriate for this type of situation. Patent protection does not apply to mathematical algorithms. Matthew can’t seek trade secret protection because he plans to publish the algorithm in a public technical journal.

8. D. Mary and Joe should treat their oil formula as a trade secret. As long as they do not publicly disclose the formula, they can keep it a company secret indefinitely.

9. C. Richard’s product name should be protected under trademark law. Until his registration is granted, he may use the ™ symbol next to it to inform others that it is protected under trademark law. Once his application is approved, the name becomes a registered trademark and Richard may begin using the ® symbol.

10. A. The Privacy Act of 1974 limits the ways government agencies may use information that private citizens disclose to them under certain circumstances.

11. B. The Uniform Computer Information Transactions Act (UCITA) attempts to implement a standard framework of laws regarding computer transactions to be adopted by all states.
One of the issues addressed by UCITA is the legality of various types of software license agreements.

12. A. The Children’s Online Privacy Protection Act (COPPA) provides severe penalties for companies that collect information from young children without parental consent. COPPA states that this consent must be obtained from the parents of children under the age of 13 before any information is collected (other than basic information required to obtain that consent).

13. A. The Digital Millennium Copyright Act does not include any geographical location requirements for protection under the “transitory activities” exemption. The other options are three of the five mandatory requirements. The other two requirements are that the service provider must not determine the recipients of the material and the material must be transmitted with no modification to its content.

14. C. The USA Patriot Act was adopted in the wake of the 9/11 terrorist attacks. It broadens the powers of the government to monitor communications between private citizens and therefore actually weakens the privacy rights of consumers and Internet users. The other laws mentioned all contain provisions designed to enhance individual privacy rights.

15. B. Shrink-wrap license agreements become effective when the user opens a software package. Click-wrap agreements require the user to click a button during the installation process to accept the terms of the license agreement. Standard license agreements require that the user sign a written agreement prior to using the software. Verbal agreements are not normally used for software licensing but also require some active degree of participation by the software user.

16. B. The Gramm-Leach-Bliley Act provides, among other things, regulations regarding the way financial institutions may handle private information belonging to their customers.

17. C.
United States patent law provides for an exclusivity period of 20 years beginning at the time the patent application is submitted to the Patent and Trademark Office.

18. C. Marketing needs are not a valid reason for processing personal information, as defined by the European Union privacy directive.

19. C. Real evidence must be either uniquely identified by a witness or authenticated through a documented chain of custody.

20. C. The parol evidence rule states that a written contract is assumed to contain all of the terms of an agreement and may not be modified by a verbal agreement.

Answers to Written Lab

Following are answers to the questions in this chapter’s written lab:

1. Individuals have a right to access records kept about them and know the source of data included in those records. They also have the right to correct inaccurate records. Individuals have the right to withhold consent from data processors and have legal recourse if these rights are violated.

2. To be admissible, evidence must be relevant to a fact at issue in the case, material to the case, and competent (legally collected).

3. Some common steps that employers take to notify employees of monitoring include clauses in employment contracts that state that the employee should have no expectation of privacy while using corporate equipment, similar written statements in corporate acceptable use and privacy policies, logon banners warning that all communications are subject to monitoring, and warning labels on computers and telephones warning of monitoring.

Chapter 18: Incidents and Ethics

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
• Major Categories of Computer Crime
• Incident Handling
• Ethics

In this chapter, we’ll continue our discussion from Chapter 17 regarding the Law, Investigation, and Ethics domain of the
Common Body of Knowledge (CBK) for the CISSP certification exam. This domain deals with topics and issues related to computer crime laws and regulations, investigative techniques used to determine if a computer crime has been committed and to collect evidence when appropriate, and ethics issues and code of conduct for the computer practitioner.

The first step in deciding how to respond to a computer attack is to know if and when an attack has taken place. You must know how to determine that an attack is occurring, or has occurred, before you can properly choose a course of action. Once you have determined that an incident has occurred, the next step is to conduct an investigation and collect evidence to find out what has happened and determine the extent of any damage that might have been done. You must be sure you conduct the investigation in accordance with local laws and practices.

Major Categories of Computer Crime

There are many ways to attack a computer system and many motivations to do so. Information system security practitioners generally put crimes against or involving computers into different categories. Simply put, a computer crime is a crime (or violation of a law or regulation) that involves a computer. The crime could be against the computer, or the computer could have been used in the actual commission of the crime. Each of the categories of computer crimes represents the purpose of an attack and its intended result.

Any individual who violates one or more of your security policies is considered to be an attacker. An attacker uses different techniques to achieve a specific goal. Understanding the goals helps to clarify the different types of attacks. Remember that crime is crime, and the motivations behind computer crime are no different than the motivations behind any other type of crime.
The only real difference may be in the methods the attacker uses to strike.

Computer crimes are generally classified as one of the following types:
• Military and intelligence attacks
• Business attacks
• Financial attacks
• Terrorist attacks
• Grudge attacks
• “Fun” attacks

It is important to understand the differences among the categories of computer crime to best understand how to protect a system and react when an attack occurs. The type and amount of evidence left by an attacker is often dependent on their expertise. In the following sections, we’ll discuss the different categories of computer crimes and what type of evidence you might find after an attack. The evidence can help you determine what the attacker did and what the intended target of the attack was. You may find that your system was only a link in the chain of network hops used to reach the real victim and possibly make the trail harder to follow back to the attacker.

Military and Intelligence Attacks

Military and intelligence attacks are launched primarily to obtain secret and restricted information from law enforcement or military and technological research sources. Disclosure of such information could compromise investigations, disrupt military planning, and threaten national security.
Attacks to gather military information or other sensitive intelligence often precede other, more damaging attacks.

An attacker may be looking for the following kinds of information:
• Military descriptive information of any type, including deployment information, readiness information, and order of battle plans
• Secret intelligence gathered for military or law enforcement purposes
• Descriptions and storage locations of evidence obtained in a criminal investigation
• Any secret information that could be used in a later attack

Due to the sensitive nature of information collected and used by the military and intelligence agencies, their computer systems are often attractive targets for experienced attackers. To protect from more numerous and more sophisticated attackers, you will generally find more formal security policies in place on systems that house such information. As you learned in Chapter 5, “Security Management Concepts and Principles,” data can be classified according to sensitivity and stored on systems that support the required level of security. It is common to find stringent perimeter security as well as internal controls to limit access to classified documents on military and intelligence agency systems.

You can be sure that serious attacks to acquire military or intelligence information are carried out by professionals. Professional attackers are generally very thorough in covering their tracks. There is usually very little evidence to collect after such an attack. Attackers in this category are the most successful and the most satisfied when no one is aware that an attack occurred.

Business Attacks

Business attacks focus on illegally obtaining an organization’s confidential information.
This could be information that is critical to the operation of the organization, such as a secret recipe, or information that could damage the organization’s reputation if disclosed, such as personal information about its officers. The gathering of a competitor’s confidential information, also called industrial espionage, is not a new phenomenon. Businesses have used illegal means to acquire competitive information for many years. The temptation to steal a competitor’s secrets and the ease with which a savvy attacker can compromise some computer systems to extract files that contain valuable research or other confidential information can make this type of attack attractive.

The goal of business attacks is solely to extract confidential information. The use of the information gathered during the attack usually causes more damage than the attack itself. A business that has suffered an attack of this type can be put into a position from which it might not ever recover. It is up to you as the security professional to ensure that the systems that contain confidential data are secure. In addition, a policy must be developed that will handle such an intrusion should it occur. (For more information on security policies, see Chapter 6, “Asset Value, Policies, and Roles.”)

Financial Attacks

Financial attacks are carried out to unlawfully obtain money or services. They are the type of computer crime you most commonly hear about. The goal of a financial attack could be to increase the balance in a bank account or to place “free” long-distance telephone calls. You have probably heard of individuals breaking into telephone company computers and placing free calls. This type of financial attack is called phone phreaking.

Shoplifting and burglary are both examples of financial attacks.
You can usually tell the sophistication of the attacker by the dollar amount of the damages. Less-sophisticated attackers seek easier targets, but although the damages are usually minimal, they can add up over time.

Financial attacks launched by sophisticated attackers can result in substantial damages. Although phone phreaking causes the telephone company to lose the revenue of calls placed, serious financial attacks can result in losses amounting to millions of dollars. As with the attacks previously described, the ease with which you can detect an attack and track an attacker is largely dependent on the attacker’s skill level.

Terrorist Attacks

Terrorist attacks are a reality in many different areas of our society. Our increasing reliance upon information systems makes them more and more attractive to terrorists. Such attacks differ from military and intelligence attacks. The purpose of a terrorist attack is to disrupt normal life, whereas a military or intelligence attack is designed to extract secret information. Intelligence gathering generally precedes any type of terrorist attack. The very systems that are victims of a terrorist attack were probably compromised in an earlier attack to collect intelligence. The more diligent you are in detecting attacks of any type, the better prepared you will be to intervene before more serious attacks occur.

Possible targets of a computer terrorist attack could be systems that regulate power plants or control telecommunications or power distribution. Many such control and regulatory systems are computerized and vulnerable to terrorist action. In fact, the possibility exists of a simultaneous physical and computerized terrorist attack.
Our ability to respond to such an attack would be greatly diminished if the physical attack were simultaneously launched with a computer attack designed to knock out power and communications.

Most large power and communications companies have dedicated a security staff to ensure the security of their systems, but many smaller businesses that have systems connected to the Internet are more vulnerable to attacks. You must diligently monitor your systems to identify any attacks and then respond swiftly when an attack is discovered.

Grudge Attacks

Grudge attacks are attacks that are carried out to damage an organization or a person. The damage could be in the loss of information or information processing capabilities or harm to the organization or a person’s reputation. The motivation behind a grudge attack is usually a feeling of resentment, and the attacker could be a current or former employee or someone who wishes ill will upon an organization. The attacker is disgruntled with the victim and takes out their frustration in the form of a grudge attack.

An employee who has recently been fired is a prime example of a person who might carry out a grudge attack to “get back” at the organization. Another example is a person who has been rejected in a personal relationship with another employee. The person who has been rejected might launch an attack to destroy data on the victim’s system.

Your security policy should address the potential of attacks by disgruntled employees. For example, as soon as an employee is terminated, all system access for that employee should be terminated.
This action reduces the likelihood of a grudge attack and removes unused access accounts that could be used in future attacks.

Although most grudge attackers are just disgruntled people with limited hacking and cracking abilities, some possess the skills to cause substantial damage. An unhappy cracker can be a handful for security professionals. Take extreme care when a person with known cracking ability leaves your company. At the least, you should perform a vulnerability assessment of all systems the person could access. You may be surprised to find one or more “back doors” left in the system. But even in the absence of any back doors, a former employee who is familiar with the technical architecture of the organization may know how to exploit its weaknesses.

Grudge attacks can be devastating if allowed to occur unchecked. Diligent monitoring and assessing systems for vulnerabilities is the best protection for most grudge attacks.

“Fun” Attacks

Fun attacks are the attacks that crackers with few true skills launch. Attackers who lack the ability to devise their own attacks will often download programs that do their work for them. These attackers are often called “script kiddies” because they only run other people’s programs, or scripts, to launch an attack.

The main motivation behind fun attacks is the thrill of getting into a system. If you are the victim of a fun attack, the most common fate you will suffer is a service interruption. Although an attacker of this type may destroy data, the main motivation is to compromise a system and perhaps use it to launch an attack against another victim.

Evidence

Chapter 17 included a general coverage of the topic of evidence. Remember that the term evidence refers to any hardware, software, or data that you can use to prove the identity and actions of an attacker.
Make sure you understand the importance of properly handling any and all evidence you collect after an attack. You should realize that most computer evidence is intangible, meaning it is electronic and magnetically stored information that is vulnerable to erasure, corruption, and other forms of damage.

Your ability to recover damages in a court of law may depend solely on your diligence during the evidence collection process. In fact, your ability to determine the extent of an attack depends on your evidence collecting abilities. Once an attack has been identified, you should start the evidence collection process. Always assume an attack will result in a legal battle. It is far easier to take evidence collection seriously from the beginning than to later realize an attack was more severe than first thought and then try to go back and do it right. Following standard evidence collection procedures also ensures that you conduct your investigation in an orderly, scientific manner.

Most attacks leave evidence of some kind. However, professional attackers may leave evidence that is so subtle that it is difficult or impossible to find. Another problem with evidence is that it is often time sensitive. Your logs probably roll over periodically, and old information is lost. Do you know the frequency of your log purge routines? Some attacks leave traces in memory; the bulk of that evidence will be lost when you remove power from the system. Each step you take as you collect evidence should be deliberate and well documented.

You must know what your system baseline looks like and how it operates in a normal mode. Without this knowledge, you will be hard-pressed to recognize an attack or to know where to search for valuable evidence. Experienced security professionals learn how their systems operate on a daily basis and are comfortable with the regular operations of the system.
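Because most computer evidence is intangible and easily altered, a common practice is to record a cryptographic hash of each item at the moment it is collected so that its integrity can be demonstrated later. The book does not prescribe a tool for this; the sketch below is a minimal illustration using only Python's standard library, and the `fingerprint_evidence` function and its field names are hypothetical, invented for this example.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint_evidence(data: bytes, collector: str, description: str) -> dict:
    """Return a custody record pairing a SHA-256 digest with collection details.

    Recomputing the digest later and comparing it against this record helps
    demonstrate that the evidence was not altered after collection.
    """
    return {
        "sha256": hashlib.sha256(data).hexdigest(),
        "collected_by": collector,
        "description": description,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

# Example: fingerprint the contents of a captured log excerpt.
record = fingerprint_evidence(
    b"Jan 1 00:00:01 host sshd[42]: Failed password for root",
    collector="jdoe",
    description="auth log excerpt from compromised host",
)
print(json.dumps(record, indent=2))
```

The timestamped record is deliberately write-once: each subsequent handler would append a new record rather than modify an old one, mirroring a paper chain-of-custody log.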
The more you know your systems, the more an unusual event stands out.

Incident Handling

When an incident occurs, you must handle it in a manner that is outlined in your security policy and consistent with local laws and regulations. The first step in handling an incident properly is recognizing when one occurs. Even before recognition, you need to clearly understand what an incident is. Your security policy should define recognized incidents, but the general definition of an incident is a violation or the threat of a violation of your security policy.

The most common reason incidents are not reported is that they are never identified. You could have many security policy violations occurring each day, but if you don’t have a way of identifying them, you will never know. Therefore, your security policy should identify and list all possible violations and ways to detect them. It’s also important to update your security policy as new types of violations and attacks emerge.

What you do when you find that an incident has occurred depends on the type of incident and scope of damage. Law dictates that some incidents must be reported, such as those that impact government or federal interest computers (a federal interest computer is one that is used by financial institutions and by infrastructure systems such as water and power systems) or certain financial transactions, regardless of the amount of damage.

Next, we’ll look at some of the different types of incidents and typical responses.

Common Types of Incidents

We discussed the different types of attacks in Chapter 2. An incident occurs when an attack, or other violation of your security policy, is carried out against your system.
There are many ways to classify incidents; here is a general list of categories:
• Scanning
• Compromises
• Malicious code
• Denial of service

These four areas are the basic entry points for attackers to impact a system. You must focus on each of these areas to create an effective monitoring strategy that detects system incidents. Each incident area has representative signatures that can tip off an alert security administrator that an incident has occurred. Make sure you know your operating system environment and where to look for the telltale signs of each type of incident.

Scanning

Scanning attacks are incidents that usually indicate that another attack is possible. Attackers will gather as much information about your system as possible before launching a directed attack. Look for any unusual activity on any port or from any single address. A high volume of Simple Network Management Protocol (SNMP) packets can point to a systematic scan of your system.

Remember that simply scanning your system is not illegal. It is similar to “casing” a neighborhood prior to a burglary. It can indicate that illegal activity will follow, so it is a good idea to treat scans as incidents and to collect evidence of scanning activity. You may find that the evidence you collect at the time the system is scanned could be the link you need later to find the party responsible for a later attack.

Because scanning is such a common occurrence, you definitely want to automate evidence collection. Set up your firewall to log the SNMP traffic and archive your log files. The logs can become relatively large, but that is the price you pay for retained evidence.

Compromise

For a system that contains sensitive information, a compromise could be the most serious incident. A system compromise is any unauthorized access to the system or information the system stores. A compromise could originate inside or outside the organization.
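The scanning section above recommends logging traffic and reviewing the archive for "unusual activity on any port or from any single address." One hedged sketch of what that automated review might look like: the function below (a hypothetical example, not a tool the book names) flags any source address that touched an unusually wide range of destination ports, a classic port-scan signature. The event format and threshold are assumptions for illustration.

```python
from collections import defaultdict

def suspected_scanners(events, port_threshold=20):
    """Flag source addresses that probed many distinct destination ports.

    `events` is an iterable of (source_ip, dest_port) pairs parsed from an
    archived firewall log; a single source touching many distinct ports in
    one log window is a common signature of a port scan.
    """
    ports_by_source = defaultdict(set)
    for source_ip, dest_port in events:
        ports_by_source[source_ip].add(dest_port)
    return sorted(
        ip for ip, ports in ports_by_source.items()
        if len(ports) >= port_threshold
    )

# Example: one host sweeping ports 1-25 stands out against normal traffic.
events = [("10.0.0.9", p) for p in range(1, 26)] + [("10.0.0.5", 443)]
print(suspected_scanners(events))  # prints ['10.0.0.9']
```

In practice the threshold and time window would be tuned to the baseline traffic you already know from normal operations, which is exactly why the chapter stresses knowing your system's baseline.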
To make matters worse, a compromise could come from a valid user. An unauthorized use of a valid user ID is just as much of a compromise incident as an experienced cracker breaking in from the outside.
Chapter 18 • Incidents and Ethics
System compromises can be very difficult to detect. Most often, the data custodian notices something unusual about the data. It could be missing, altered, or moved; the time stamps could be different; or something else is just not right. The more you know about the normal operation of your system, the better prepared you will be to detect abnormal system behavior.
Malicious Code
When malicious code is mentioned, you probably think of viruses. Although a virus is a common type of malicious code, it is only one of several types. (In Chapter 4, "Communications Security and Countermeasures," we discussed different types of malicious code.) Detection of a malicious code incident comes either from an end user reporting behavior caused by the malicious code or from an automated alert reporting that scanned code containing a malicious component has been found.
The most effective way to protect your system from malicious code is to implement code scanners and keep the signature database up-to-date. In addition, your security policy should address the introduction of outside code. Be specific as to what code you will allow end users to install.
Denial of Service
The final type of incident is a denial of service (DoS). This type of incident is often the easiest to detect. A user or automated tool reports that one or more services (or the entire machine) is unavailable. Although they're simple to detect, avoidance is a far better course of action.
It is theoretically possible to dynamically alter firewall rules to reject DoS network traffic, but in recent years the sophistication and complexity of DoS attacks make them extremely difficult to defend against. Because there are so many variations of the DoS attack, implementing this strategy is a nontrivial task.
Response Teams
Many organizations now have a dedicated team responsible for investigating any computer security incidents that take place. These teams are commonly known as Computer Incident Response Teams (CIRTs) or Computer Security Incident Response Teams (CSIRTs). When an incident occurs, the response team has four primary responsibilities:
• Determine the amount and scope of damage caused by the incident
• Determine whether any confidential information was compromised during the incident
• Implement any necessary recovery procedures to restore security and recover from incident-related damages
• Supervise the implementation of any additional security measures necessary to improve security and prevent recurrence of the incident
As part of these duties, the team should facilitate a postmortem review of the incident within a week of the occurrence to ensure that key players in the incident share their knowledge and develop best practices to assist in future incident response efforts.
The Gibson Research Denial-of-Service Attacks: Fun or Grudge?
Steve Gibson is a well-known software developer and personality in the IT industry whose high visibility derives not only from highly regarded products associated with his company, Gibson Research, but also from his many years as a vocal and outspoken columnist for Computer World magazine.
In recent years, he has become quite active in the field of computer security, and his site offers free vulnerability scanning services and a variety of patches and fixes for operating system vulnerabilities. He operates a website at http://grc.com that has been the subject of numerous well-documented denial of service attacks. It's interesting to speculate whether such attacks are motivated by grudges (that is, by those who seek to advance their reputations by breaking into an obvious and presumably well-defended point of attack) or by fun (that is, by those with excess time on their hands who might seek to prove themselves against a worthy adversary without necessarily expecting any gain other than notoriety from their actions).
Gibson's website has in fact been subject to two well-documented denial of service attacks that you can read about in detail on his site:
• "Distributed Reflection Denial of Service," February 22, 2002, http://grc.com/dos/drdos.htm
• "The Strange Tale of the Denial of Service Attacks Against GRC.COM," last updated March 5, 2002, http://grc.com/dos/grcdos.htm
Although his subsequent anonymous discussions with one of the perpetrators involved seem to indicate that the motive for some of these attacks was fun rather than business damage or acting on a grudge, these reports are fascinating because of the excellent model they provide for incident handling and reporting.
These documents contain a brief synopsis of the symptoms and chronology of the attacks that occurred, along with short- and long-term fixes and changes enacted to prevent recurrences. They also stress the critical importance of communication with service providers whose infrastructures may be involved in attacks as they're underway.
What’s extremely telling about Gib-\nson’s report on the denial of service attacks is that he experienced 17 hours of downtime \nbecause he was unable to establish contact with a knowledgeable, competent engineer at his \nservice provider who could help define the right kinds of traffic filters to stymie the floods of \ntraffic that characterize denial of service attacks.\nGibson’s analysis also indicates his thoroughness in analyzing the sources of the distributed \ndenial of service attacks and in documenting what he calls “an exact profile of the malicious \ntraffic being generated during these attacks.” This information permitted his ISP to define a set \nof filters that blocked further such traffic from transiting the final T1 links from Gibson’s Internet \nservice provider to his servers. As his experience proves so conclusively, recognizing, analyz-\ning, and characterizing attacks is absolutely essential to defining filters or other countermea-\nsures that can block or defeat them.\n" }, { "page_number": 659, "text": "614\nChapter 18\n\u0002 Incidents and Ethics\nAbnormal and Suspicious Activity\nThe key to identifying incidents is to identify any abnormal or suspicious activity. Hopefully, \nany suspicious activity will also be abnormal. The only way to identify abnormal behavior is to \nknow what normal behavior looks like. Every system is different. Although you can detect many \nattacks by their characteristic signatures, experienced attackers know how to “fly under the \nradar.” You must be very aware of how your system operates normally. Abnormal or suspicious \nactivity is any system activity that does not normally occur on your system.\nAn attacker with a high level of skills generally has little obvious impact on your system. The \nimpact will be there, but it might take substantial skill to detect it. 
It is not uncommon for experienced attackers to replace common operating system monitoring utilities with copies that do not report system activity correctly. Even though you may suspect that an incident is in progress and you investigate, you may see no unusual activity. In this case, the activity exists but has been hidden from the casual administrator.
Always use multiple sources of data when investigating an incident. Be suspicious of anything that does not make sense. Ensure that you can clearly explain any activity you see that is not normal for your system. If something just does not "feel" right, that feeling could be the only clue you have to successfully intervene in an ongoing incident.
Confiscating Equipment, Software, and Data
Once you determine that an incident has occurred, the next step is to choose a course of action. Your security policy should specify steps to take for various types of incidents. Always proceed on the assumption that an incident will end up in a court of law. Treat any evidence you collect as if it must pass admissibility standards. Once you taint evidence, there is no going back. You must ensure that the chain of evidence is maintained.
It is common to confiscate equipment, software, or data to perform a proper investigation. The manner in which the evidence is confiscated is important. Confiscation of evidence must be carried out in a proper fashion. There are three basic alternatives.
First, the person who owns the evidence could voluntarily surrender it. This method is generally appropriate only when the attacker is not the owner. Few guilty parties willingly surrender evidence they know will incriminate them. Less-experienced attackers may believe they have successfully covered their tracks and voluntarily surrender important evidence. A good forensic investigator can extract much "covered up" information from a computer.
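The utility-replacement trick described above is one reason investigators compare files on a suspect system against known-good cryptographic hashes. The following is a minimal sketch; the baseline dictionary shown is hypothetical — in practice it would come from vendor media or a trusted hash database recorded while the system was known to be clean.

```python
import hashlib
from pathlib import Path


def sha256_of(path):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def find_tampered(known_good):
    """Return paths whose on-disk hash is missing or differs from the baseline.

    known_good: {path: expected_sha256} recorded from a trusted source.
    """
    tampered = []
    for path, expected in known_good.items():
        if not Path(path).exists() or sha256_of(path) != expected:
            tampered.append(path)
    return tampered


# Hypothetical baseline of monitoring utilities (paths and digests invented):
# baseline = {"/usr/bin/ps": "ab12...", "/usr/bin/netstat": "cd34..."}
# print(find_tampered(baseline))
```

The same chunked-hash routine is useful later in this section for proving that a disk image or log archive has not changed since it was collected.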
In most cases, asking for evidence from a suspected attacker just alerts the suspect that you are close to taking legal action.
Second, you could get a court to issue a subpoena, or court order, that compels an individual or organization to surrender evidence and then have the subpoena served by law enforcement. Again, this course of action provides sufficient notice for someone to alter the evidence and render it useless in court.
The last option is a search warrant. This option should be used only when you must have access to evidence without tipping off the evidence's owner or other personnel. You must have a strong suspicion with credible reasoning to convince a judge to pursue this course of action.
The three alternatives apply to confiscating equipment both inside and outside an organization, but there is another step you can take to ensure that the confiscation of equipment that belongs to your organization is carried out properly. It is becoming more common to have all new employees sign an agreement that provides consent to search and seize any necessary evidence during an investigation. In this manner, consent is provided as a term of the employment agreement. This makes confiscation much easier and reduces the chances of a loss of evidence while waiting for legal permission to seize it. Make sure your security policy addresses this important topic.
Incident Data Integrity and Retention
No matter how persuasive evidence may be, it can be thrown out of court if you change it during the evidence collection process. Make sure you can prove that you maintained the integrity of all evidence. (Chapter 17, "Law and Investigations," includes more information on evidence rules.) But what about the integrity of data before it is collected?
You may not detect all incidents as they are happening.
Sometimes an investigation reveals that there were previous incidents that went undetected. It is discouraging to follow a trail of evidence and find that a key log file that could point back to an attacker has been purged. Carefully consider the fate of log files and other possible evidence locations. A simple archiving policy can help ensure that key evidence is available on demand no matter how long ago the incident occurred.
Because many log files can contain valuable evidence, attackers often attempt to sanitize them after a successful attack. Take steps to protect the integrity of log files and to deter their modification. One technique is to implement remote logging. Although not a perfect solution, it does provide some protection from post-incident log file cleansing.
Another important forensic technique is to preserve the original evidence. Remember that the very conduct of your investigation may alter the evidence you are evaluating. Therefore, it's always best to work with a copy of the actual evidence whenever possible. For example, when conducting an investigation into the contents of a hard drive, make an image of that drive, seal the original drive in an evidence bag, and then use the disk image for your investigation.
As with every aspect of security planning, there is no single solution. Get familiar with your system and take the steps that make the most sense for your organization to protect it.
Reporting Incidents
When should you report an incident? To whom should you report it? These questions are often difficult to answer. Your security policy should contain guidelines on answering both questions. There is a fundamental problem with reporting incidents. If you report every incident, you run the very real risk of being viewed as a noisemaker. When you have a serious incident, you may be ignored.
Also, reporting an unimportant incident could give the impression that your organization is more vulnerable than it really is. This can have a serious detrimental effect on organizations that must maintain strict security. For example, hearing about daily incidents from your bank would probably not instill additional confidence in its security practices.
On the other hand, escalation and legal action become more difficult if you do not report an incident soon after discovery. If you delay notifying authorities of a serious incident, you will probably have to answer questions about your motivation for delaying. Even an innocent person could look as if they were trying to hide something by not reporting an incident in a timely manner.
As with most security topics, the answer is not an easy one. In fact, you are compelled by law or regulation to report some incidents. If your organization is regulated by a government authority and the incident caused your organization to deviate from any regulation, you must report the incident. Make sure you know which incidents you must report. For example, any organization that stores personal health information must report any incident in which disclosure of such information occurred.
Before you encounter an incident, it is very wise to establish a relationship with your corporate legal personnel and the appropriate law enforcement agencies. Find out who the appropriate law enforcement contacts are for your organization and talk with them. When the time comes to report an incident, your efforts at establishing a prior working relationship will pay off.
You will spend far less time in introductions and explanations if you already know the person with whom you are talking.
Once you determine to report an incident, make sure you have as much of the following information as possible:
• What is the nature of the incident, how was it initiated, and by whom?
• When did the incident occur? (Be as precise as possible with dates and times.)
• Where did the incident occur?
• If known, what tools did the attacker use?
• What was the damage resulting from the incident?
You may be asked to provide additional information. Be prepared to provide it in as timely a manner as possible. You may also be asked to quarantine your system.
As with any security action you take, keep a log of all communication and make copies of any documents you provide as you report an incident.
Ethics
Security professionals with substantial responsibilities are held to a high standard of conduct. The rules that govern personal conduct are collectively known as rules of ethics. Several organizations have recognized the need for standard ethics rules, or codes, and have devised guidelines for ethical behavior.
We present two codes of ethics in the following sections. These rules are not laws; they are minimum standards for professional behavior. They should provide you with a basis for sound, ethical judgment. Any security professional should be expected to abide by these guidelines regardless of their area of specialty. Make sure you understand and agree with the codes of ethics outlined in the following sections.
(ISC)2 Code of Ethics
The governing body that administers the CISSP certification is the International Information Systems Security Certification Consortium (ISC)2. The (ISC)2 Code of Ethics was developed to provide the basis for CISSP behavior. It is a simple code with a preamble and four canons.
Here is a short summary of the major concepts of the Code of Ethics.
All CISSP candidates should be familiar with the entire (ISC)2 Code of Ethics because they have to sign an agreement that they will adhere to this code. We won't cover the code in depth, but you can find further details about the (ISC)2's Code of Ethics at www.isc2.org. You need to visit this site and read the entire code.
Code of Ethics Preamble:
• Safety of the commonwealth, duty to our principals, and to each other requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior.
• Therefore, strict adherence to this code is a condition of certification.
Code of Ethics Canons:
Protect society, the commonwealth, and the infrastructure. Security professionals have great social responsibility. We are charged with the burden of ensuring that our actions benefit the common good.
Act honorably, honestly, justly, responsibly, and legally. Integrity is essential to the conduct of our duties. We cannot carry out our duties effectively if others within our organization, the security community, or the general public have doubts about the accuracy of the guidance we provide or the motives behind our actions.
Provide diligent and competent service to principals. Although we have responsibilities to society as a whole, we also have specific responsibilities to those who have hired us to protect their infrastructure. We must ensure that we are in a position to provide unbiased, competent service to our organization.
Advance and protect the profession. Our chosen profession changes on a continuous basis.
As security professionals, we must ensure that our knowledge remains current and that we contribute our own knowledge to the community's common body of knowledge.
Ethics and the Internet
In January 1989, the Internet Activities Board (IAB) issued a statement of policy concerning the proper use of the Internet. The contents of this statement are valid even today. It is important that you know the basic contents of the document, titled "Ethics and the Internet," Request for Comment (RFC) 1087, because most codes of ethics can trace their roots back to this document.
The statement is a brief list of practices considered unethical. Where a code of ethics states what you should do, this document outlines what you should not do. RFC 1087 states that any activity with the following purposes is unacceptable and unethical:
• Seeks to gain unauthorized access to the resources of the Internet
• Disrupts the intended use of the Internet
• Wastes resources (people, capacity, computer) through such actions
• Destroys the integrity of computer-based information
• Compromises the privacy of users
There are many ethical and moral codes of IT behavior to choose from. Another system you should consider is the Generally Accepted System Security Principles (GASSP). The full text of the GASSP system is found at http://www.auerbach-publications.com/dynamic_data/2334_1221_gassp.pdf.
Summary
Computer crimes are grouped into several major categories, and the crimes in each category share common motivations and desired results. Understanding what an attacker is after can help in properly securing a system.
For example, military and intelligence attacks are launched to acquire secret information that could not be obtained legally. Business attacks are similar except that they target civilian systems.
Ten Commandments of Computer Ethics
The Computer Ethics Institute created its own code of ethics. The Ten Commandments of Computer Ethics are as follows:
1. Thou shalt not use a computer to harm other people.
2. Thou shalt not interfere with other people's computer work.
3. Thou shalt not snoop around in other people's computer files.
4. Thou shalt not use a computer to steal.
5. Thou shalt not use a computer to bear false witness.
6. Thou shalt not copy proprietary software for which you have not paid.
7. Thou shalt not use other people's computer resources without authorization or proper compensation.
8. Thou shalt not appropriate other people's intellectual output.
9. Thou shalt think about the social consequences of the program you are writing or the system you are designing.
10. Thou shalt always use a computer in ways that ensure consideration and respect for your fellow humans.
Other types of attacks include financial attacks (phone phreaking is an example of a financial attack) and terrorist attacks (which, in the context of computer crimes, are attacks designed to disrupt normal life). Finally, there are grudge attacks, the purpose of which is to cause damage by destroying data or using information to embarrass an organization or person, and fun attacks, launched by inexperienced crackers to compromise or disable a system. Although generally not sophisticated, fun attacks can be annoying and costly.
An incident is a violation or the threat of a violation of your security policy. When an incident is suspected, you should immediately begin an investigation and collect as much evidence as possible because, if you decide to report the incident, you must have enough admissible evidence to support your claims.
The set of rules that govern your personal behavior is called a code of ethics.
There are several codes of ethics, from general to specific in nature, that security professionals can use to guide them. The (ISC)2 makes the acceptance of its code of ethics a requirement for certification.
Exam Essentials
Know the definition of computer crime. Computer crime is a crime (or violation of a law or regulation) that is directed against, or directly involves, a computer.
Be able to list and explain the six categories of computer crimes. Computer crimes are grouped into six categories: military and intelligence attack, business attack, financial attack, terrorist attack, grudge attack, and fun attack. Be able to explain the motive of each type of attack.
Know the importance of collecting evidence. As soon as you discover an incident, you must begin to collect evidence and as much information about the incident as possible. The evidence can be used in a subsequent legal action or in finding the identity of the attacker. Evidence can also assist you in determining the extent of damage.
Understand that an incident is any violation, or threat of a violation, of your security policy. Incidents should be defined in your security policy. Even though specific incidents may not be outlined, the existence of the policy sets the standard for the use of your system. Any departure from the accepted use of your system is defined as an incident.
Be able to list the four common types of incidents and know the telltale signs of each. An incident occurs when an attack or other violation of your security policy is carried out against your system. Incidents can be grouped into four categories: scanning, compromises, malicious code, and denial of service. Be able to explain what each type of incident involves and what signs to look for.
Know the importance of identifying abnormal and suspicious activity. Attacks will generate some activity that is not normal.
Recognizing abnormal and suspicious activity is the first step toward detecting incidents.
Know how to investigate intrusions and how to gather sufficient information from the equipment, software, and data. You must have possession of equipment, software, or data to analyze it and use it as evidence. You must acquire the evidence without modifying it or allowing anyone else to modify it.
Know the three basic alternatives for confiscating evidence and when each one is appropriate. First, the person who owns the evidence could voluntarily surrender it. Second, a subpoena could be used to compel the subject to surrender the evidence. Third, a search warrant is most useful when you need to confiscate evidence without giving the subject an opportunity to alter it.
Know the importance of retaining incident data. Because you will discover some incidents after they have occurred, you will lose valuable evidence unless you ensure that critical log files are retained for a reasonable period of time. You can retain log files and system status information either in place or in archives.
Be familiar with how to report an incident. The first step is to establish a working relationship with the corporate and law enforcement personnel with whom you will work to resolve an incident. When you do have a need to report an incident, gather as much descriptive information as possible and make your report in a timely manner.
Understand the importance of ethics to security personnel. Security practitioners are granted a very high level of authority and responsibility to execute their job functions. The potential for abuse exists, and without a strict code of personal behavior, security practitioners could be regarded as having unchecked power.
Adherence to a code of ethics helps ensure that such power is not abused.
Know the (ISC)2 Code of Ethics and RFC 1087, "Ethics and the Internet." All CISSP candidates should be familiar with the entire (ISC)2 Code of Ethics because they have to sign an agreement that they will adhere to it. In addition, be familiar with the basic statements of RFC 1087.
Review Questions
1. What is a computer crime?
A. Any attack specifically listed in your security policy
B. Any illegal attack that compromises a protected computer
C. Any violation of a law or regulation that involves a computer
D. Failure to practice due diligence in computer security
2. What is the main purpose of a military and intelligence attack?
A. To attack the availability of military systems
B. To obtain secret and restricted information from military or law enforcement sources
C. To utilize military or intelligence agency systems to attack other nonmilitary sites
3. What type of attack targets trade secret information stored on a civilian organization's system?
A. Business attack
B. Denial of service attack
C. Financial attack
D. Military and intelligence attack
4. What goal is not a purpose of a financial attack?
A. Access services you have not purchased
B. Disclose confidential personal employee information
C. Transfer funds from an unapproved source into your account
5. What is one possible goal of a terrorist attack?
A. Alter sensitive trade secret documents
B. Damage the ability to communicate and respond to a physical attack
C. Steal unclassified information
D. Transfer funds to other countries
6. Which of the following would not be a primary goal of a grudge attack?
A. Disclose embarrassing personal information
B. Launch a virus on an organization's system
C. Send inappropriate e-mail with a spoofed origination address of the victim organization
D.
Use automated tools to scan the organization's systems for vulnerable ports
7. What are the primary reasons attackers engage in "fun" attacks? (Choose all that apply.)
A. Bragging rights
B. Money from the sale of stolen documents
C. Pride of conquering a secure system
D. Retaliation against a person or organization
8. What is the most important rule to follow when collecting evidence?
A. Do not turn off a computer until you photograph the screen.
B. List all people present while collecting evidence.
C. Never modify evidence during the collection process.
D. Transfer all equipment to a secure storage location.
9. What would be a valid argument for not immediately removing power from a machine when an incident is discovered?
A. All of the damage has been done. Turning the machine off would not stop additional damage.
B. There is no other system that can replace this one if it is turned off.
C. Too many users are logged in and using the system.
D. Valuable evidence in memory will be lost.
10. What is the reason many incidents are never reported?
A. It involves too much paperwork.
B. Reporting too many incidents could hurt an organization's reputation.
C. The incident is never discovered.
D. Too much time has passed and the evidence is gone.
11. What is an incident?
A. Any active attack that causes damage to your system
B. Any violation of a code of ethics
C. Any crime (or violation of a law or regulation) that involves a computer
D. Any violation of your security policy
12. If port scanning does no damage to a system, why is it generally considered an incident?
A. All port scans indicate adversarial behavior.
B. Port scans can precede attacks that cause damage and can indicate a future attack.
C. Scanning a port damages the port.
13.
What type of incident is characterized by obtaining an increased level of privilege?
A. Compromise
B. Denial of service
C. Malicious code
D. Scanning
14. What is the best way to recognize abnormal and suspicious behavior on your system?
A. Be aware of the newest attacks.
B. Configure your IDS to detect and report all abnormal traffic.
C. Know what your normal system activity looks like.
D. Study the activity signatures of the main types of attacks.
15. If you need to confiscate a PC from a suspected attacker who does not work for your organization, what legal avenue should you pursue?
A. Consent agreement signed by employees
B. Search warrant
C. Subpoena
D. Voluntary consent
16. Why should you avoid deleting log files on a daily basis?
A. An incident may not be discovered for several days and valuable evidence could be lost.
B. Disk space is cheap and log files are used frequently.
C. Log files are protected and cannot be altered.
D. Any information in a log file is useless after it is several hours old.
17. Which of the following conditions indicate that you must report an incident? (Choose all that apply.)
A. Confidential information protected by government regulation was possibly disclosed.
B. Damages exceeded $1,500.
C. The incident has occurred before.
D. The incident resulted in a violation of a law.
18. What are ethics?
A. Mandatory actions required to fulfill job requirements
B. Professional standards of regulations
C. Regulations set forth by a professional organization
D. Rules of personal behavior
19. According to the (ISC)2 Code of Ethics, how are CISSPs expected to act?
A. Honestly, diligently, responsibly, and legally
B. Honorably, honestly, justly, responsibly, and legally
C. Upholding the security policy and protecting the organization
D. Trustworthy, loyally, friendly, courteously
20.
Which of the following actions are considered unacceptable and unethical according to RFC 1087, "Ethics and the Internet"?
A. Actions that compromise the privacy of classified information
B. Actions that compromise the privacy of users
C. Actions that disrupt organizational activities
D. Actions in which a computer is used in a manner inconsistent with a stated security policy
Answers to Review Questions
1. C. A crime is any violation of a law or regulation. The violation stipulation defines the action as a crime. It is a computer crime if the violation involves a computer either as the target or a tool.
2. B. A military and intelligence attack is targeted at the classified data that resides on the system. To the attacker, the value of the information justifies the risk associated with such an attack. The information extracted from this type of attack is often used to plan subsequent attacks.
3. A. Confidential information that is not related to the military or intelligence agencies is the target of business attacks. The ultimate goal could be destruction, alteration, or disclosure of confidential information.
4. B. A financial attack focuses primarily on obtaining services and funds illegally.
5. B. A terrorist attack is launched to interfere with a way of life by creating an atmosphere of fear. A computer terrorist attack can reach this goal by reducing the ability to respond to a simultaneous physical attack.
6. D. Any action that can harm a person or organization, either directly or through embarrassment, would be a valid goal of a grudge attack. The purpose of such an attack is to "get back" at someone.
7. A, C. Fun attacks have no reward other than providing a boost to pride and ego. The thrill of launching a fun attack comes from the act of participating in the attack (and not getting caught).
8. C.
Although the other options have some merit in individual cases, the most important rule is to never modify, or taint, evidence. If you modify evidence, it becomes inadmissible in court.
9. D. The most compelling reason for not removing power from a machine is that you will lose the contents of memory. Carefully consider the pros and cons of removing power. After all is considered, it may be the best choice.
10. C. Although an organization would not want to report a large number of incidents (unless reporting them is mandatory), the reality is that many incidents are never discovered. The lack of well-trained users results in many incidents that are never recognized.
11. D. An incident is defined by your security policy. Actions that you define as an incident may not be considered an incident in another organization. For example, your organization may prohibit Internet access while another organization encourages it. Accessing the Internet would be an incident in your organization.
12. B. Some port scans are normal. An unusually high volume of port scan activity can be a reconnaissance activity preceding a more dangerous attack. When you see unusual port scanning, you should always investigate.
13. A. Any time an attacker exceeds their authority, the incident is classified as a system compromise. This includes valid users who exceed their authority as well as invalid users who gain access through the use of a valid user ID.
14. C. Although options A, B, and D are actions that can make you aware of what attacks look like and how to detect them, you will never successfully detect most attacks until you know your system. When you know what the activity on your system looks like on a normal day, you can immediately detect any abnormal activity.
15. B. 
In this case, you need a search warrant to confiscate equipment without giving the suspect time to destroy evidence. If the suspect worked for your organization and you had all employees sign consent agreements, you could simply confiscate the equipment.
16. A. Log files contain a large volume of generally useless information. However, when you are trying to track down a problem or an incident, they can be invaluable. Even if an incident is discovered as it is happening, it may have been preceded by other incidents. Log files provide valuable clues and should be protected and archived.
17. A, D. You must report an incident when the incident resulted in the violation of a law or regulation. This includes any damage (or potential damage) to or disclosure of protected information.
18. D. Ethics are simply rules of personal behavior. Many professional organizations establish formal codes of ethics to govern their members, but ethics are personal rules individuals use to guide their lives.
19. B. The second canon of the (ISC)2 Code of Ethics states how a CISSP should act, which is honorably, honestly, justly, responsibly, and legally.
20. B. RFC 1087 does not specifically address the statements in A, C, or D. 
Although each type of activity listed is unacceptable, only the activity identified in option B is identified in RFC 1087.

Chapter 19
Physical Security Requirements

THE CISSP EXAM TOPICS COVERED IN THIS CHAPTER INCLUDE:
- Physical Security Threats
- Facility Requirements
- Forms of Physical Access Controls
- Technical Controls
- Environment and Life Safety

The Physical Security domain of the Common Body of Knowledge (CBK) for the CISSP certification exam deals with topics and issues related to facility construction and location, the security features of a facility, forms of physical access control, types of physical security technical controls, and maintaining security by properly sustaining the environment and protecting human life.
The purpose of physical security is to protect against physical threats. The following types of physical threats are among the most common:
- Fire and smoke
- Water (rising/falling)
- Earth movement (earthquakes, landslides, volcanoes)
- Storms (wind, lightning, rain, snow, sleet, ice)
- Sabotage/vandalism
- Explosion/destruction
- Building collapse
- Toxic materials
- Utility loss (power, heating, cooling, air, water)
- Equipment failure
- Personnel loss (strikes, illness, access, transport)

This chapter explores each of these issues and provides discussion of safeguards and countermeasures to protect against them. In many cases, a disaster recovery plan or a business continuity plan will be needed in the event a serious physical threat (such as an explosion, sabotage, or natural disaster) becomes a reality. 
See Chapter 15, “Business Continuity Planning,” and Chapter 16, “Disaster Recovery Planning,” for additional details.

Facility Requirements

It should be blatantly obvious if you’ve read the previous 18 chapters that without control over the physical environment, no amount of administrative, technical, or logical access controls can provide adequate security. If a malicious person can gain physical access to your facility or equipment, they can do just about anything they want, from destruction to disclosure and alteration. Physical controls are your first line of defense, while people are your last.
There are many aspects and elements to implementing and maintaining physical security. One of the core or foundational elements is selecting or designing the facility that will house your IT infrastructure and the operations of your organization. The process of selecting or designing a secure facility must start with a plan.

Secure Facility Plan

A secure facility plan outlines the security needs of your organization and emphasizes methods or mechanisms to employ to provide security. Such a plan is developed through a process known as critical path analysis. Critical path analysis is a systematic effort to identify relationships between mission-critical applications, processes, and operations and all of the necessary supporting elements. For example, an e-commerce server used to sell products over the Web relies on Internet access, computer hardware, electricity, temperature control, storage facility, and so on. When critical path analysis is performed properly, a complete picture of the interdependencies and interactions necessary to sustain the organization is produced. Once the analysis is complete, the results serve as a list of items to secure. 
The first step in designing a secure IT infrastructure is providing security for the basic requirements of the organization and its computers. The basic requirements include electricity, environmental control (i.e., a building, air conditioning, heating, humidity control, etc.), and water/sewage.

Physical Security Controls

The security controls implemented to manage physical security can be divided into three groups: administrative, technical, and physical. Because these are the same categories used to describe access control, it is important to keep in mind the physical security nature of these groupings. Administrative physical security controls include facility construction and selection, site management, personnel controls, awareness training, and emergency response and procedures. Technical physical security controls include access controls; intrusion detection; alarms; closed-circuit television (CCTV); monitoring; heating, ventilating, and air conditioning (HVAC); power supplies; and fire detection and suppression. Physical controls for physical security include fencing, lighting, locks, construction materials, mantraps, dogs, and guards.
When designing the physical security for an environment, keep the functional order of controls in mind. Security controls should be deployed so that initial attempts to access physical assets are deterred (i.e., boundary restrictions). If deterrence fails, then direct access to the physical assets should be denied (for example, locked vault doors). If denial fails, then your system needs to detect intrusion (for example, using motion detectors), and the intrusion should be delayed sufficiently for response by authorities (for example, a cable lock on the asset). So, it’s important to remember the order of deployment: deterrence, then denial, then detection, then delay.

Site Selection

Site selection should be based on the security needs of the organization. 
Cost, location, and size are important, but addressing the requirements of security should always take precedence. When choosing a site on which to build a facility or selecting a preexisting structure, be sure to carefully examine every aspect of the location.

Visibility

Visibility is important. What is the surrounding terrain? Would it be easy to approach the facility by vehicle or on foot without being seen? The makeup of the surrounding area is also important. Is it in or near a residential, business, or industrial area? What is the local crime rate? Where are the closest emergency services located (fire, medical, police)? What unique hazards are found in the area (chemical plants, homeless shelter, university, construction, etc.)?

Accessibility

The accessibility of the area is also important. Single entrances are great for providing security, but multiple entrances are better for evacuation during emergencies. What types of roads are nearby? What means of transportation are easily accessible (trains, highway, airport, shipping)? What is the level of traffic throughout the day?

Natural Disasters

Another concern is the effect of natural disasters in the area. Is the area prone to earthquakes, mudslides, sinkholes, fires, floods, hurricanes, tornadoes, falling rocks, snow, rainfall, ice, humidity, heat, extreme cold, and so on? You must prepare for natural disasters and equip your IT environment to either survive an event or be easily replaceable.

Facility Design

When designing a facility for construction, you need to understand the level of security needed by your organization. The proper level of security must be planned and designed before construction begins. 
Some important issues to consider include the combustibility, fire rating, construction materials, load rating, placement, and control of items such as walls, doors, ceilings, flooring, HVAC, power, water, sewage, gas, and so on. Forced intrusion, emergency access, resistance to entry, direction of entries and exits, use of alarms, and conductivity are other important aspects to evaluate. Every element within a facility should be evaluated in terms of how it could be used for and against the protection of the IT infrastructure and personnel (for example, positive flows for both air and water from inside the facility to the outside of the facility).

Work Areas

The design and configuration of work areas and visitor areas should be carefully considered. There should not be equal access to all locations within a facility. Areas that contain assets of higher value or importance should have restricted access. For example, anyone who enters the facility should be able to access the restrooms and the public telephone, but only the network administrators and security staff should have access to the server room. Valuable and confidential assets should be located in the heart or center of protection provided by a facility. In effect, you should focus on deploying concentric circles of protection. This type of configuration requires increased levels of authorization to gain access into the more sensitive areas of the organization.
Walls or partitions can be used to separate similar but distinct work areas. Such divisions deter casual shoulder surfing or eavesdropping. Shoulder surfing is the act of gathering information from a system by observing the monitor or the use of the keyboard by the operator. 
Floor-to-ceiling walls should be used to separate areas with differing levels of sensitivity and confidentiality.
Each work area should be evaluated and assigned a type of classification just as IT assets are classified. Only people with clearance or classifications corresponding to the classification of the work area should be allowed access. Areas with different purposes or uses should be assigned different levels of access or restrictions. The more access to assets the equipment within an area offers, the greater the restrictions on who enters those areas and what activities they perform should be.

Server Rooms

Server rooms, server vaults, and IT closets are enclosed, restricted, and protected rooms where your mission-critical servers and network devices are housed. Centralized server rooms need not be human compatible. In fact, the more human incompatible a server room is, the more protection it will offer against both casual and determined attacks. Human incompatibility can be accomplished by including Halon or other oxygen-displacement fire detection and extinguishing systems, low temperatures, little or no lighting, and equipment stacked so there is little room for walking or moving. Server rooms should be designed to best support the operation of the IT infrastructure and to prevent unauthorized human access and intervention.
The walls of your server room should also have a 1-hour minimum fire rating.

Visitors

If a facility employs restricted areas to control physical security, then a mechanism to handle visitors is required. Often an escort is assigned to visitors, and their access and activities are monitored closely. 
Failing to track the actions of outsiders when they are granted access into a protected area can result in malicious activity against the most protected assets.

Forms of Physical Access Controls

There are many types of physical access control mechanisms that can be deployed in an environment to control, monitor, and manage access to a facility. These range from deterrents to detection mechanisms.
The various sections, divisions, or areas of a site or facility should be clearly designated as public, private, or restricted. Each of these areas requires unique and focused physical access controls, monitoring, and prevention mechanisms. The following sections discuss many of the mechanisms that can be used to separate, isolate, and control access to the various types of areas on a site.

Fences, Gates, Turnstiles, and Mantraps

A fence is a perimeter-defining device. Fences are used to clearly differentiate between areas that are under a specific level of security protection and those that aren’t. Fencing can include a wide range of components, materials, and construction methods. It can consist of stripes painted on the ground, chain link fences, barbed wire, concrete walls, and even invisible perimeters using laser, motion, or heat detectors. Various types of fences are effective against different types of intruders:
- Fences that are 3 to 4 feet high deter casual trespassers.
- Fences that are 6 to 7 feet high are difficult to climb and deter most intruders.
- Fences that are 8 feet high with three strands of barbed wire deter determined intruders.

A gate is a controlled exit and entry point in a fence. The deterrent level of a gate must be equivalent to the deterrent level of the fence to sustain the effectiveness of the fence as a whole. Hinges and locking/closing mechanisms should be hardened against tampering, destruction, or removal. 
When a gate is closed, it should not offer any additional access vulnerabilities. Gates should be kept to a minimum. They may be manned by guards or not. When they’re not protected by guards, deployment of dogs or CCTV is recommended.
A turnstile (see Figure 19.1) is a form of gate that prevents more than one person from gaining entry at a time and often restricts movement in one direction. It is used to gain entry but not exit, or vice versa. A turnstile is basically a fencing equivalent of a secured revolving door.

Deploying Physical Access Controls

In the real world, you will deploy multiple layers of physical access controls to manage the traffic of authorized and unauthorized individuals within your facility. The outermost layer will be lighting. The entire outer perimeter of your site should be clearly lit. This will provide for easy identification of personnel and make it easier to notice intrusions. Just inside of the lighted area should be a fence or wall designed to prevent intrusion. Specific controlled points along that fence or wall should be entrance points. There should be gates, turnstiles, or mantraps, all monitored by closed-circuit television (CCTV) and security guards. Identification and authentication should be required at these entrance points before entrance is granted.
Within the facility, areas of different sensitivity or confidentiality levels should be distinctly separated and compartmentalized. This is especially true of public areas and areas accessible to visitors. An additional identification/authentication process to validate a need to enter should be required when anyone is moving from one area to another. 
The most sensitive resources and systems should be isolated from all but the most privileged personnel and located at the center or core of the facility.

A mantrap is a double set of doors that is often protected by a guard (see Figure 19.1). The purpose of a mantrap is to contain a subject until their identity and authentication is verified. If they are proven to be authorized for entry, the inner door opens, allowing them to enter the facility or premises. If they are not authorized, both doors remain closed and locked until an escort (typically a guard or a police officer) arrives to escort them off the property or arrest them for trespassing (this is known as a delay feature). Often a mantrap will include a scale to prevent piggybacking or tailgating.

Lighting

Lighting is one of the most commonly used forms of perimeter security control. The primary purpose of lighting is to discourage casual intruders, trespassers, prowlers, and would-be thieves who would rather perform their maliciousness in the dark. However, lighting is not a strong deterrent. It should not be used as the primary or sole protection mechanism except in areas with a low threat level.

FIGURE 19.1 A secure physical boundary with a mantrap and a turnstile

Lighting should not illuminate the positions of guards, dogs, patrol posts, or other similar security elements. It should be combined with guards, dogs, CCTV, or some form of intrusion detection or surveillance mechanism. Lighting must not cause a nuisance or problem for nearby residents, roads, railways, airports, and so on.
The National Institute of Standards and Technology (NIST) standard for perimeter protection using lighting is that critical areas should be illuminated with 2 foot-candles of power at a height of 8 feet. Another common issue related to the use of lighting is the placement of the lights. 
Standards seem to indicate that light poles should be placed the same distance apart as the diameter of the illuminated area created by the light. So, if the lighted area is 40 feet in diameter, the poles should be 40 feet apart.

Security Guards and Dogs

All physical security controls, whether static deterrents or active detection and surveillance mechanisms, ultimately rely upon personnel to intervene and stop actual intrusions and attacks. Security guards exist to fulfill this need. Guards may be posted around a perimeter or inside to monitor access points or watch detection and surveillance monitors. The real benefit of guards is that they are able to adapt and react to any condition or situation. Guards are able to learn and recognize attack and intrusion activities and patterns, can adjust to a changing environment, and are able to make decisions and judgment calls. Security guards are often an appropriate security control when immediate, onsite situation handling and decision making is necessary.
Unfortunately, using security guards is not a perfect solution. There are numerous disadvantages to deploying, maintaining, and relying upon security guards. Not all environments and facilities support security guards. This may be due to actual human incompatibility or to the layout, design, location, and construction of the facility. Not all security guards are themselves reliable. Prescreening, bonding, and training do not guarantee that you won’t end up with an ineffective and unreliable security guard. Likewise, even if a guard is initially reliable, they are subject to physical injury and illness, take vacations, can become distracted, are vulnerable to social engineering, and can become unemployable due to substance abuse. 
In addition, they are sometimes focused on self-preservation instead of the preservation of the security of the guarded facility. This may mean that security guards can offer protection only up to the point at which their life is endangered. Additionally, security guards are usually unaware of the scope of the operations within a facility and are therefore not thoroughly equipped to know how to respond to every situation. Finally, security guards are expensive.
Guard dogs can be an alternative to security guards. They can often be deployed as a perimeter security control. As a detection and deterrent, dogs are extremely effective. However, dogs are costly, require a high level of maintenance, and impose serious insurance and liability requirements.

Keys and Combination Locks

Locks are used to keep closed doors closed. They are designed and deployed to prevent access by everyone without proper authorization. A lock is a crude form of an identification and authorization mechanism. If you possess the correct key or combination, you are considered authorized and permitted entry. Key-based locks are the most common and inexpensive forms of physical access control devices. These are often known as preset locks. These types of locks are often subject to picking, which is often categorized under the class of lock mechanism attacks called shimming.
Programmable or combination locks offer a broader range of control than preset locks. Some programmable locks can be configured with multiple valid access combinations or may include digital or electronic controls employing keypads, smart cards, or cipher devices. 
For instance, an Electronic Access Control (EAC) lock comprises three elements: an electromagnet to keep the door closed, a credential reader to authenticate subjects and to disable the electromagnet, and a door-closed sensor to reenable the electromagnet.
Locks serve as an alternative to security guards as a perimeter entrance access control device. A gate or door can be opened and closed to allow access by a security guard who verifies your identity before granting access, or the lock itself can serve as the verification device that also grants or restricts entry.

Badges

Badges, identification cards, and security IDs are forms of physical identification and/or of electronic access control devices. A badge can be as simple as a name tag indicating whether you are a valid employee or a visitor. Or it can be as complex as a smart card or token device that employs multifactor authentication to verify and prove your identity and provide authentication and authorization to access a facility, specific rooms, or secured workstations. Badges often include pictures, magnetic strips with encoded data, and personal details to help a security guard verify identity.
Badges may be used in environments in which physical access is primarily controlled by security guards. In such conditions, the badge serves as a visual identification tool for the guards. They can verify your identity by comparing your picture to your person and consult a printed or electronic roster of authorized personnel to determine whether you have valid access.
Badges can also serve in environments guarded by scanning devices rather than security guards. In such conditions, the badge can be used either for identification or for authentication. 
When the badge is used for identification, it is swiped in a device, and then the badge owner must provide one or more authentication factors, such as a password, passphrase, or biological trait (if a biometric device is used). When the badge is used for authentication, the badge owner provides their ID, username, and so on and then swipes the badge to authenticate.

Motion Detectors

A motion detector, or motion sensor, is a device that senses the occurrence of motion in a specific area. There are many different types of motion detectors, including infrared, heat, wave pattern, capacitance, photoelectric, and passive audio. An infrared motion detector monitors for significant or meaningful changes in the infrared lighting pattern of a monitored area. A heat-based motion detector monitors for significant or meaningful changes in the heat levels and patterns in a monitored area. A wave pattern motion detector transmits a consistent low ultrasonic or high microwave frequency pattern into the monitored area and monitors for significant or meaningful changes or disturbances in the reflected pattern. A capacitance motion detector senses changes in the electrical or magnetic field surrounding a monitored object. A photoelectric motion detector senses changes in the visible light levels of the monitored area. Photoelectric motion detectors are usually deployed in internal rooms that have no windows and are kept dark. A passive audio motion detector listens for abnormal sounds in the monitored area.

Intrusion Alarms

Whenever a motion detector registers a significant or meaningful change in the environment, it triggers an alarm. An alarm is a separate mechanism that triggers a deterrent, a repellant, and/or a notification. Alarms that trigger deterrents may engage additional locks, shut doors, and 
The goal of such an alarm is to make further intrusion or attack more difficult. Alarms \nthat trigger repellants usually sound an audio siren or bell and turn on lights. These kinds of \nalarms are used to discourage the intruder or attacker from continuing their malicious or tres-\npassing activities and get them to leave the premises. Alarms that trigger notification are often \nsilent from the perspective of an intruder/attacker, but they record data about the incident and \nnotify administrators, security guards, and law enforcement. The recording of an incident can \ntake the form of log files and/or CCTV tapes. The purpose of a silent alarm is to bring autho-\nrized security personnel to the location of the intrusion or attack in hopes of catching the person \ncommitting the unwanted acts.\nLocal alarm systems must broadcast an audible alarm signal that can be easily heard up to \n400 feet away. Additionally, they must be protected, usually by security guards, from tampering \nand disablement. For a local alarm system to be effective, there must be a security team or \nguards positioned nearby who can respond when the alarm is triggered. A centralized alarm sys-\ntem may not have a local alarm; a remote or centralized monitoring station is signaled when the \nalarm is triggered. Auxiliary alarm systems can be added to either local or centralized alarm sys-\ntems. The purpose of an auxiliary alarm system is to notify local police or fire services when an \nalarm is triggered.\nSecondary Verification Mechanisms\nWhen motion detectors, sensors, and alarms are used, secondary verification mechanisms \nshould be in place. As the sensitivity of these devices is increased, a false trigger will occur more \noften. Innocuous events such as the presence of animals, birds, bugs, and authorized personnel \ncan trigger false alarms. 
Deploying two or more detection and sensor systems and requiring two or more triggers in quick succession before an alarm is raised may significantly reduce false alarms and increase the certainty of sensing actual intrusions or attacks.
CCTV (closed-circuit television via security cameras) is a security mechanism related to motion detectors, sensors, and alarms. However, CCTV is not an automated detection-and-response system. CCTV requires personnel to watch the captured video to detect suspicious and malicious activities and to trigger alarms. Security cameras can expand the effective visible range of a security guard, thereby increasing the scope of their oversight. In many cases, CCTV is not used as a primary detection tool due to the high cost of paying a person to sit and watch the video screens. Instead, it is used as a secondary or follow-up mechanism that is reviewed after a trigger by an automated system occurs. In fact, the same logic used on auditing and audit trails is used for CCTV and recorded events. A CCTV is a preventative measure, while reviewing recorded events is a detective measure.

Technical Controls

The technical controls most often employed as access control mechanisms to manage physical access include smart/dumb cards and biometrics. In addition to access control, physical security mechanisms include audit trails, access logs, and intrusion detection systems (IDSs).

Smart Cards

Smart cards are credit-card-sized IDs, badges, or security passes that have a magnetic strip, bar code, or integrated circuit chip embedded in them. They can contain information about the authorized bearer that can be used for identification and/or authentication purposes. Some smart cards are even capable of processing information or can be used to store reasonable amounts of data in a memory chip. 
A smart card can be referred to by several phrases or terms:
- An identity token containing integrated circuits (ICs)
- A processor IC card
- An IC card with an ISO 7816 interface

Smart cards are often viewed as a complete security solution, but they should not be considered a complete solution. As with any single security mechanism, such a solution has weaknesses and vulnerabilities. Smart cards can be subjected to physical attacks, logical attacks, Trojan horse attacks, and social engineering attacks.
Memory cards are machine-readable ID cards with a magnetic strip. Like a credit card, debit card, or ATM card, memory cards are capable of retaining a small amount of data but are unable to process data like a smart card. Memory cards often function as a type of two-factor control in that they usually require that the user have physical possession of the card (Type 2 factor) as well as know the PIN code for the card (Type 1 factor). However, memory cards are easy to copy or duplicate and are considered insufficient for authentication purposes in a secure environment.
Dumb cards are human-readable ID cards that usually have a photo and written information about the authorized bearer. Dumb cards are for use in environments in which automated controls are infeasible or unavailable but security guards are practical.

Proximity Readers

In addition to smart and dumb cards, proximity readers can be used to control physical access. A proximity reader can be a passive device, a field-powered device, or a transponder. The proximity device is worn or held by the authorized bearer. When they pass a proximity reader, the reader is able to determine who the bearer is and whether they have authorized access. A passive device reflects or otherwise alters the electromagnetic field generated by the reader. This alteration is detected by the reader. 
The passive device has no active electronics; it is just a small magnet with specific properties (like the antitheft devices commonly found on DVDs). A field-powered device has electronics that are activated when it enters the electromagnetic field generated by the reader. Such devices actually generate electricity from the EM field to power themselves (like card readers that only require that the access card be waved within inches of the reader to unlock doors). A transponder device is self-powered and transmits a signal received by the reader. This can occur consistently or only at the press of a button (like a toll road pass or a garage door opener).
In addition to smart/dumb cards and proximity readers, physical access can be managed with biometric access control devices. See Chapter 1, "Accountability and Access Control," for a description of biometric devices.

Chapter 19  Physical Security Requirements

Access Abuses
No matter what form of physical access control is used, a security guard or other monitoring system must be deployed to prevent abuse, masquerading, and piggybacking. Examples of abuses of physical access controls are propping open secured doors and bypassing locks or access controls. Masquerading is using someone else's security ID to gain entry into a facility. Piggybacking is following someone through a secured gate or doorway without being identified or authorized personally.
Audit trails and access logs are useful tools even for physical access control. They may need to be created manually by security guards, or they can be generated automatically if sufficient automated access control mechanisms (such as smart cards and certain proximity readers) are in place.
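Automated mechanisms like these typically emit structured log entries rather than free text. A hypothetical record layout for such an automatically generated entry (all field names invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class AccessLogEntry:
    """One automatically generated physical access event (hypothetical layout)."""
    timestamp: str       # when the subject requested entry, e.g. ISO 8601
    subject: str         # card or badge identifier presented
    entry_point: str     # which gate or door was used
    auth_result: bool    # whether authentication succeeded
    open_seconds: float  # how long the secured gate remained open

# An example entry as a card reader might record it:
entry = AccessLogEntry("2005-06-01T08:42:10", "card-0417-A", "gate-2", True, 6.5)
```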
The time a subject requests entry, the result of the authentication process, and the length of time the secured gate remains open are important elements to include in audit trails and access logs. In addition to the electronic or paper trail, you should consider monitoring entry points with CCTV. CCTV enables you to compare the audit trails and access logs with a visually recorded history of the events. Such information is critical for reconstructing the events of an intrusion, breach, or attack.

Intrusion Detection Systems
Intrusion detection systems are systems, automated or manual, designed to detect an attempted intrusion, breach, or attack by an unauthorized individual; the use of an unauthorized entry point; or an event occurring at an unauthorized or abnormal time. Intrusion detection systems used to monitor physical activity may include security guards, automated access controls, and motion detectors, as well as other specialty monitoring techniques. Physical intrusion detection systems, also called burglar alarms, detect unauthorized activities and notify the authorities (internal security or external law enforcement). Physical intrusion detection systems can monitor for vibrations, movement, temperature changes, sound, changes in electromagnetic fields, and much more. The most common type of system uses a simple circuit (a.k.a. dry contact switches) comprising foil tape in entrance points to detect when a door or window has been opened.
An intrusion detection mechanism is useful only if it is connected to an intrusion alarm. An intrusion alarm notifies authorities about a breach of physical security. There are four types of alarms:
Local alarm system: An alarm sounds locally and can be heard up to 400 feet away.
Central station system: The alarm is silent locally, but offsite monitoring agents are notified so they can respond to the security breach.
Most residential security systems are of this type. Most central station systems are well-known or national security companies, such as Brinks and ADT.
Proprietary system: This is the same thing as a central station system; however, the host organization has its own onsite security staff waiting to respond to security breaches.
Auxiliary station: When the security perimeter is breached, emergency services are notified to respond to the incident and arrive at the location. This could include fire, police, and medical services.
Two or more of these types of intrusion and alarm systems can be incorporated into a single solution. However, there are two aspects of any intrusion detection and alarm system that can cause it to fail: how it gets its power and how it communicates. If the system loses power, it will not function. Thus, a reliable detection and alarm system has a battery backup with enough stored power for 24 hours of operation. If the communication lines are cut, the alarm may not function and security personnel and emergency services will not be notified. Thus, a reliable detection and alarm system has a heartbeat sensor for line supervision. A heartbeat sensor is a mechanism by which the communication pathway is either constantly or periodically checked with a test signal. If the receiving station ever detects a failed heartbeat signal, the alarm is triggered automatically. Both of these measures are designed to prevent an intruder from circumventing the detection and alarm system.

Emanation Security
Many electrical devices emanate electrical signals or radiation that can be intercepted by unauthorized individuals. These signals may contain confidential, sensitive, or private data. Obvious examples of emanation devices are wireless networking equipment and mobile phones, but there are many other devices that are vulnerable to interception.
Possible examples include monitors, modems, and internal and external media drives (hard drives, floppy drives, CDs, etc.). With the right equipment, unauthorized users can intercept the electromagnetic or radio frequency signals (collectively known as emanations) and extract confidential data.

TEMPEST
Clearly, if a device is sending out a signal that can be intercepted by someone outside your organization, a security precaution is needed. The types of countermeasures and safeguards used to protect against emanation attacks are known as Transient Electromagnetic Pulse Equipment Shielding Techniques (TEMPEST) devices. TEMPEST was originally a government research study aimed at protecting electronic equipment from damage caused by the electromagnetic pulse (EMP) of nuclear explosions. It has since expanded into a general study of monitoring emanations and preventing emanation interception. Thus TEMPEST is now a formal name referencing a broad category of activities rather than an acronym for a specific purpose.

TEMPEST Countermeasures
Some TEMPEST countermeasures are Faraday cages, white noise, and control zones. A Faraday cage is a box, mobile room, or entire building that is designed with an external metal skin, often a wire mesh, that fully surrounds an area on all six sides (i.e., front, back, left, right, top, and bottom). This metal skin is slightly electrified to produce a capacitor-like effect (hence the name Faraday) that prevents all electromagnetic signals (emanations) from exiting or entering the enclosed area. Faraday cages are very effective in blocking EM signals. In fact, inside an active Faraday cage, mobile phones do not work, nor can you pick up broadcast radio or television stations.
White noise is simply the broadcasting of false traffic at all times to mask and hide the presence of real emanations.
White noise can consist of a real signal of another source that is not confidential, a constant signal of a specific frequency, a randomly variable signal (such as the white noise heard between radio stations or television stations), or even a jam signal that causes interception equipment to fail. White noise is most effective when created around the perimeter of an area so that it is broadcast outward to protect the internal area where emanations may be needed for normal operations.
The final type of TEMPEST countermeasure, a control zone, is simply the implementation of either a Faraday cage or white noise generation in an environment where a specific area is protected while the rest is not. A control zone can be a room, a floor, or an entire building. Control zones are those areas where emanation signals are supported and used by necessary equipment, such as wireless networking, mobile phones, radios, and televisions. Outside of the control zones, emanation interception is blocked or prevented through the use of various TEMPEST countermeasures.

Environment and Life Safety
An important aspect of physical access control and maintaining the security of a facility is protecting the basic elements of the environment and protecting human life. In all circumstances and under all conditions, the most important aspect of security is protecting people. Preventing harm to people is the most important goal of all security solutions.

Personnel Safety
Part of maintaining safety for personnel is maintaining the basic environment of a facility. For short periods of time, people can survive without water, food, air conditioning, and power. But in some cases, the loss of these elements can have disastrous results or they can be symptoms of more immediate and dangerous problems.
Flooding, fires, release of toxic materials, and natural disasters all threaten human life as well as the stability of a facility. Physical security procedures should focus on protecting human life and then on restoring the safety of the environment and restoring the utilities necessary for the IT infrastructure to function.
People should always be your top priority. Only after personnel are safe can you consider addressing business continuity issues. Many organizations are adopting Occupant Emergency Plans (OEPs) to guide and assist with sustaining personnel safety in the event of a disaster. The OEP provides guidance on how to minimize threats to life, prevent injury, and protect property from damage in the event of a destructive physical event. The OEP does not address IT issues or business continuity, just personnel and general property. The BCP and DRP address IT and business continuity and recovery issues.

Power and Electricity
Power supplied by electric companies is not always consistent and clean. Most electronic equipment demands clean power to function properly. Equipment damage due to power fluctuations is a common occurrence. Many organizations opt to manage their own power through several means. An uninterruptible power supply (UPS) is a type of self-charging battery that can be used to supply consistent clean power to sensitive equipment. A UPS functions basically by taking power in from the wall outlet, storing it in a battery, pulling power out of the battery, and then feeding that power to whatever devices are connected to it. By directing current through its battery, it is able to maintain a consistent clean power supply. A UPS has a second function, one that is used most often as a selling point. A UPS provides continuous power even after the primary power source fails.
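A rough back-of-the-envelope runtime estimate can be made from battery capacity and equipment load. A minimal sketch (the 90 percent efficiency figure and all numbers here are illustrative assumptions, not vendor specifications):

```python
def ups_runtime_hours(battery_capacity_wh, load_w, efficiency=0.9):
    """Estimate UPS runtime: usable stored energy divided by the connected load.

    battery_capacity_wh: battery capacity in watt-hours (illustrative figure)
    load_w: total power drawn by connected equipment, in watts
    efficiency: fraction of stored energy actually delivered (assumed, not measured)
    """
    if load_w <= 0:
        raise ValueError("load must be positive")
    return battery_capacity_wh * efficiency / load_w

# e.g., a 900 Wh battery feeding a 300 W load at the assumed 90% efficiency
estimated = ups_runtime_hours(900, 300)
```

Real runtime curves are nonlinear and battery health degrades over time, so a sketch like this is only a first approximation; consult the manufacturer's runtime tables for actual sizing.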
A UPS can continue to supply power for minutes or hours, depending on its capacity and the amount of power the equipment needs.
Another means to ensure that equipment is not damaged by power fluctuations is the use of power strips with surge protectors. A surge protector includes a fuse that will blow before power levels change significantly enough to cause damage to equipment. However, once a surge protector's fuse or circuit is tripped, the electric flow is completely interrupted. Surge protectors should be used only when instant termination of electricity will not cause damage or loss to the equipment. Otherwise, a UPS should be employed.
If maintaining operations for considerable time in spite of a brownout or blackout is a necessity, then onsite electric generators are required. Such generators turn on automatically when a power failure is detected. Most generators operate using a fuel tank of liquid or gaseous propellant that must be maintained to ensure reliability. Electric generators are considered alternate or backup power sources.
The problems with power are numerous. Here is a list of terms associated with power issues you should be familiar with:
Fault: A momentary loss of power
Blackout: A complete loss of power
Sag: Momentary low voltage
Brownout: Prolonged low voltage
Spike: Momentary high voltage
Surge: Prolonged high voltage
Inrush: An initial surge of power, usually associated with connecting to a power source, whether primary or alternate/secondary
Noise: A steady interfering disturbance
Transient: A short duration of line noise disturbance
Clean: Nonfluctuating pure power
Ground: The wire in an electrical circuit that is grounded
A brownout is an interesting power issue because its definition references the ANSI standards for power.
The ANSI standards allow for an 8-percent drop in power between the power source and the facility meter and a drop of 3.5 percent between the facility meter and the wall outlet before the instance of prolonged low voltage is labeled a brownout. The ANSI standard further distinguishes that low voltage outside of your meter is to be repaired by the power company, while an internal brownout is your responsibility.

Noise
Noise can cause more than just problems with how equipment functions; it can also interfere with the quality of communications, transmissions, and playback. Noise generated by electric current can affect any means of data transmission that relies on electromagnetic transport mechanisms, such as telephone, cellular, television, audio, radio, and network mechanisms. There are two types of electromagnetic interference (EMI): common mode and traverse mode. Common mode noise is generated by the difference in power between the hot and ground wires of a power source or operating electrical equipment. Traverse mode noise is generated by the difference in power between the hot and neutral wires of a power source or operating electrical equipment.
A similar issue is radio frequency interference (RFI), which can affect many of the same systems as EMI. RFI is generated by a wide number of common electrical appliances, including fluorescent lights, electrical cables, electric space heaters, computers, elevators, motors, and electric magnets.
Protecting your power supply and your equipment from noise is an important part of maintaining a productive and functioning environment for your IT infrastructure.
Steps to take for this kind of protection include providing for sufficient power conditioning, establishing proper grounding, shielding all cables, and limiting exposure to EMI and RFI sources.

Temperature, Humidity, and Static
In addition to power considerations, maintaining the environment involves control over the HVAC mechanisms. Rooms primarily containing computers should be kept at 60 to 75 degrees Fahrenheit (15 to 23 degrees Celsius). Humidity in a computer room should be maintained between 40 and 60 percent. Too much humidity can cause corrosion. Too little humidity causes static electricity. Even on nonstatic carpeting, if the environment has low humidity it is still possible to generate 20,000-volt static discharges. As you can see in Table 19.1, even minimal levels of static discharge can destroy electronic equipment.

TABLE 19.1  Static Voltage and Damage

Static Voltage   Possible Damage
40               Destruction of sensitive circuits and other electronic components
1,000            Scrambling of monitor displays
1,500            Destruction of data stored on hard drives
2,000            Abrupt system shutdown
4,000            Printer jam or component damage
17,000           Permanent circuit damage

Water
Water leakage and flooding should be addressed in your environmental safety policy and procedures. Plumbing leaks are not an everyday occurrence, but when they do happen, they often cause significant damage. Water and electricity don't mix. If your computer systems come in contact with water, especially while they are operating, damage is sure to occur. Plus water and electricity create a serious risk of electrocution to personnel. Whenever possible, locate server rooms and critical computer equipment away from any water source or transport pipes. You may also want to install water detection circuits on the floor around mission-critical systems.
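Conceptually, such a detection circuit is simple: floor sensors report a wet or dry state, and any wet sensor raises an alert. A minimal software analogy (sensor names and the alert callback are invented for illustration):

```python
def check_water_sensors(sensor_readings, alert):
    """Poll floor sensors near mission-critical systems.

    sensor_readings: mapping of sensor name -> True if that sensor is wet
    alert: callback invoked with a message for each wet sensor
    Returns the list of sensors that triggered.
    """
    wet = [name for name, is_wet in sensor_readings.items() if is_wet]
    for name in wet:
        alert(f"Water detected at sensor {name}")
    return wet

# Example poll: one dry sensor, one wet sensor in a hypothetical server room
triggered = check_water_sensors(
    {"server-room-ne": False, "server-room-sw": True}, alert=print
)
```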
\nWater detection circuits will sound an alarm and alert you if water is encroaching upon the \nequipment. To minimize emergencies, be familiar with shutoff valves and drainage locations. In \naddition to monitoring for plumbing leaks, you should evaluate your facility’s capability of han-\ndling severe rain or flooding in your area. Is the facility located on a hill or in a valley? Is there \nsufficient drainage? Is there a history of flooding or accumulation of standing water? Is your \nserver room located in the basement or on the first floor?\nFire Detection and Suppression\nFire detection and suppression must not be overlooked. Protecting personnel from harm should \nalways be the most important goal of any security or protection system. In addition to protect-\ning people, fire detection and suppression is designed to keep damage caused by fire, smoke, \nheat, and suppression materials to a minimum, especially in regard to the IT infrastructure.\nBasic fire education involves knowledge of the fire triangle (see Figure 19.2). The three cor-\nners of the triangle represent fire, heat, and oxygen. The center of the triangle represents the \nchemical reaction of the three elements. The point of the fire triangle is to illustrate that if you \ncan remove any one of the four items from the fire triangle, the fire can be extinguished. 
Different suppression mediums address different aspects of the fire:
• Water suppresses the temperature.
• Soda acid and other dry powders suppress the fuel supply.
• CO2 suppresses the oxygen supply.
• Halon (and its substitutes) interferes with the chemical reaction of combustion and/or suppresses the oxygen supply.
When selecting a suppression medium, it is important to consider what aspect of the fire triangle it addresses, what this represents in reality, how effective the suppression medium usually is, and what effect the suppression medium will have on your environment.
In addition to understanding the fire triangle, it is also helpful to understand the stages of fire. Fire has numerous stages, and Figure 19.3 addresses the four most vital stages.
Stage 1: The incipient stage. At this stage, there is only air ionization but no smoke.
Stage 2: The smoke stage. In Stage 2, smoke is visible from the point of ignition.
Stage 3: The flame stage. This is when a flame can be seen with the naked eye.
Stage 4: The heat stage. At Stage 4, the fire is considerably further down the timescale, to the point where there is an intense heat buildup and everything in the area burns.

FIGURE 19.2  The fire triangle
FIGURE 19.3  The four primary stages of fire

The earlier a fire is detected, the easier it is to extinguish and the less damage it and its suppression medium(s) can cause.
One of the basics of fire management is proper personnel awareness training. Everyone should be thoroughly familiar with the fire suppression mechanisms in their facility. Everyone should also be familiar with at least two evacuation routes from their primary work location and know how to locate evacuation routes elsewhere in the facility. Personnel should be trained in the location and use of fire extinguishers.
Other items that can be included in fire or general emergency response training are cardiopulmonary resuscitation (CPR) training, emergency shutdown procedures, and a preestablished rendezvous location or safety verification mechanism (such as voicemail).
Most fires in a data center are caused by overloaded electrical distribution outlets.

Fire Extinguishers
There are several different types of fire extinguishers. Understanding which type to use on various forms of fire is essential to effective fire suppression. If a fire extinguisher is used improperly or the wrong form of fire extinguisher is used, the fire could spread and intensify instead of being quenched. Fire extinguishers are to be used only when a fire is still in the incipient stage. Table 19.2 lists the four common classes of fire extinguishers.
Water cannot be used on Class B fires because it splashes the burning liquids, and said liquids usually float. Water cannot be used on Class C fires because of the potential for electrocution. Oxygen suppression cannot be used on metal fires because burning metal produces its own oxygen.

Fire Detection Systems
To properly protect a facility from fire, install an automated detection and suppression system. There are many types of fire detection systems. Fixed temperature detection systems trigger suppression when a specific temperature is reached. The trigger is usually a metal or plastic component in the sprinkler head that melts at a specific temperature. Rate of rise temperature detection systems trigger suppression when the speed at which the temperature changes reaches a specific level. Flame actuated systems trigger suppression based on the infrared energy of flames.
Smoke actuated systems trigger suppression based on photoelectric or radioactive ionization sensors.

TABLE 19.2  Fire Extinguisher Classes

Class   Type                    Suppression Material
A       Common combustibles     Water, soda acid (a dry powder or liquid chemical)
B       Liquids                 CO2, Halon*, soda acid
C       Electrical              CO2, Halon*
D       Metal                   Dry powder
* Halon or EPA-approved Halon substitute.

Most fire detection systems can be linked to fire response service notification mechanisms. When suppression is triggered, such linked systems will contact the local fire response team and request aid using an automated message or alarm.
To be effective, fire detectors need to be placed strategically. Don't forget to place them in dropped ceilings and raised floors, in server rooms, in private offices and public areas, in HVAC vents, in elevator shafts, in the basement, and so on.
As for the suppression mechanisms used, they can be based on water or on a fire suppression gas system. Water is the most common in human-friendly environments, whereas gaseous systems are more appropriate for computer rooms where personnel typically do not reside.

Water Suppression Systems
There are four main types of water suppression systems. A wet pipe system (also known as a closed head system) is always full of water. Water discharges immediately when suppression is triggered. A dry pipe system contains compressed air. Once suppression is triggered, the air escapes, opening a water valve that in turn causes the pipes to fill and discharge water into the environment. A deluge system is another form of dry pipe system that uses larger pipes and therefore a significantly larger volume of water. Deluge systems are inappropriate for environments that contain electronics and computers. A preaction system is a combination dry pipe/wet pipe system.
The system exists as a dry pipe until the initial stages of a fire (smoke, heat, etc.) are detected, and then the pipes are filled with water. The water is released only after the sprinkler head activation triggers are melted by sufficient heat. If the fire is quenched before the sprinklers are triggered, the pipes can be manually emptied and reset. This also allows for manual intervention to stop the release of water before sprinkler triggering occurs. Preaction systems are the most appropriate water-based system for environments that include both computers and humans in the same locations.
The most common cause of failure for a water-based system is human error, such as turning off the water source when there is a fire or triggering a water release when there is no fire.

Gas Discharge Systems
Gas discharge systems are usually more effective than water discharge systems. However, gas discharge systems should not be employed in environments in which people are located. Gas discharge systems usually remove the oxygen from the air, thus making them hazardous to personnel. They employ a pressurized gaseous suppression medium, such as CO2, Halon, or FM-200 (a Halon replacement).
Halon is a very effective fire suppression compound, but it degrades into toxic gases at 900 degrees Fahrenheit. Additionally, it is not environmentally friendly. The EPA has banned the manufacture of Halon in the United States, but it can still be imported. However, according to the Montreal Protocol, you should contact a Halon recycling facility to make arrangements for refilling a discharged system instead of contacting a vendor or manufacturer directly. This action is encouraged so that already-produced Halon will be consumed and less new Halon will be created.
Due to the issues with Halon, it is often replaced by a more ecological and less toxic medium.
\nThe following list includes EPA-approved replacements for Halon:\n\u0002\nFM-200 (HFC-227ea)\n\u0002\nCEA-410 or CEA 308\n\u0002\nNAF-S-III (HCFC Blend A)\n\u0002\nFE-13 (HCFC-23)\n\u0002\nAragon (IG55) or Argonite (IG01)\n\u0002\nInergen (IG541)\nHalon may also be replaced by low-pressure water mists, but those systems are usually not \nemployed in computer rooms or electrical equipment storage facilities. A low-pressure water \nmist is a vapor cloud used to quickly reduce the temperature of an area.\nDamage\nAddressing fire detection and suppression includes dealing with the possible contamination and \ndamage caused by a fire. The destructive elements of a fire include smoke and heat, but they also \ninclude the suppression medium, such as water or soda acid. Smoke is damaging to most storage \ndevices. Heat can damage any electronic or computer component. One hundred degrees Fahr-\nenheit can damage storage tapes, 175 degrees can damage computer hardware (i.e., CPU and \nRAM), and 350 degrees can damage paper products (i.e., warping and discoloration).\nSuppression mediums can cause short circuits, initiate corrosion, or otherwise render equip-\nment useless. All of these issues must be addressed when designing a fire response system.\nDon’t forget that in the event of a fire, in addition to damage caused by the fire \nand your selected suppression medium, members of the fire department may \ncause damage using their hoses to spray water and their axes while searching \nfor hot spots.\nEquipment Failure\nNo matter what the quality of the equipment your organization chooses to purchase and install \nis, eventually it will fail. Understanding this fact and preparing for it will ensure the ongoing \navailability of your IT infrastructure and will help you to protect the integrity and availability \nof your resources.\nPreparing for equipment failure can take many forms. 
In some non-mission-critical situations, simply knowing where you can purchase replacement parts for a 48-hour replacement timeline is sufficient. In other situations, maintaining onsite replacement parts is mandatory. Keep in mind that the response time in returning a system to a fully functioning state is directly proportional to the cost involved in maintaining such a solution. Costs include storage, transportation, prepurchasing, and maintaining onsite installation and restoration expertise. In some cases, maintaining onsite replacements is infeasible. For those cases, establishing a service level agreement (SLA) with the hardware vendor is essential. An SLA clearly defines the response time a vendor will provide in the event of an equipment failure emergency.
Aging hardware should be scheduled for replacement and/or repair. The schedule for such operations should be based on the mean time to failure (MTTF) and mean time to repair (MTTR) estimates established for each device. MTTF is the expected typical functional lifetime of the device given a specific operating environment. MTTR is the average length of time required to perform a repair on the device. A device can often undergo numerous repairs before a catastrophic failure is expected. Be sure to schedule all devices to be replaced before their MTTF expires. When a device is sent out for repairs, you need to have an alternate solution or a backup device to fill in for the duration of the repair time. Often, waiting until a minor failure occurs before a repair is performed is satisfactory, but waiting until a complete failure occurs before replacement is an unacceptable security practice.

Summary
If you don't have control over the physical environment, no amount of administrative or technical/logical access controls can provide adequate security.
If a malicious person can gain physical access to your facility or equipment, they own it.
There are many aspects and elements to implementing and maintaining physical security. One of the core elements is selecting or designing the facility that will house your IT infrastructure and the operations of your organization. You must start with a plan that outlines the security needs of your organization and emphasizes methods or mechanisms to employ to provide security. Such a plan is developed through a process known as critical path analysis.
The security controls implemented to manage physical security can be divided into three groups: administrative, technical, and physical. Administrative physical security controls include facility construction and selection, site management, personnel controls, awareness training, and emergency response and procedures. Technical physical security controls include access controls, intrusion detection, alarms, CCTV, monitoring, HVAC, power supplies, and fire detection and suppression. Examples of physical controls for physical security include fencing, lighting, locks, construction materials, mantraps, dogs, and guards.
There are many types of physical access control mechanisms that can be deployed in an environment to control, monitor, and manage access to a facility. These range from deterrents to detection mechanisms. They can be fences, gates, turnstiles, mantraps, lighting, security guards, security dogs, key locks, combination locks, badges, motion detectors, sensors, and alarms.
The technical controls most often employed as an access control mechanism to manage physical access include smart/dumb cards and biometrics.
In addition to access control, physical security mechanisms can be in the form of audit trails, access logs, and intrusion detection systems.
An important aspect of physical access control and maintaining the security of a facility is protecting the basic elements of the environment and protecting human life. In all circumstances and under all conditions, the most important aspect of security is protecting people. Preventing harm is the utmost goal of all security solutions. Providing clean power sources and managing the environment are also important.
Fire detection and suppression must not be overlooked. In addition to protecting people, fire detection and suppression is designed to keep damage caused by fire, smoke, heat, and suppression materials to a minimum, especially in regard to the IT infrastructure.

Exam Essentials
Understand why there is no security without physical security. Without control over the physical environment, no amount of administrative or technical/logical access controls can provide adequate security.
If a malicious person can gain physical access to your facility or equipment, they can do just about anything they want, from destruction to disclosure and alteration.
Be able to list administrative physical security controls.
Examples of administrative physical security controls are facility construction and selection, site management, personnel controls, awareness training, and emergency response and procedures.
Be able to list the technical physical security controls.
Technical physical security controls can be access controls, intrusion detection, alarms, CCTV, monitoring, HVAC, power supplies, and fire detection and suppression.
Be able to name the physical controls for physical security.
Physical controls for physical security are fencing, lighting, locks, construction materials, mantraps, dogs, and guards.
Know the functional order of controls.
They are deterrence, then denial, then detection, then delay.
Know the key elements in making a site selection and designing a facility for construction.
The key elements in making a site selection are visibility, composition of the surrounding area, area accessibility, and the effects of natural disasters. A key element in designing a facility for construction is understanding the level of security needed by your organization and planning for it before construction begins.
Know how to design and configure secure work areas.
There should not be equal access to all locations within a facility. Areas that contain assets of higher value or importance should have restricted access. Valuable and confidential assets should be located in the heart or center of protection provided by a facility. Also, centralized server or computer rooms need not be human compatible.
Understand how to handle visitors in a secure facility.
If a facility employs restricted areas to control physical security, then a mechanism to handle visitors is required.
Often an escort is assigned to visitors, and their access and activities are monitored closely. Failing to track the actions of outsiders when they are granted access into a protected area can result in malicious activity against the most protected assets.
" }, { "page_number": 695, "text": "650
Chapter 19
Physical Security Requirements
Know the three categories of security controls implemented to manage physical security and be able to name examples of each.
The security controls implemented to manage physical security can be divided into three groups: administrative, technical, and physical. Understand when and how to use each and be able to list examples of each kind.
Know the common threats to physical access controls.
No matter what form of physical access control is used, a security guard or other monitoring system must be deployed to prevent abuse, masquerading, and piggybacking. Abuses of physical access control include propping open secured doors and bypassing locks or access controls. Masquerading is using someone else's security ID to gain entry into a facility. Piggybacking is following someone through a secured gate or doorway without being identified or authorized personally.
Understand the need for audit trails and access logs.
Audit trails and access logs are useful tools even for physical access control. They may need to be created manually by security guards. Or they can be generated automatically if sufficiently automated access control mechanisms are in place (i.e., smart cards and certain proximity readers). You should also consider monitoring entry points with CCTV. Through CCTV, you can compare the audit trails and access logs with a visually recorded history of the events. Such information is critical to reconstructing the events of an intrusion, breach, or attack.
Understand the need for clean power.
Power supplied by electric companies is not always consistent and clean.
Most electronic equipment demands clean power in order to function properly. Equipment damage due to power fluctuations is a common occurrence. Many organizations opt to manage their own power through several means. A UPS (uninterruptible power supply) is a type of self-charging battery that can be used to supply consistent clean power to sensitive equipment. UPSs also provide continuous power even after the primary power source fails. A UPS can continue to supply power for minutes or hours depending on its capacity and the draw by equipment.
Know the terms commonly associated with power issues.
Know the definitions of the following: fault, blackout, sag, brownout, spike, surge, inrush, noise, transient, clean, and ground.
Understand controlling the environment.
In addition to power considerations, maintaining the environment involves control over the HVAC mechanisms. Rooms primarily containing computers should be kept at 60 to 75 degrees Fahrenheit (15 to 23 degrees Celsius). Humidity in a computer room should be maintained between 40 and 60 percent. Too much humidity can cause corrosion. Too little humidity causes static electricity.
Know about static electricity.
Even on nonstatic carpeting, if the environment has low humidity, it is still possible to generate 20,000-volt static discharges. Even minimal levels of static discharge can destroy electronic equipment.
Understand the need to manage water leakage and flooding.
Water leakage and flooding should be addressed in your environmental safety policy and procedures. Plumbing leaks are not an everyday occurrence, but when they do happen, they often cause significant damage. Water and electricity don't mix. If your computer systems come in contact with water, especially while they are operating, damage is sure to occur.
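The temperature, humidity, and UPS guidance above lends itself to a quick programmatic sanity check. The sketch below is purely illustrative — the function names, the thresholds bundled as constants, and the simple energy-divided-by-load runtime formula are assumptions for this example, not material from the exam:

```python
# Illustrative helper based on the chapter's guidance: computer rooms at
# 60-75 degrees Fahrenheit and 40-60 percent relative humidity. The UPS
# runtime estimate (usable energy divided by load draw) is a rough rule
# of thumb, not a formula from the study guide.

TEMP_RANGE_F = (60, 75)        # recommended temperature range, degrees F
HUMIDITY_RANGE_PCT = (40, 60)  # recommended relative humidity, percent

def environment_ok(temp_f: float, humidity_pct: float) -> bool:
    """Return True when both readings fall inside the recommended ranges."""
    return (TEMP_RANGE_F[0] <= temp_f <= TEMP_RANGE_F[1]
            and HUMIDITY_RANGE_PCT[0] <= humidity_pct <= HUMIDITY_RANGE_PCT[1])

def ups_runtime_minutes(battery_wh: float, load_w: float,
                        efficiency: float = 0.9) -> float:
    """Rough UPS runtime estimate in minutes for a given equipment draw."""
    return battery_wh * efficiency / load_w * 60

print(environment_ok(68, 45))          # in range -> True
print(environment_ok(80, 45))          # too hot -> False
print(ups_runtime_minutes(1000, 500))  # 1,000Wh battery, 500W load -> 108.0
```

A reading of 68 degrees at 45 percent humidity passes both checks, while 80 degrees fails the temperature check. Under the assumed formula, a 1,000Wh battery at 90 percent efficiency carrying a 500W load yields about 108 minutes — consistent with the point that a UPS buys minutes to hours of operation, not indefinite power.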
Whenever possible, locate server rooms and critical computer equipment away from any water source or transport pipes.
" }, { "page_number": 696, "text": "Understand the importance of fire detection and suppression.
Fire detection and suppression must not be overlooked. Protecting personnel from harm should always be the most important goal of any security or protection system. In addition to protecting people, fire detection and suppression is designed to keep damage caused by fire, smoke, heat, and suppression materials to a minimum, especially in regard to the IT infrastructure.
Understand the possible contamination and damage caused by a fire and suppression.
The destructive elements of a fire include smoke and heat, but they also include the suppression medium, such as water or soda acid. Smoke is damaging to most storage devices. Heat can damage any electronic or computer component. Suppression mediums can cause short circuits, initiate corrosion, or otherwise render equipment useless. All of these issues must be addressed when designing a fire response system.
" }, { "page_number": 697, "text": "Review Questions
1. Which of the following is the most important aspect of security?
A. Physical security
B. Intrusion detection
C. Logical security
D. Awareness training
2. What method can be used to map out the needs of an organization for a new facility?
A. Log file audit
B. Critical path analysis
C. Risk analysis
D. Inventory
3. What type of physical security controls focuses on facility construction and selection, site management, personnel controls, awareness training, and emergency response and procedures?
A. Technical
B. Physical
C. Administrative
D. Logical
4. Which of the following is not a security-focused design element of a facility or site?
A. Separation of work and visitor areas
B.
Restricted access to areas with higher value or importance
C. Confidential assets located in the heart or center of a facility
D. Equal access to all locations within a facility
5. Which of the following does not need to be true in order to maintain the most efficient and secure server room?
A. It must be human compatible.
B. It must include the use of non-water fire suppressants.
C. The humidity must be kept between 40 and 60 percent.
D. The temperature must be kept between 60 and 75 degrees Fahrenheit.
6. What is a perimeter-defining device used to deter casual trespassing?
A. Gates
B. Fencing
C. Security guards
D. Motion detectors
" }, { "page_number": 698, "text": "7. Which of the following is a double set of doors that is often protected by a guard and is used to contain a subject until their identity and authentication is verified?
A. Gate
B. Turnstile
C. Mantrap
D. Proximity detector
8. What is the most common form of perimeter security devices or mechanisms?
A. Security guards
B. Fences
C. CCTV
D. Lighting
9. Which of the following is not a disadvantage of using security guards?
A. Security guards are usually unaware of the scope of the operations within a facility.
B. Not all environments and facilities support security guards.
C. Not all security guards are themselves reliable.
D. Prescreening, bonding, and training do not guarantee effective and reliable security guards.
10. What is the most common cause of failure for a water-based fire suppression system?
A. Water shortage
B. People
C. Ionization detectors
D. Placement of detectors in drop ceilings
11. What is the most common and inexpensive form of physical access control device?
A. Lighting
B. Security guard
C. Key locks
D. Fences
12. What type of motion detector senses changes in the electrical or magnetic field surrounding a monitored object?
A. Wave
B. Photoelectric
C. Heat
D.
Capacitance
" }, { "page_number": 699, "text": "13. Which of the following is not a typical type of alarm that can be triggered for physical security?
A. Preventative
B. Deterrent
C. Repellant
D. Notification
14. No matter what form of physical access control is used, a security guard or other monitoring system must be deployed to prevent all but which of the following?
A. Piggybacking
B. Espionage
C. Masquerading
D. Abuse
15. What is the most important goal of all security solutions?
A. Prevention of disclosure
B. Maintaining integrity
C. Human safety
D. Sustaining availability
16. What is the ideal humidity range for a computer room?
A. 20–40 percent
B. 40–60 percent
C. 60–75 percent
D. 80–95 percent
17. At what voltage level can static electricity cause destruction of data stored on hard drives?
A. 4,000
B. 17,000
C. 40
D. 1,500
18. A Type B fire extinguisher may use all but which of the following suppression mediums?
A. Water
B. CO2
C. Halon
D. Soda acid
" }, { "page_number": 700, "text": "19. What is the best type of water-based fire suppression system for a computer facility?
A. Wet pipe system
B. Dry pipe system
C. Preaction system
D. Deluge system
20. Which of the following is typically not a culprit in causing damage to computer equipment in the event of a fire and a triggered suppression?
A. Heat
B. Suppression medium
C. Smoke
D. Light
" }, { "page_number": 701, "text": "Answers to Review Questions
1. A. Physical security is the most important aspect of overall security. Without physical security, none of the other aspects of security is sufficient.
2. B. Critical path analysis can be used to map out the needs of an organization for a new facility.
\nA critical path analysis is the process of identifying relationships between mission-critical appli-\ncations, processes, and operations and all of the supporting elements.\n3.\nC. Administrative physical security controls include facility construction and selection, site man-\nagement, personnel controls, awareness training, and emergency response and procedures.\n4.\nD. Equal access to all locations within a facility is not a security-focused design element. Each \narea containing assets or resources of different importance, value, and confidentiality should \nhave a corresponding level of security restriction placed on it.\n5.\nA. A computer room does not need to be human compatible to be efficient and secure. Having \na human-incompatible server room provides a greater level of protection against attacks.\n6.\nB. Fencing is a perimeter-defining device used to deter casual trespassing. Gates, security guards, \nand motion detectors do not define a facility’s perimeter.\n7.\nC. A mantrap is a double set of doors that is often protected by a guard and used to contain a \nsubject until their identity and authentication is verified.\n8.\nD. Lighting is the most common form of perimeter security devices or mechanisms. Your entire \nsite should be clearly lit. This provides for easy identification of personnel and makes it easier \nto notice intrusions.\n9.\nA. Security guards are usually unaware of the scope of the operations within a facility, which \nsupports confidentiality and helps reduce the possibility that a security guard will be involved in \ndisclosure of confidential information.\n10. B. The most common cause of failure for a water-based system is human error. If you turn off \nthe water source after a fire and forget to turn it back on, you’ll be in trouble for the future. Also, \npulling an alarm when there is no fire will trigger damaging water release throughout the office.\n11. C. 
Key locks are the most common and inexpensive form of physical access control device. Lighting, security guards, and fences are all much more cost intensive.
12. D. A capacitance motion detector senses changes in the electrical or magnetic field surrounding a monitored object.
13. A. There is no preventative alarm. Alarms are always triggered in response to a detected intrusion or attack.
14. B. No matter what form of physical access control is used, a security guard or other monitoring system must be deployed to prevent abuse, masquerading, and piggybacking. Espionage cannot be prevented by physical access controls.
" }, { "page_number": 702, "text": "15. C. Human safety is the most important goal of all security solutions.
16. B. The humidity in a computer room should ideally be from 40 to 60 percent.
17. D. Destruction of data stored on hard drives can be caused by 1,500 volts of static electricity.
18. A. Water is never the suppression medium in Type B fire extinguishers because they are used on liquid fires.
19. C. A preaction system is the best type of water-based fire suppression system for a computer facility.
20. D. Light is usually not damaging to most computer equipment, but fire, smoke, and the suppression medium (typically water) are very destructive.
" }, { "page_number": 703, "text": "" }, { "page_number": 704, "text": "Glossary
" }, { "page_number": 705, "text": "Numbers & Symbols
* (star) Integrity Axiom (* Axiom)
An axiom of the Biba model that states that a subject at a specific classification level cannot write data to a higher classification level. This is often shortened to "no write up."
* (star) Security Property (* Property)
A property of the Bell-LaPadula model that states that a subject at a specific classification level cannot write data to a lower classification level.
\nThis is often shortened to “no write down.”\n1000Base-T\nA form of twisted-pair cable that supports 1000Mbps or 1Gbs throughput at 100 \nmeter distances. Often called Gigabit Ethernet.\n100Base-TX\nAnother form of twisted-pair cable similar to 100Base-T.\n10Base2\nA type of coaxial cable. Often used to connect systems to backbone trunks. 10Base2 \nhas a maximum span of 185 meters with maximum throughput of 10Mpbs. Also called thinnet.\n10Base5\nA type of coaxial cable. Often used as a network’s backbone. 10Base5 has a max-\nimum span of 500 meters with maximum throughput of 10Mpbs. Also called thicknet.\n10Base-T\nA type of network cable that is made up of four pairs of wires that are twisted \naround each other and then sheathed in a PVC insulator. Also called twisted-pair.\nA\nabnormal activity\nAny system activity that does not normally occur on your system. Also \nreferred to as suspicious activity.\nabstraction\nThe collection of similar elements into groups, classes, or roles for the assignment \nof security controls, restrictions, or permissions as a collective.\nacceptance testing\nA form of testing that attempts to verify that a system satisfies the stated \ncriteria for functionality and possibly also for security capabilities of a product. 
It is used to \ndetermine whether end users or a customer will accept the completed product.\naccepting risk\nThe valuation by management of the cost/benefit analysis of possible safe-\nguards and the determination that the cost of the countermeasure greatly outweighs the possible \ncost of loss due to a risk.\naccess\nThe transfer of information from an object to a subject.\naccess control\nThe mechanism by which subjects are granted or restricted access to objects.\naccess control list (ACL)\nThe column of an access control matrix that specifies what level of \naccess each subject has over an object.\n" }, { "page_number": 706, "text": "Glossary\n661\naccess control matrix\nA table of subjects and objects that indicates the actions or functions \nthat each subject can perform on each object. Each column of the matrix is an ACL. Each row \nof the matrix is a capability list.\naccess tracking\nAuditing, logging, and monitoring the attempted access or activities of a sub-\nject. Also referred to as activity tracking.\naccount lockout\nAn element of the password policy’s programmatic controls that disables a \nuser account after a specified number of failed logon attempts. Account lockout is an effective \ncountermeasure to brute force and dictionary attacks against a system’s logon prompt.\naccountability\nThe process of holding someone responsible (accountable) for something. 
In this context, accountability is possible if a subject's identity and actions can be tracked and verified.
accreditation
The formal declaration by the Designated Approving Authority (DAA) that an IT system is approved to operate in a particular security mode using a prescribed set of safeguards at an acceptable level of risk.
ACID model
The letters in ACID represent the four required characteristics of database transactions: atomicity, consistency, isolation, and durability.
active content
Web programs that users download to their own computer for execution rather than consuming server-side resources.
ActiveX
Microsoft's answer to Sun's Java applets. It operates in a very similar fashion, but ActiveX is implemented using any one of a variety of languages, including Visual Basic, C, C++, and Java.
Address Resolution Protocol (ARP)
A subprotocol of the TCP/IP protocol suite that operates at the Data Link layer (layer 2). ARP is used to discover the MAC address of a system by polling using its IP address.
addressing
The means by which a processor refers to various locations in memory.
administrative access controls
The policies and procedures defined by an organization's security policy to implement and enforce overall access control. Examples of administrative access controls include hiring practices, background checks, data classification, security training, vacation history reviews, work supervision, personnel controls, and testing.
administrative law
Regulations that cover a range of topics from procedures to be used within a federal agency to immigration policies that will be used to enforce the laws passed by Congress.
Administrative law is published in the Code of Federal Regulations (CFR).
administrative physical security controls
Security controls that include facility construction and selection, site management, personnel controls, awareness training, and emergency response and procedures.
" }, { "page_number": 707, "text": "admissible evidence
Evidence that is relevant to determining a fact. The fact that the evidence seeks to determine must be material (i.e., related) to the case. In addition, the evidence must be competent, meaning that it must have been obtained legally. Evidence that results from an illegal search would be inadmissible because it is not competent.
Advanced Encryption Standard (AES)
The encryption standard selected in October 2000 by the National Institute of Standards and Technology (NIST) that is based on the Rijndael cipher.
advisory policy
A policy that discusses behaviors and activities that are acceptable and defines consequences of violations. An advisory policy discusses the senior management's desires for security and compliance within an organization. Most policies are advisory.
agent
Intelligent code objects that perform actions on behalf of a user. They typically take initial instructions from the user and then carry on their activity in an unattended manner for a predetermined period of time, until certain conditions are met, or for an indefinite period.
aggregate functions
SQL functions, such as COUNT(), MIN(), MAX(), SUM(), and AVG(), that can be run against a database to produce an information set.
aggregation
A number of functions that combine records from one or more tables to produce potentially useful information.
alarm
A mechanism that is separate from a motion detector and triggers a deterrent, a repellant, and/or a notification.
Whenever a motion detector registers a significant or meaningful change in the environment, it triggers an alarm.
alarm triggers
Notifications sent to administrators when a specific event occurs.
amplifier
See repeater.
AND
The operation (represented by the ∧ symbol) that checks to see whether two values are both true.
analytic attack
An algebraic manipulation that attempts to reduce the complexity of a cryptographic algorithm. This attack focuses on the logic of the algorithm itself.
annualized loss expectancy (ALE)
The possible yearly cost of all instances of a specific realized threat against a specific asset. The ALE is calculated using the formula ALE = single loss expectancy (SLE) * annualized rate of occurrence (ARO).
annualized rate of occurrence (ARO)
The expected frequency that a specific threat or risk will occur (i.e., become realized) within a single year.
anomaly detection
See behavior-based detection.
APIPA
See automatic private IP addressing.
applet
Code objects sent from a server to a client to perform some action. Applets are self-contained miniature programs that execute independently of the server that sent them.
Application layer
Layer 7 of the Open Systems Interconnection (OSI) model.
" }, { "page_number": 708, "text": "application-level gateway firewall
A firewall that filters traffic based on the Internet service (i.e., application) used to transmit or receive the data. Application-level gateways are known as second-generation firewalls.
assembly language
A higher-level alternative to machine language code. Assembly languages use mnemonics to represent the basic instruction set of a CPU but still require hardware-specific knowledge.
asset
Anything within an environment that should be protected.
The loss or disclosure of an asset \ncould result in an overall security compromise, loss of productivity, reduction in profits, additional \nexpenditures, discontinuation of the organization, and numerous intangible consequences.\nasset valuation\nA dollar value assigned to an asset based on actual cost and nonmonetary \nexpenses, such as costs to develop, maintain, administer, advertise, support, repair, and replace; \nas well as other values, such as public confidence, industry support, productivity enhancement, \nknowledge equity, and ownership benefits.\nasset value (AV)\nA dollar value assigned to an asset based on actual cost and nonmonetary \nexpenses.\nassigning risk\nSee transferring risk.\nassurance\nThe degree of confidence that security needs are satisfied. Assurance must be con-\ntinually maintained, updated, and reverified.\nasymmetric key\nAlgorithms that provide a cryptologic key solution for public key \ncryptosystems.\nasynchronous dynamic password token\nA token device that generates passwords based on \nthe occurrence of an event. An event token requires that the subject press a key on the token and \non the authentication server. This action advances to the next password value.\nasynchronous transfer mode (ATM)\nA cell-switching technology rather than a packet-\nswitching technology like Frame Relay. ATM uses virtual circuits much like Frame Relay, but \nbecause it uses fixed-size frames or cells, it can guarantee throughput. This makes ATM an \nexcellent WAN technology for voice and video conferencing.\natomicity\nOne of the four required characteristics of all database transactions. A database \ntransaction must be an “all or nothing” affair, hence the use of atomic. 
If any part of the transaction fails, the entire transaction must be rolled back as if it never occurred.
attack
The exploitation of a vulnerability by a threat agent.
attacker
Any person who attempts to perform a malicious action against a system.
attenuation
The loss of signal strength and integrity on a cable due to the length of the cable.
attribute
A column within a table of a relational database.
" }, { "page_number": 709, "text": "audit trails
The records created by recording information about events and occurrences into a database or log file. Audit trails are used to reconstruct an event, to extract information about an incident, to prove or disprove culpability, and much more.
auditing
A methodical examination or review of an environment to ensure compliance with regulations and to detect abnormalities, unauthorized occurrences, or outright crimes.
auditor
The person or group responsible for testing and verifying that the security policy is properly implemented and the derived security solutions are adequate.
authentication
The process of verifying or testing that the identity claimed by a subject is valid.
Authentication Header (AH)
An element of IPSec that provides authentication, integrity, and nonrepudiation.
authentication protocols
Protocols used to provide the transport mechanism for logon credentials. They may or may not provide security through traffic encryption.
Authentication Service (AS)
An element of the Kerberos Key Distribution Center (KDC). The AS verifies or rejects the authenticity and timeliness of tickets.
authorization
A process that ensures that the requested activity or object access is possible given the rights and privileges assigned to the authenticated identity (i.e., subject).
automatic private IP addressing (APIPA)
A feature of Windows that assigns an IP address to a system should DHCP address assignment fail.
APIPA assigns each failed DHCP client an IP address within the range of 169.254.0.1 to 169.254.255.254 along with a default Class B subnet mask of 255.255.0.0.
auxiliary alarm system
An additional function that can be added to either local or centralized alarm systems. The purpose of an auxiliary alarm system is to notify local police or fire services when an alarm is triggered.
availability
The assurance that authorized subjects are granted timely and uninterrupted access to objects.
awareness
A form of security teaching that is a prerequisite to training. The goal of awareness is to bring security into the forefront and make it a recognized entity for students/users.
B
badges
Forms of physical identification and/or of electronic access control devices. A badge can be as simple as a name tag indicating whether you are a valid employee or a visitor. Or it can be as complex as a smart card or token device that employs multifactor authentication to verify and prove your identity and provide authentication and authorization to access a facility, specific rooms, or secured workstations. Also referred to as identification cards and security IDs.
" }, { "page_number": 710, "text": "Base+Offset addressing
An addressing scheme that uses a value stored in one of the CPU's registers as the base location from which to begin counting.
The CPU then adds the offset sup-\nplied with the instruction to that base address and retrieves the operand from the computed \nmemory location.\nbaseband\nA communication medium that supports only a single communication signal at a time.\nbaseline\nThe minimum level of security that every system throughout the organization must meet.\nBasic Input/Output System (BIOS)\nThe operating-system-independent primitive instruc-\ntions that a computer needs to start up and load the operating system from disk.\nBasic Rate Interface (BRI)\nAn ISDN service type that provides two B, or data, channels and \none D, or management, channel. Each B channel offers 64Kbps, and the D channel offers 16Kbps.\nbehavior\nIn the context of object-oriented programming terminology and techniques, the \nresults or output from an object after processing a message using a method.\nbehavior-based detection\nAn intrusion discovery mechanism used by IDS. Behavior-based \ndetection finds out about the normal activities and events on your system through watching and \nlearning. Once it has accumulated enough data about normal activity, it can detect abnormal \nand possible malicious activities and events. The primary drawback of a behavior-based IDS is \nthat it produces many false alarms. Also known as statistical intrusion detection, anomaly \ndetection, and heuristics-based detection.\nBell-LaPadula model\nA confidentiality-focused security model based on the state machine \nmodel and employing mandatory access controls and the lattice model.\nbest evidence rule\nA rule that states that when a document is used as evidence in a court pro-\nceeding, the original document must be introduced. 
Copies will not be accepted as evidence unless certain exceptions to the rule apply.
Biba model
An integrity-focused security model based on the state machine model and employing mandatory access controls and the lattice model.
bind variable
A placeholder for SQL literal values, such as numbers or character strings.
biometrics
The use of human physiological or behavioral characteristics as authentication factors for logical access and identification for physical access.
birthday attack
An attack in which the malicious individual seeks to substitute in a digitally signed communication a different message that produces the same message digest, thereby maintaining the validity of the original digital signature; based on the statistical anomaly that in a room with 23 people, the probability of two or more people having the same birthday is greater than 50%.
black box testing
A form of program testing that examines the input and output of a program without focusing on its internal logical structures.
blackout
A complete loss of power.
" }, { "page_number": 711, "text": "block cipher
A cipher that applies the encryption algorithm to an entire message block at the same time.
Transposition ciphers are examples of block ciphers.\nBlowfish\nA block cipher that operates on 64-bit blocks of text and uses variable-length keys \nranging from a relatively insecure 32 bits to an extremely strong 448 bits.\nboot sector\nThe portion of a storage device used to load the operating system and the types \nof viruses that attack that process.\nbot\nAn intelligent agent that continuously crawls a variety of websites retrieving and pro-\ncessing data on behalf of the user.\nbounds\nThe limits to the memory and resources a process can access.\nbreach\nThe occurrence of a security mechanism being bypassed or thwarted by a threat agent.\nbridge\nA network device used to connect networks with different speeds, cable types, or \ntopologies that still use the same protocol. A bridge is a layer 2 device.\nbroadband\nA communication medium that supports multiple communication signals \nsimultaneously.\nbroadcast\nA communications transmission to multiple but unidentified recipients.\nbroadcast address\nA broadcast network address that is used during a Smurf attack.\nbrouter\nA network device that first attempts to route and then defaults to bridging if routing fails.\nbrownout\nA period of prolonged low voltage.\nbrute force attack\nAn attack made against a system to discover the password to a known \nidentity (i.e., username). A brute force attack uses a systematic trial of all possible character \ncombinations to discover an account’s password.\nbuffer overflow\nA vulnerability that can cause a system to crash or allow the user to execute \nshell commands and gain access to the system. 
Buffer overflow vulnerabilities are especially prevalent in code developed rapidly for the Web using CGI or other languages that allow unskilled programmers to quickly create interactive web pages.
business attack
An attack that focuses on illegally obtaining an organization's confidential information.
Business Continuity Planning (BCP)
The assessment of a variety of risks to organizational processes and the creation of policies, plans, and procedures to minimize the impact those risks might have on the organization if they were to occur.
Business Impact Assessment (BIA)
An analysis that identifies the resources that are critical to an organization's ongoing viability and the threats posed to those resources. It also assesses the likelihood that each threat will actually occur and the impact those occurrences will have on the business.

C

cache RAM
A process that takes data from slower devices and temporarily stores it in higher-performance devices when its repeated use is expected.
campus area network (CAN)
A network that spans a college, university, or a multi-building office complex.
capabilities list
A list that maintains a row of security attributes for each controlled object. Although not as flexible as the token approach, capabilities lists generally offer quicker lookups when a subject requests access to an object.
capability list
Each row of an access control matrix is a capability list.
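A capability list can be sketched as a per-subject mapping; the subject and object names below are hypothetical:

```python
# Each subject carries its own list of permitted actions per object,
# i.e., one row of the access control matrix (names are illustrative).
capabilities = {
    "alice": {"payroll.db": {"read", "write"}, "audit.log": {"read"}},
    "bob":   {"payroll.db": {"read"}},
}

def is_permitted(subject: str, obj: str, action: str) -> bool:
    return action in capabilities.get(subject, {}).get(obj, set())

print(is_permitted("alice", "payroll.db", "write"))  # True
print(is_permitted("bob", "payroll.db", "write"))    # False
```

Lookup is fast because the system only consults the requesting subject's own row, rather than scanning per-object attribute lists.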
A capability list is tied to the subject; it lists valid actions that can be taken on each object.
cardinality
The number of rows in a relational database table.
cell suppression
The act of suppressing (or hiding) individual data items inside a database to prevent aggregation or inference attacks.
centralized access control
Method of control in which all authorization verification is performed by a single entity within a system.
centralized alarm system
An alarm system that signals a remote or centralized monitoring station when the alarm is triggered.
certificate authority
An agency that authenticates and distributes digital certificates.
certificate revocation list (CRL)
The list of certificates that have been revoked by a certificate authority before the lifetimes of the certificates have expired.
certificates
Endorsed copies of an individual's public key that verify their identity.
certification
The comprehensive evaluation, made in support of the accreditation process, of the technical and nontechnical security features of an IT system and other safeguards to establish the extent to which a particular design and implementation meets a set of specified security requirements.
chain of evidence
The process by which an object is uniquely identified in a court of law.
Challenge Handshake Authentication Protocol (CHAP)
One of the authentication protocols used over PPP links. CHAP encrypts usernames and passwords.
challenge-response token
A token device that generates passwords or responses based on instructions from the authentication system. The authentication system displays a challenge in the form of a code or pass phrase. This challenge is entered into the token device.
The token generates a response based on the challenge, and then the response is entered into the system for authentication.
change control
See change management.
change control management
See change management.
change management
The means by which changes to an environment are logged and monitored in order to ensure that any change does not lead to reduced or compromised security.
checklist test
A process in which copies of the disaster recovery checklists are distributed to the members of the disaster recovery team for their review.
Children's Online Privacy Protection Act (COPPA)
A law in the United States that places specific demands upon websites that cater to children or knowingly collect information from children.
chosen ciphertext attack
An attack in which the attacker has the ability to decrypt chosen portions of the ciphertext message.
chosen plaintext attack
An attack in which the attacker has the ability to encrypt plaintext messages of their choosing and then analyze the ciphertext output of the encryption algorithm.
CIA Triad
The three essential security principles of confidentiality, integrity, and availability. All three must be properly addressed to establish a secure environment.
cipher
A system that hides the true meaning of a message. Ciphers use a variety of techniques to alter and/or rearrange the characters or words of a message to achieve confidentiality.
Cipher Block Chaining (CBC)
A process in which each block of unencrypted text is XORed with the block of ciphertext immediately preceding it before it is encrypted using the DES algorithm.
Cipher Feedback (CFB)
A mode in which the DES algorithm is used to encrypt the preceding block of ciphertext.
This block is then XORed with the next block of plaintext to produce the next block of ciphertext.
ciphertext
A message that has been encrypted for transmission.
civil laws
Laws that form the bulk of the body of laws in the United States. They are designed to provide for an orderly society and govern matters that are not crimes but require an impartial arbiter to settle disputes between individuals and organizations.
Clark-Wilson model
A model that employs limited interfaces or programs to control and maintain object integrity.
class
In the context of object-oriented programming terminology and techniques, a collection of common methods from a set of objects that defines the behavior of those objects.
classification
A label that is applied to a resource to indicate its sensitivity or value to an organization and therefore designate the level of security necessary to protect that resource.
classification level
Another term for a security label. An assigned importance or value placed on objects and subjects.
clean
1) The act of removing a virus from a system and repairing the damage caused by the virus. 2) The act of removing data from storage media for reuse in the same security environment.
clean power
Nonfluctuating pure power.
clearing
A method of sufficiently deleting media that will be reused in the same secured environment. Also known as overwriting.
click-wrap license agreement
A software agreement in which the contract terms are either written on the software box or included in the software documentation. During the installation process, you are required to click a button indicating that you have read the terms of the agreement and agree to abide by them.
clipping level
A threshold value used in violation analysis auditing.
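The CBC chaining step defined above (XOR each plaintext block with the preceding ciphertext block, then encrypt) can be sketched with a toy XOR function standing in for DES; this is an illustration only, not a secure cipher:

```python
# Cipher Block Chaining with a toy 8-byte XOR "cipher" in place of DES.
BLOCK = 8

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(block, key))

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0
    previous, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK]
        # XOR with the preceding ciphertext block (the IV for block one),
        # then encrypt the result.
        mixed = bytes(b ^ p for b, p in zip(block, previous))
        previous = toy_encrypt(mixed, key)
        out += previous
    return out

c = cbc_encrypt(b"SAMEDATASAMEDATA", b"secrets!", b"initvec8")
print(c[:8] != c[8:])  # True: identical plaintext blocks encrypt differently
```

Because each block's input depends on the previous ciphertext, repeated plaintext blocks do not produce repeated ciphertext, unlike ECB mode.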
Crossing the clipping level \ntriggers recording of relevant event data to an audit log.\nclosed-circuit television (CCTV)\nA security system using video cameras and video recording \ndevices.\nclosed head system\nSee wet pipe system.\nclustering (or key clustering)\nA weakness in cryptography where a plaintext message gener-\nates identical ciphertext messages using the same algorithm but using different keys.\ncoaxial cable\nA cable with a center core of copper wire surrounded by a layer of insulation \nand then by a conductive braided shielding and finally encased in an insulation sheath. Coaxial \ncable is fairly resistant to EMI, has a low cost, and is easy to install.\ncode\nSee cipher.\ncohesive (or cohesiveness)\nAn object is highly cohesive if it can perform a task with little \nor no help from other objects. Highly cohesive objects are not as dependent upon other \nobjects as objects with lower cohesion. Objects with higher cohesion are often better. Highly \ncohesive objects perform tasks alone and have low coupling.\ncognitive password\nA variant of the password authentication factor that asks a series of \nquestions about facts or predefined responses that only the subject should know.\ncold sites\nStandby facilities large enough to handle the processing load of an organization and \nwith appropriate electrical and environmental support systems.\ncollision attack\nSee birthday attack.\ncollusion\nAn agreement between multiple people to perform an unauthorized or illegal action.\ncommercial business/private sector classification\nThe security labels commonly employed \non secure systems used by corporations. 
Common corporate or commercial security labels are confidential, proprietary, private, sensitive, and public.
Committed Information Rate (CIR)
A contracted minimum guaranteed bandwidth allocation for a virtual circuit.
Common Body of Knowledge (CBK)
The areas of information prescribed by (ISC)2 as the source of knowledge for the CISSP exam.
common mode noise
Electromagnetic interference (EMI) noise generated by the difference in power between the hot and ground wires of a power source or operating electrical equipment.
Common Object Request Broker Architecture (CORBA)
An international standard for distributed computing. CORBA enables code operating on a computer to locate resources located elsewhere on the network.
companion virus
A variation of the file infector virus. A companion virus is a self-contained executable file that escapes detection by using a filename similar to, but slightly different from, a legitimate operating system file.
compartmented
A type of MAC environment. Compartmentalized or compartmented environments have no relationship between one security domain and another. To gain access to an object, the subject must have the exact specific clearance for that object's security domain.
compartmented mode
See compartmented security mode.
compartmented mode workstations
A computer system in which all users have the same clearance. The concept of need-to-know is used to control access to sensitive data and the system is able to process data from multiple sensitivity levels at the same time.
compartmented security mode
A security mode in which systems process two or more types of compartmented information.
All system users must have an appropriate clearance to access all information processed by the system but do not necessarily have a need to know all of the information in the system.
compensation access control
A type of access control that provides various options to other existing controls to aid in the enforcement and support of a security policy.
competent
A distinction of evidence that means that the evidence must be obtained legally. Evidence that results from an illegal search would be inadmissible because it is not competent.
compiled languages
A computer language that is converted into machine language before distribution or execution.
compliance checking
The process by which it is ensured that all of the necessary and required elements of a security solution are properly deployed and functioning as expected.
compliance testing
Another common usage of auditing. Verification that a system complies with laws, regulations, baselines, guidelines, standards, and policies is an important part of maintaining security in any environment.
Component Object Model (COM)
Microsoft's standard for the use of components within a process or between processes running on the same system.
compromise
If system security has been broken, the system is considered compromised.
computer architecture
An engineering discipline concerned with the construction of computing systems from the logical level.
computer crime
Any crime that is perpetrated against or with the use of a computer.
Computer Fraud and Abuse Act
A United States law written to exclusively cover computer crimes that cross state boundaries to avoid infringing upon states' rights.
Computer Security Act (CSA) of 1987
A United States law that mandates baseline security requirements for all federal agencies.
concentrator
See repeater.
conclusive evidence
Incontrovertible evidence that overrides all other forms of
evidence.
concurrency
A security mechanism that endeavors to make certain that the information stored in a database is always correct or at least has its integrity and availability protected. Concurrency uses a "lock" feature to allow an authorized user to make changes and then "unlocks" data elements only after all changes are complete.
confidential
1) A government/military classification used for data of a confidential nature. Unauthorized disclosure of confidential data will have noticeable effects and cause damage to national security. This classification is used for all data between secret and sensitive but unclassified classifications. 2) The highest level of commercial business/private sector classification. Used for data that is extremely sensitive and for internal use only. A significant negative impact could occur for the company if confidential data is disclosed.
confidentiality
The assurance that information is protected from unauthorized disclosure and the defined level of secrecy is maintained throughout all subject-object interactions.
configuration management
The process of logging, auditing, and monitoring activities related to security controls and security mechanisms over time. This data is then used to identify agents of change, whether objects, subjects, programs, communication pathways, or even the network itself.
confinement (or confinement property)
The principle that allows a process only to read from and write to certain memory locations and resources. This is an alternate name for the * (star) Security Property of the Bell-LaPadula model.
confusion
The condition in which the relationship between the plaintext and the key is complicated enough that an attacker can't just alter the plaintext and analyze the result in order to determine the key.
consistency
One of the four required characteristics of all database transactions (the other three are atomicity, isolation, and durability).
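The all-or-nothing transaction behavior behind these four characteristics can be observed with a small sketch using SQLite via Python's sqlite3 module; the table and account names are illustrative:

```python
import sqlite3

# Either every statement in a transaction commits, or none do.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts "
    "(name TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'bob'")
except sqlite3.IntegrityError:
    pass  # the CHECK rule was violated, so the transfer was rolled back

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# balances unchanged: {'alice': 100, 'bob': 0}
```

The failed transfer leaves the database in its prior consistent state, which is exactly the guarantee the consistency entry describes.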
All transactions must begin operating in an environment that is consistent with all of the database's rules.
contamination
The result of mixing data with a different classification level and/or need-to-know requirement.
content-dependent access control
A form of access control based on the contents or payload of an object.
context-dependent access control
A form of access control based on the context or surroundings of an object.
continuity
A goal an organization can accomplish by having plans and procedures to help mitigate the effects a disaster has on its continuing operations and to speed the return to normal operations.
contractual license agreement
A written contract between the software vendor and the customer outlining the responsibilities of each.
control
The use of access rules to limit a subject's access to an object.
controls gap
The difference between total risk and residual risk.
Copper Distributed Data Interface (CDDI)
Deployment of FDDI using twisted pair (i.e., copper) wires. Reduces the maximum segment length to 100 meters and is susceptible to interference.
copyright
Law that guarantees the creators of "original works of authorship" protection against the unauthorized duplication of their work.
corrective access control
An access control deployed to restore systems to normal after an unwanted or unauthorized activity has occurred. Examples of corrective access controls include alarms, mantraps, and security policies.
corrective controls
Instructions, procedures, or guidelines used to reverse the effects of an unwanted activity, such as attacks or errors.
countermeasures
Actions taken to patch a vulnerability or secure a system against an attack.
\nCountermeasures can include altering access controls, reconfiguring security settings, installing \nnew security devices or mechanisms, adding or removing services, and so on.\ncoupling\nThe level of interaction between objects. Lower coupling means less interaction. \nLower coupling delivers better software design because objects are more independent. Lower \ncoupling is easier to troubleshoot and update. Objects with low cohesion require lots of assis-\ntance from other objects to perform tasks and have high coupling.\ncovert channel\nThe means by which data can be communicated outside of normal, expected, \nor detectable methods.\ncovert storage channel\nA channel that conveys information by writing data to a common \nstorage area where another process can read it.\ncovert timing channel\nA channel that conveys information by altering the performance of a \nsystem component or modifying a resource’s timing in a predictable manner. This is generally \na more sophisticated method to covertly pass data and is very difficult to detect.\n" }, { "page_number": 718, "text": "Glossary\n673\ncracker\nMalicious users intent on waging an attack against a person or system. Crackers may \nbe motivated by greed, power, or recognition. Their actions can result in stolen property (data, \nideas, etc.), disabled systems, compromised security, negative public opinion, loss of market \nshare, reduced profitability, and lost productivity.\ncreeping privilege(s)\nWhen a user account accumulates privileges over time as job roles and \nassigned tasks change. This can occur because new tasks are added to a user’s job and the \nrelated or necessary privileges are added as well but no privileges or access are ever removed, \neven if related work tasks are no longer associated with or assigned to the user. Creeping priv-\nileges results in excessive privilege.\ncriminal law\nBody of laws that the police and other law enforcement agencies enforce. 
Criminal law contains prohibitions against acts such as murder, assault, robbery, arson, theft, and similar offenses.
critical path analysis
A systematic effort to identify relationships between mission-critical applications, processes, and operations and all of the necessary supporting elements.
criticality prioritization
The prioritization of mission-critical assets and processes during the creation of BCP/DRP.
Crossover Error Rate (CER)
The point at which the False Acceptance Rate (FAR) equals the False Rejection Rate (FRR). This is the point from which performance is measured in order to compare the capabilities of different biometric devices.
cryptanalysis
The study of methods to defeat codes and ciphers.
cryptographic key
The secret data used to drive encryption and decryption processing. Often found on tokens to be used as identification or authentication factors. Cryptographic keys provide the "secret" for all cryptography because all good cryptographic algorithms are publicly available and known.
cryptography
Algorithms applied to data that are designed to ensure confidentiality, integrity, authentication, and nonrepudiation. Symmetric cryptography, however, primarily assures only confidentiality; it does not by itself provide integrity, authentication, or nonrepudiation.
cryptology
The art and science of hiding the meaning of a message from all but the intended recipient.
cryptosystem
System in which a shared secret key or pairs of public and private keys are used by communicating parties to facilitate secure communication.
cryptovariable
Another name for the key used to perform encryption and decryption activities.
custodian
A subject that has been assigned or delegated the day-to-day responsibility of classifying and labeling objects and proper storage and protection of objects.
The custodian is typically the IT staff or the system security administrator.
cyclic redundancy check (CRC)
Similar to a hash total, a value that indicates whether or not a message has been altered or damaged in transit.

D

data circuit-terminating equipment (DCE)
A networking device that performs the actual transmission of data over the Frame Relay as well as establishing and maintaining the virtual circuit for the customer.
data classification
Grouping data under labels for the purpose of applying security controls and access restrictions.
data custodian
The user who is assigned the task of implementing the prescribed protection defined by the security policy and upper management. The data custodian performs any and all activities necessary to provide adequate protection for data and to fulfill the requirements and responsibilities delegated to him from upper management.
Data Definition Language (DDL)
The database programming language that allows for the creation and modification of the database's structure (known as the schema).
data dictionary
Central repository of data elements and their relationships. Stores critical information about data usage, relationships, sources, and formats.
data diddling
The act of changing data.
Data Encryption Standard (DES)
A standard cryptosystem proposed in 1977 for all government communications.
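The alteration check described in the cyclic redundancy check entry can be demonstrated with Python's standard zlib.crc32; the message content is illustrative:

```python
import zlib

# The sender computes a CRC over the message; the receiver recomputes
# it and compares, detecting accidental alteration in transit.
message = b"transfer 100 to account 42"
crc = zlib.crc32(message)

print(zlib.crc32(message) == crc)                        # True: intact
print(zlib.crc32(b"transfer 900 to account 42") == crc)  # False: altered
```

Note that a CRC detects accidental damage only; unlike a keyed cryptographic hash, an attacker who changes the message can simply recompute a matching CRC.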
Many government entities continue to use DES for cryptographic applications today despite the fact that it was superseded by Advanced Encryption Standard (AES) in December 2001.
data extraction
The process of extracting elements of data from a large body of data to construct a meaningful representation or summary of the whole.
data hiding
The process of preventing data from being known by a subject.
Data Link layer
Layer 2 of the OSI model.
Data Manipulation Language (DML)
The database programming language that allows users to interact with the data contained within the schema.
data mart
The storage facility used to secure metadata.
data mining
A technique or tool that allows analysts to comb through data warehouses and look for potential correlated information amid the historical data.
data owner
The person who is responsible for classifying information for placement and protection within the security solution.
data terminal equipment (DTE)
A networking device that acts like a router or a switch and provides the customer's network access to the Frame Relay network.
data warehouse
Large databases used to store large amounts of information from a variety of databases for use in specialized analysis techniques.
database
An electronic filing system for organizing collections of information.
Most databases are organized by files, records, and fields.
database management system (DBMS)
An application that enables the storage, modification, and extraction of information from a database.
database partitioning
The act of dividing a database up into smaller sections or individual databases; often employed to segregate content with varying sensitivity labels.
decentralized access control
System of access control in which authorization verification is performed by various entities located throughout a system.
decision support system (DSS)
An application that analyzes business data and presents it so as to make business decisions easier for users. DSS is considered an informational application more so than an operational application. Often a DSS is employed by knowledge workers (such as help desk or customer support) and by sales services (such as phone operators).
declassification
The process of moving a resource into a lower classification level once its value no longer justifies the security protections provided by a higher level of classification.
decrypt
The process of reversing a cryptographic algorithm that was used to encrypt a message.
dedicated mode
See dedicated security mode.
dedicated security mode
Mode in which the system is authorized to process only a specific classification level at a time. All system users must have clearance and a need to know that information.
deencapsulation
The process of stripping a layer's header and footer from a PDU as it travels up the OSI model layers.
degaussing
The act of using a magnet to return media to its original pristine unused state.
degree
The number of columns in a relational database table.
delegation
In the context of object-oriented programming, the forwarding of a request by an object to another object or delegate. An object delegates if it does not have a method to handle the message.
delta rule
Also known as the learning rule.
It is the feature of neural networks that allows them to learn from experience.
Delphi technique
An anonymous feedback and response process used to arrive at a group consensus.
deluge system
Another form of dry pipe (fire suppression) system that uses larger pipes and therefore a significantly larger volume of water. Deluge systems are inappropriate for environments that contain electronics and computers.
denial of service (DoS)
A type of attack that prevents a system from processing or responding to legitimate traffic or requests for resources and objects. The most common forms of denial of service attacks involve transmitting so many data packets to a server that it cannot process them all. Other forms of denial of service attacks focus on the exploitation of a known fault or vulnerability in an operating system, service, or application.
deny risk
See reject risk.
detective access control
An access control deployed to discover unwanted or unauthorized activity. Examples of detective access controls include security guards, supervising users, incident investigations, and intrusion detection systems (IDSs).
detective control
See detective access control.
deterrent access control
An access control that discourages violations of a security policy.
dictionary attack
An attack against a system designed to discover the password to a known identity (i.e., username).
In a dictionary attack, a script of common passwords and dictionary words is used to attempt to discover an account's password.
differential backup
A type of backup that stores all files that have been modified since the time of the most recent full backup.
Diffie-Hellman algorithm
A key exchange algorithm useful in situations in which two parties might need to communicate with each other but they have no physical means to exchange key material and there is no public key infrastructure in place to facilitate the exchange of secret keys.
diffusion
When a change in the plaintext results in multiple changes spread out throughout the ciphertext.
Digital Millennium Copyright Act
A law that establishes the prohibition of attempts to circumvent copyright protection mechanisms placed on a protected work by the copyright holder and limits the liability of Internet service providers when their circuits are used by criminals violating the copyright law.
digital signature
A method for ensuring a recipient that a message truly came from the claimed sender and that the message was not altered while in transit between the sender and recipient.
Digital Signature Standard (DSS)
A standard that specifies that all federally approved digital signature algorithms must use the SHA-1 hashing function.
direct addressing
A process by which the CPU is provided with the actual address of the memory location to be accessed.
direct evidence
Evidence that proves or disproves a specific act through oral testimony based on information gathered through the witness's five senses.
directive access control
An access control that directs, confines, or controls the actions of subjects to force or encourage compliance with security policy.
directive control
A security tool used to guide the security implementation of an organization.
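The Diffie-Hellman algorithm entry above can be illustrated with deliberately tiny numbers; real deployments use primes of thousands of bits, so these values are purely illustrative:

```python
# Diffie-Hellman key exchange with insecurely small toy parameters.
p, g = 23, 5          # public modulus and generator, agreed in the open
a, b = 6, 15          # each party's private value, never transmitted

A = pow(g, a, p)      # Alice sends A over the open channel
B = pow(g, b, p)      # Bob sends B over the open channel

alice_secret = pow(B, a, p)   # Alice computes (g^b)^a mod p
bob_secret = pow(A, b, p)     # Bob computes (g^a)^b mod p
print(alice_secret == bob_secret)  # True: both derive the same shared key
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from those requires solving the discrete logarithm problem, which is infeasible at real key sizes.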
The goal or objective of directive controls is to cause or promote a desired result.\ndirectory service\nA centralized database of resources available to the network, much like a \ntelephone directory for network services and assets. Users, clients, and processes consult the \ndirectory service to learn where a desired system or resource resides.\nDirect Memory Access (DMA)\nA mechanism that allows devices to exchange data directly \nwith real memory (RAM) without requiring assistance from the CPU.\ndisaster\nAn event that brings great damage, loss, or destruction to a system or environment.\ndisaster recovery plan\nA document that guides the recovery efforts necessary to restore your \nbusiness to normal operations as quickly as possible.\nDisaster Recovery Planning (DRP)\nTerm that describes the actions an organization takes to \nresume normal operations after a disaster interrupts normal activity.\ndiscretionary access control\nA mechanism used to control access to objects. The owner or \ncreator of an object controls and defines the access other subjects have to it.\nDiscretionary Security Property\nProperty that states that the system uses an access control \nmatrix to enforce discretionary access control.\ndistributed access control\nA form of access control in which authorization verification is \nperformed by various entities located throughout a system.\nDistributed Component Object Model (DCOM)\nAn extension of COM to support distrib-\nuted computing. This is Microsoft’s answer to CORBA.\ndistributed data model\nIn a distributed data model, data is stored in more than one database \nbut remains logically connected. The user perceives the database as a single entity, even though \nit comprises numerous parts interconnected over a network. Each field may have numerous \nchildren as well as numerous parents. Thus, the data mapping relationship is many-to-many.\ndistributed denial of service (DDoS)\nAnother form of DoS. 
A distributed denial of service occurs when the attacker compromises several systems to be used as launching platforms against one or more victims. The compromised systems used in the attack are often called slaves or zombies. A DDoS attack results in the victims being flooded with data from numerous sources.
distributed reflective denial of service (DRDoS)
Another form of DoS. DRDoS attacks take advantage of the normal operation mechanisms of key Internet services, such as DNS and router update protocols. DRDoS attacks function by sending numerous update, session, or control packets to various Internet service servers or routers with a spoofed source address of the intended victim. Usually these servers or routers are part of the high-speed, high-volume Internet backbone trunks. What results is a flood of update packets, session acknowledgment responses, or error messages sent to the victim. A DRDoS attack can result in so much traffic that upstream systems are adversely affected by the sheer volume of data focused on the victim.
DNS poisoning
The act of altering or falsifying the information of DNS to route or misdirect legitimate traffic.
documentary evidence
Any written items brought into court to prove a fact at hand. This type of evidence must also be authenticated.
domain
1) A realm of trust or a collection of subjects and objects that share a common security policy. Each domain's access control is maintained independently of other domains' access control. This results in decentralized access control when multiple domains are involved. 2) An area of study for the CISSP exam.
dry pipe system
A fire suppression system that contains compressed air.
Once suppression is \ntriggered, the air escapes, which opens a water valve that in turn causes the pipes to fill and dis-\ncharge water into the environment.\ndue care\nThe steps taken to ensure that assets and employees of an organization have been \nsecured and protected and that upper management has properly evaluated and assumed all \nunmitigated or transferred risks.\ndue diligence\nThe extent to which a reasonable person will endeavor under specific circum-\nstances to avoid harming other people or property.\ndumb cards\nHuman-readable-only card IDs that usually have a photo and written informa-\ntion about the authorized bearer. Dumb cards are for use in environments where automated \ncontrols are infeasible or unavailable but security guards are practical.\ndumpster diving\nThe act of digging through the refuse, remains, or leftovers from an organi-\nzation or operation in order to discover or infer information about the organization.\ndurability\nOne of the four required characteristics of all database transactions (the other three \nare atomicity, consistency, and isolation). The concept that database transactions must be resil-\nient. Once a transaction is committed to the database, it must be preserved. Databases ensure \ndurability through the use of backup mechanisms, such as transaction logs.\ndwell time\nThe length of time a key on the keyboard is pressed. This is an element of the key-\nstroke dynamics biometric factor.\nDynamic Host Configuration Protocol (DHCP)\nA protocol used to assign TCP/IP configura-\ntion settings to systems upon bootup. DHCP uses port 67 for server point-to-point response and \nport 68 for client request broadcast. DHCP supports centralized control and management of \nnetwork addressing.\ndynamic packet-filtering firewalls\nA firewall that enables real-time modification of the fil-\ntering rules based on traffic content. 
Dynamic packet-filtering firewalls are known as fourth-generation firewalls.

dynamic passwords
Passwords that do not remain static for an extended period of time. Dynamic passwords can change on each use or at a regular interval, such as every 30 days.

E

eavesdropping
Another term for sniffing. However, eavesdropping can include more than just capturing and recording network traffic. Eavesdropping also includes recording or listening to audio communications, faxes, radio signals, and so on.

Economic Espionage Act of 1996
A law that states that anyone found guilty of stealing trade secrets from a U.S. corporation with the intention of benefiting a foreign government or agent may be fined up to $500,000 and imprisoned for up to 15 years and that anyone found guilty of stealing trade secrets under other circumstances may be fined up to $250,000 and imprisoned for up to 10 years.

education
A detailed endeavor in which students/users learn much more than they actually need to know to perform their work tasks. Education is most often associated with users pursuing certification or seeking job promotion.

ElGamal
A public key cryptosystem that extends the mathematical principles behind the Diffie-Hellman key exchange algorithm to support the encryption and decryption of messages.

electronic access control (EAC)
A type of smart lock that uses a credential reader, an electromagnet, and a door-closed sensor.

electronically erasable PROM (EEPROM)
A storage system that uses electric voltages delivered to the pins of the chip to force erasure.
EEPROMs can be erased without removal from the computer, giving them much greater flexibility than standard PROM and EPROM chips.

electromagnetic interference (EMI)
A type of electrical noise that can do more than just cause problems with how equipment functions; it can also interfere with the quality of communications, transmissions, and playback.

Electronic Codebook (ECB)
The simplest encryption mode to understand and the least secure. Each time the algorithm processes a 64-bit block, it simply encrypts the block using the chosen secret key. This means that if the algorithm encounters the same block multiple times, it produces the exact same encrypted block.

Electronic Communications Privacy Act (ECPA)
The law that makes it a crime to invade an individual’s electronic privacy. It protects against the monitoring of e-mail and voice mail communications and prevents providers of those services from making unauthorized disclosures of their content.

electronic vaulting
A storage scenario in which database backups are transferred to a remote site in a bulk transfer fashion. The remote location may be a dedicated alternative recovery site (such as a hot site) or simply an offsite location managed within the company or by a contractor for the purpose of maintaining backup data.

elliptic curve cryptography
A new branch of public key cryptography that offers similar security to established public key cryptosystems at reduced key sizes.

elliptic curve group
Each elliptic curve has a corresponding elliptic curve group made up of the points on the elliptic curve along with the point O, located at infinity. Two points within the same elliptic curve group (P and Q) can be added together with an elliptic curve addition algorithm.

employee
Often referred to as the user when discussing IT issues.
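The Electronic Codebook (ECB) entry notes that identical plaintext blocks always encrypt to identical ciphertext blocks. A minimal Python sketch illustrates this pattern leakage; a toy XOR transform stands in for a real 64-bit block cipher (an assumption made purely for illustration, not real encryption):

```python
# Toy demonstration of the ECB weakness: each block is encrypted
# independently, so repeated plaintext blocks yield repeated
# ciphertext blocks. XOR with a fixed key byte stands in for a
# real block cipher; the leak is a property of the mode itself.
BLOCK = 8  # bytes per block, mirroring a 64-bit block size
KEY = 0x5A

def toy_ecb_encrypt(plaintext: bytes) -> list:
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    return [bytes(b ^ KEY for b in block) for block in blocks]

# Three identical 8-byte blocks followed by one different block.
message = b"ATTACK!!" * 3 + b"RETREAT!"
ciphertext = toy_ecb_encrypt(message)
# The first three ciphertext blocks are byte-for-byte identical,
# revealing the repetition in the plaintext to any observer.
print(ciphertext[0] == ciphertext[1] == ciphertext[2])  # True
print(ciphertext[0] == ciphertext[3])                   # False
```

This is why modes such as CBC introduce chaining, so that identical plaintext blocks no longer produce identical ciphertext.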
See also user.

employment agreement
A document that outlines an organization’s rules and restrictions, security policy, and acceptable use and activities policies; details the job description; outlines violations and consequences; and defines the length of time the position is to be filled by the employee.

Encapsulating Security Payload (ESP)
An element of IPSec that provides encryption to protect the confidentiality of transmitted data but can also perform limited authentication.

encapsulation
The process of adding a header and footer to a PDU as it travels down the OSI model layers.

encrypt
The process used to convert a message into ciphertext.

encryption
The art and science of hiding the meaning or intent of a communication from recipients not meant to receive it.

end user
See user.

end-to-end encryption
An encryption algorithm that protects communications between two parties (i.e., a client and a server) and is performed independently of link encryption. An example of this would be the use of Privacy Enhanced Mail (PEM) to pass a message between a sender and a receiver. This protects against an intruder who might be monitoring traffic on the secure side of an encrypted link or traffic sent over an unencrypted link.

enrollment
The process of establishing a new user identity or authentication factor on a system. Secure enrollment requires physical proof of a person’s identity or authentication factor. Generally, if the enrollment process takes longer than two minutes, the identification or authorization mechanism (typically a biometric device) is not approved.

entity
A subject or an object.

erasable PROM (EPROM)
A PROM chip that has a small window through which the illumination of a special ultraviolet light causes the contents of the chip to be erased.
After this process is complete, the end user can burn new information into the EPROM.

erasing
A delete operation against a file, a selection of files, or the entire media. In most cases, the deletion or erasure process removes only the directory or catalog link to the data. The actual data remains on the drive.

Escrowed Encryption Standard
A failed government attempt to create a back door to all encryption solutions. The solution employed the Clipper chip, which used the Skipjack algorithm.

espionage
The malicious act of gathering proprietary, secret, private, sensitive, or confidential information about an organization for the express purpose of disclosing and often selling that data to a competitor or other interested organization (such as a foreign government).

Ethernet
A common shared media LAN technology.

ethical hacking
See penetration testing.

ethics
The rules that govern personal conduct. Several organizations have recognized the need for standard ethics rules, or codes, and have devised guidelines for ethical behavior. These rules are not laws but are minimum standards for professional behavior. They should provide you with a basis for sound, professional, ethical judgment.

evidence
In the context of computer crime, any hardware, software, or data that you can use to prove the identity and actions of an attacker in a court of law.

excessive privilege(s)
More access, privilege, or permission than a user’s assigned work tasks dictate. If a user account is discovered to have excessive privilege, the additional and unnecessary benefits should be immediately curtailed.

exit interview
An aspect of a termination policy. The terminated employee is reminded of their legal responsibilities to prevent disclosure of confidential and sensitive information.

expert opinion
A type of evidence consisting of the opinions and facts offered by an expert.
An expert is someone who is educated in a field and currently works in that field.

expert system
A system that seeks to embody the accumulated knowledge of mankind on a particular subject and apply it in a consistent fashion to future decisions.

exposure
The condition of being exposed to asset loss due to a threat. Exposure involves being susceptible to the exploitation of a vulnerability by a threat agent or event.

exposure factor (EF)
The percentage of loss that an organization would experience if a specific asset were violated by a realized risk.

extranet
A cross between the Internet and an intranet. An extranet is a section of an organization’s network that has been sectioned off so that it acts as an intranet for the private network but also serves information out to the public Internet. Extranets are often used in B2B applications, between customers and suppliers.

F

face scan
An example of a biometric factor, which is a behavioral or physiological characteristic that is unique to a subject. A face scan is a process by which the shape and feature layout of a person’s face is used to establish identity or provide authentication.

fail-secure
The response of a system to a failure so that it protects the security of the assets.

fail-safe
The response of a system to a failure so that it protects human safety.

fail-open
The response of a system to a failure whereby it no longer keeps assets secure.

Fair Cryptosystems
A failed government attempt to create a back door to all encryption solutions. This technology used a segmented key that was divided among several trustees.

False Acceptance Rate (FAR)
Error that occurs when a biometric device is not sensitive enough and an invalid subject is authenticated. Also referred to as a Type 2 error.

False Rejection Rate (FRR)
Error that occurs when a biometric device is too sensitive and a valid subject is not authenticated.
Also referred to as a Type 1 error.

Family Educational Rights and Privacy Act (FERPA)
A specialized privacy bill that affects any educational institution that accepts any form of funding from the federal government (the vast majority of schools). It grants certain privacy rights to students over the age of 18 and the parents of minor students.

fault
A momentary loss of power.

Federal Information Processing Standard 140 (FIPS-140)
FIPS-140 defines the hardware and software requirements for cryptographic modules that the federal government uses.

Federal Sentencing Guidelines
A 1991 law that provides punishment guidelines for breaking federal laws.

fence
A perimeter-defining device. Fences are used to clearly differentiate between areas that are under a specific level of security protection and those that are not. Fencing can include a wide range of components, materials, and construction methods. It can be in the form of stripes painted on the ground, chain link fences, barbed wire, concrete walls, and even invisible perimeters using laser, motion, or heat detectors.

Fiber Distributed Data Interface (FDDI)
A high-speed token-passing technology that employs two rings with traffic flowing in opposite directions. FDDI offers transmission rates of 100Mbps and is often used as a backbone to large enterprise networks.

fiber-optic
A cabling form that transmits light instead of electrical signals. Fiber-optic cable supports throughputs up to 2Gbps and lengths of up to 2 kilometers.

file infector
A virus that infects different types of executable files and triggers when the operating system attempts to execute them. For Windows-based systems, these files end with .EXE and .COM extensions.

financial attack
A crime that is carried out to unlawfully obtain money or services.

fingerprints
The patterns of ridges on the fingers of humans.
Often used as a biometric authentication factor.

firewall
A network device used to filter traffic. A firewall is typically deployed between a private network and a link to the Internet, but it can be deployed between departments within an organization. Firewalls filter traffic based on a defined set of rules.

firmware
Software that is stored in a ROM chip.

Flaw Hypothesis Methodology of Penetration Testing
“The Flaw Hypothesis Methodology is a system analysis and penetration technique where specifications and documentation for the system are analyzed and then flaws in the system are hypothesized. The list of hypothesized flaws is then prioritized on the basis of the estimated probability that a flaw actually exists and, assuming a flaw does exist, on the ease of exploiting it and on the extent of control or compromise it would provide. The prioritized list is used to direct the actual testing of the system.” (Quoted from the NCSC/DOD/NIST Orange Book/TCSEC.)

flight time
The length of time between key presses. This is an element of the keystroke dynamics form of biometrics.

flooding
An attack that involves sending enough traffic to a victim to cause a DoS. Also referred to as a stream attack.

fortress mentality
In a fortress mentality security approach, a single giant master wall is built around the assets like the massive rock walls of a castle fortress. The major flaw in such an approach is that large, massive structures often have minor weaknesses and flaws; are difficult if not impossible to reconfigure, adjust, or move; and are easily seen and avoided by would-be attackers (i.e., they find easier ways into the protected area).

Fourth Amendment
An amendment to the U.S. Constitution that prohibits government agents from searching private property without a warrant and probable cause.
The courts have expanded their interpretation of the Fourth Amendment to include protections against wiretapping and other invasions of privacy.

fraggle
A form of denial of service attack similar to Smurf, but it uses UDP packets instead of ICMP.

fragment
When a network receives a packet larger than its maximum allowable packet size, it breaks it up into two or more fragments. These fragments are each assigned a size (corresponding to the length of the fragment) and an offset (corresponding to the starting location of the fragment).

fragmentation attacks
An attack that exploits vulnerabilities in the fragment reassembly functionality of the TCP/IP protocol stack.

Frame Relay
A shared connection medium that uses packet-switching technology to establish virtual circuits for customers.

frequency analysis
A cryptographic analysis or attack that looks for repetition of letters in an encrypted message and compares that with the statistics of letter usage for a specific language, such as the frequency of the letters E, T, A, O, N, R, I, S, and H in the English language.

full backup
A complete copy of data contained on the protected device on the backup media. Also refers to the process of making a complete copy of data, as in “performing a full backup.”

full-interruption tests
A disaster recovery test that involves actually shutting down operations at the primary site and shifting them to the recovery site.

fun attacks
An attack launched by crackers with few true skills. The main motivation behind fun attacks is the thrill of getting into a system.

G

Gantt chart
A type of bar chart that shows the interrelationships over time between projects and schedules.
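The frequency analysis entry describes counting letter repetitions in ciphertext and matching the ranking against known English letter statistics. A short sketch (the sample ciphertext, a Caesar shift of an English pangram, is a hypothetical example):

```python
from collections import Counter

# Frequency analysis: count how often each letter appears in a
# ciphertext produced by a simple substitution cipher. The most
# frequent ciphertext letters likely stand in for the most frequent
# plaintext letters of the language (E, T, A, O, N, ... in English).
def letter_frequencies(ciphertext):
    counts = Counter(c for c in ciphertext.upper() if c.isalpha())
    return counts.most_common()

# Hypothetical ciphertext: a Caesar shift of 3 applied to
# "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG".
sample = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"
for letter, count in letter_frequencies(sample)[:3]:
    print(letter, count)
# The top letters (R, H) correspond to the frequent plaintext
# letters O and E shifted by 3, hinting at the key.
```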
It provides a graphical illustration of a schedule that helps to plan, coordinate, and track specific tasks in a project.

gate
A controlled exit and entry point in a fence.

gateway
A networking device that connects networks that are using different network protocols.

Government Information Security Reform Act of 2000
An act that amends the United States Code to implement additional information security policies and procedures.

government/military classification
The security labels commonly employed on secure systems used by the military. Military security labels range from highest sensitivity to lowest: top secret, secret, confidential, sensitive but unclassified, and unclassified (top secret, secret, and confidential are collectively known as classified).

Gramm-Leach-Bliley (GLB) Act
A law passed in 1999 that eased the strict governmental barriers between financial institutions. Banks, insurance companies, and credit providers were severely limited in the services they could provide and the information they could share with each other. GLB somewhat relaxed the regulations concerning the services each organization could provide.

granular object control
A very specific and highly detailed level of control over the security settings of an object.

ground
The wire in an electrical circuit that is grounded (that is, connected with the earth).

group
An access control management simplification mechanism similar to a role. Similar users are made members of a group. A group is assigned access to an object; thus, all members of the group are granted the same access to that object. The use of groups greatly simplifies the administrative overhead of managing user access to objects.

grudge attack
An attack usually motivated by a feeling of resentment and carried out to damage an organization or a person. The damage could be in the loss of information or harm to the organization or a person’s reputation.
Often the attacker is a current or former employee or someone who wishes ill will upon an organization.

guideline
A document that offers recommendations on how standards and baselines are implemented. Guidelines outline methodologies, include suggested actions, and are not compulsory.

H

hacker
A technology enthusiast who does not have malicious intent. Many authors and the media often use the term hacker when they are actually discussing issues relating to crackers.

Halon
A fire-suppressant material that converts to toxic gases at 900 degrees Fahrenheit and depletes the ozone layer of the atmosphere; it has therefore usually been replaced by alternative materials.

hand geometry
A type of biometric control that recognizes the physical dimensions of a hand. This includes the width and length of the palm and fingers. It can be a mechanical or image-edge (i.e., visual silhouette) graphical solution.

handshaking
A three-way process utilized by the TCP/IP protocol stack to set up connections between two hosts.

hardware
An actual physical device, such as a hard drive, LAN card, printer, and so on.

hardware segmentation
A technique that implements process isolation at the hardware level by enforcing memory access constraints.

hash
See hash function.

hash function
The process of taking a potentially long message and generating a unique output value derived from the content of the message. This value is commonly referred to as the message digest.

hash total
A checksum used to verify the integrity of a transmission. See also cyclic redundancy check (CRC).

hash value
A number that is generated from a string of text and is substantially smaller than the text itself.
A formula creates a hash value in such a way that it is extremely unlikely that any other text will produce the same hash value.

Hashed Message Authentication Code (HMAC)
An algorithm that implements a partial digital signature—it guarantees the integrity of a message during transmission, but it does not provide for nonrepudiation.

Health Insurance Portability and Accountability Act (HIPAA)
A law passed in 1996 that made numerous changes to the laws governing health insurance and health maintenance organizations (HMOs). Among the provisions of HIPAA are privacy regulations requiring strict security measures for hospitals, physicians, insurance companies, and other organizations that process or store private medical information about individuals.

hearsay evidence
Evidence consisting of statements made to a witness by someone else outside of court. Computer log files that are not authenticated by a system administrator can also be considered hearsay evidence.

heart/pulse pattern
An example of a biometric factor, which is a behavioral or physiological characteristic that is unique to a subject. The heart/pulse pattern of a person is used to establish identity or provide authentication.

heuristics-based detection
See behavior-based detection.

hierarchical
A form of MAC environment. Hierarchical environments relate the various classification labels in an ordered structure from low security to medium security to high security. Each level or classification label in the structure is related. Clearance in a level grants the subject access to objects in that level as well as to all objects in all lower levels but prohibits access to all objects in higher levels.

hierarchical data model
A form of database that combines records and fields that are related in a logical tree structure.
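The hash value and HMAC entries can be sketched with Python’s standard hashlib and hmac modules; the messages and shared key below are hypothetical examples:

```python
import hashlib
import hmac

# A hash value is far smaller than the input text, and even a
# one-character change produces a completely different digest.
d1 = hashlib.sha256(b"Transfer $100 to Alice").hexdigest()
d2 = hashlib.sha256(b"Transfer $900 to Alice").hexdigest()
print(d1 == d2)  # False: the digests differ completely

# An HMAC binds the digest to a shared secret key, guaranteeing
# message integrity in transit. Because sender and receiver hold
# the same key, it cannot prove WHICH of them produced the
# message, hence no nonrepudiation.
key = b"shared-secret-key"  # hypothetical shared secret
tag = hmac.new(key, b"Transfer $100 to Alice", hashlib.sha256).digest()

def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

print(verify(key, b"Transfer $100 to Alice", tag))  # True
print(verify(key, b"Transfer $900 to Alice", tag))  # False: tampering detected
```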
This is done so that each field can have zero, one, or many children but only a single parent. Therefore, the data mapping relationship is one-to-many.

High-Speed Serial Interface (HSSI)
A layer 1 protocol used to connect routers and multiplexers to ATM or Frame Relay connection devices.

High-Level Data Link Control (HDLC)
A layer 2 protocol used to transmit data over synchronous communication lines. HDLC is an ISO standard based on IBM’s SDLC. HDLC supports full-duplex communications, supports both point-to-point and multipoint connections, offers flow control, and includes error detection and correction.

high-level languages
Programming languages that are not machine languages or assembly languages. These languages are not hardware dependent and are more understandable by humans. Such languages must be converted to machine language before or during execution.

hijack attack
An attack in which a malicious user is positioned between a client and server and then interrupts the session and takes it over. Often, the malicious user impersonates the client so they can extract data from the server. The server is unaware that any change in the communication partner has occurred.

honey pot
Individual computers or entire networks created to serve as a snare for intruders. The honey pot looks and acts like a legitimate network, but it is 100 percent fake. Honey pots tempt intruders with unpatched and unprotected security vulnerabilities as well as hosting attractive, tantalizing, but faux data. Honey pots are designed to grab an intruder’s attention and direct them into the restricted playground while keeping them away from the legitimate network and confidential resources.

host-based IDS
An intrusion detection system (IDS) that is installed on a single computer and can monitor the activities on that computer.
A host-based IDS is able to pinpoint the files and processes compromised or employed by a malicious user to perform unauthorized activity.

hostile applet
Any piece of mobile code that attempts to perform unwanted or malicious activities.

hot site
A configuration in which a backup facility is maintained in constant working order, with a full complement of servers, workstations, and communications links ready to assume primary operations responsibilities.

hub
A network device used to connect multiple systems together in a star topology. Hubs repeat inbound traffic over all outbound ports.

hybrid
A type of MAC environment. A hybrid environment combines the hierarchical and compartmentalized concepts so that each hierarchical level may contain numerous subcompartments that are isolated from the rest of the security domain. A subject must have not only the correct clearance but also the need-to-know for the specific compartment in order to have access to the compartmentalized object.

hybrid attack
A form of password attack in which a dictionary attack is first attempted and then a type of brute force attack is performed. The follow-up brute force attack is used to add prefix or suffix characters to passwords from the dictionary in order to discover one-upped constructed passwords, two-upped constructed passwords, and so on.

Hypertext Transfer Protocol (HTTP)
The protocol used to transmit web page elements from a web server to web browsers (over the well-known service TCP/UDP port address 80).

Hypertext Transfer Protocol over Secure Sockets Layer (HTTPS)
A standard that uses port 443 to negotiate encrypted communications sessions between web servers and browser clients.

I

identification
The process by which a subject professes an identity and accountability is initiated.
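The hybrid attack entry describes appending characters to dictionary words after a plain dictionary pass. A minimal candidate-generator sketch (the word list and digit-only suffixes are illustrative assumptions; testing passwords you are not authorized to test is illegal):

```python
from itertools import product
import string

# Hybrid attack candidate generation: try each dictionary word
# as-is first, then brute-force short suffixes to cover "one-upped"
# and "two-upped" constructions such as "password1" or "password99".
def hybrid_candidates(words, max_suffix_len=2, charset=string.digits):
    for word in words:
        yield word  # plain dictionary attack first
        for n in range(1, max_suffix_len + 1):
            for combo in product(charset, repeat=n):
                yield word + "".join(combo)  # brute-force suffix

dictionary = ["password", "dragon"]  # hypothetical word list
candidates = list(hybrid_candidates(dictionary))
# Per word: 1 plain + 10 one-digit + 100 two-digit = 111 candidates.
print(len(candidates))   # 222
print(candidates[:3])    # ['password', 'password0', 'password1']
```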
The identification process can consist of a user providing a username, a logon ID, a PIN, or a smart card, or of a process providing a process ID number.

identification card
A form of physical identification that generally contains a picture of the subject and/or a magnetic strip holding additional information about the subject.

Identity Theft and Assumption Deterrence Act
An act that makes identity theft a crime against the person whose identity was stolen and provides severe criminal penalties (up to a 15-year prison term and/or a $250,000 fine) for anyone found guilty of violating it.

ignore risk
Denying that a risk exists and hoping that by ignoring it, the risk will never be realized.

Internet Message Access Protocol 4 (IMAP4)
A protocol used to pull e-mail messages from an inbox on an e-mail server down to an e-mail client. IMAP is more secure than POP3, uses port 143, and offers the ability to pull headers down from the e-mail server as well as to store and manage messages on the e-mail server without having to download them to the local client first.

immediate addressing
A way of referring to data that is supplied to the CPU as part of an instruction.

impersonation
The assumption of someone’s identity or online account, usually through the mechanisms of spoofing and session replay. An impersonation attack is considered a more active attack than masquerading.

implementation attack
A type of attack that exploits weaknesses in the implementation of a cryptography system. It focuses on exploiting the software code—not just errors and flaws but the methodology employed to program the encryption system.

inappropriate activities
Actions that may take place on a computer or over the IT infrastructure and that may not be actual crimes but are often grounds for internal punishments or termination.
Some types of inappropriate activities include viewing inappropriate content, sexual and racial harassment, waste, and abuse.

incident
The occurrence of a system intrusion.

incremental backups
A backup that stores only those files that have been modified since the time of the most recent full or incremental backup. Also the process of creating such a backup.

indirect addressing
An addressing scheme in which the memory address supplied to the CPU as part of the instruction doesn’t contain the actual value that the CPU is to use as an operand. Instead, the memory address contains another memory address (perhaps located on a different page). The CPU then retrieves the actual operand from that address.

industrial espionage
The act of someone using illegal means to acquire competitive information.

inference
An attack that involves using a combination of several pieces of nonsensitive information to gain access to information that should be classified at a higher level.

inference engine
The second major component of an expert system; it analyzes information in the knowledge base to arrive at the appropriate decision.

information flow model
A model that focuses on the flow of information to ensure that security is maintained and enforced no matter how information flows. Information flow models are based on a state machine model.

information hiding
Placing data and a subject at different security domains for the purpose of hiding the data from that subject.

informative policy
A policy that is designed to provide information or knowledge about a specific subject, such as company goals, mission statements, or how the organization interacts with partners and customers. An informative policy is nonenforceable.

inherit (or inheritance)
In object-oriented programming, inheritance refers to a class having one or more of the same methods from another class.
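The incremental backups entry selects only files modified since the previous backup. A simplified sketch using file modification times (the paths and cutoff are hypothetical; real backup software typically relies on archive bits or backup catalogs rather than raw timestamps):

```python
import os
import time

# Incremental backup selection: walk a directory tree and keep only
# files whose modification time is newer than the last backup time.
def files_changed_since(root, last_backup_epoch):
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > last_backup_epoch:
                changed.append(path)
    return changed

# Hypothetical usage: select everything touched in the last 24 hours.
last_backup = time.time() - 24 * 60 * 60
for path in files_changed_since(".", last_backup):
    print(path)  # these files would be copied to the backup media
```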
So when a class includes one or more methods defined by another class, it is said to have “inherited” them.

initialization vector (IV)
A nonce used by numerous cryptography solutions to increase the strength of encrypted data by increasing the randomness of the input.

inrush
An initial surge of power usually associated with connecting to a power source, whether primary or alternate/secondary.

instance
In object-oriented programming, an instance can be an object, example, or representation of a class.

Integrated Services Digital Network (ISDN)
A digital end-to-end communications mechanism. ISDN was developed by telephone companies to support high-speed digital communications over the same equipment and infrastructure that is used to carry voice communications.

integrity
A state characterized by the assurance that modifications are not made by unauthorized users and authorized users do not make unauthorized modifications.

intellectual property
Intangible assets, such as secret recipes or production techniques.

International Data Encryption Algorithm (IDEA)
A block cipher that was developed in response to complaints about the insufficient key length of the DES algorithm.
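The inherit and instance entries can be illustrated concretely; the class names below are hypothetical:

```python
# Inheritance: SmartCardReader gains ("inherits") the methods of
# AccessControlDevice without redefining them. Each object created
# from a class is an instance of that class.
class AccessControlDevice:
    def authenticate(self, credential):
        return credential == "valid-badge"   # hypothetical check

class SmartCardReader(AccessControlDevice):  # inherits authenticate()
    pass

reader = SmartCardReader()                   # an instance of the class
print(reader.authenticate("valid-badge"))    # True: inherited method
print(isinstance(reader, AccessControlDevice))  # True
```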
IDEA operates on 64-bit blocks of plain-/ciphertext, but it begins its operation with a 128-bit key.

International Organization for Standardization (ISO)
An independent oversight organization that defines and maintains computer, networking, and technology standards, along with more than 13,000 other international standards for business, government, and society.

Internet Key Exchange (IKE)
A protocol that provides for the secure exchange of cryptographic keys between IPSec participants.

Internet Message Access Protocol (IMAP)
A protocol used to transfer e-mail messages from an e-mail server to an e-mail client.

Internet Security Association and Key Management Protocol (ISAKMP)
A protocol that provides background security support services for IPSec.

interpreted languages
Programming languages that are converted to machine language one command at a time at the time of execution.

interrupt (IRQ)
A mechanism used by devices and components in a computer to get the attention of the CPU.

intranet
A private network that is designed to host the same information services found on the Internet.

intrusion
The condition in which a threat agent has gained access to an organization’s infrastructure through the circumvention of security controls and is able to directly imperil assets. Also referred to as penetration.

intrusion detection
A specific form of monitoring both recorded information and real-time events to detect unwanted system access.

intrusion detection system (IDS)
A product that automates the inspection of audit logs and real-time system events.
IDSs are generally used to detect intrusion attempts, but they can also be employed to detect system failures or rate overall performance.

IP header protocol field value
An element in an IP packet header that identifies the protocol used in the IP packet payload (usually this will be 6 for TCP, 17 for UDP, or 1 for ICMP, or any of a number of other valid routing protocol numbers).

IP Payload Compression (IPcomp) protocol
A protocol that allows IPSec users to achieve enhanced performance by compressing packets prior to the encryption operation.

IP probes
An attack technique that uses automated tools to ping each address in a range. Systems that respond to the ping request are logged for further analysis. Addresses that do not produce a response are assumed to be unused and are ignored.

IP Security (IPSec)
A standards-based mechanism for providing encryption for point-to-point TCP/IP traffic.

IP spoofing
The process by which a malicious individual reconfigures their system so that it has the IP address of a trusted system and then attempts to gain access to other external resources.

iris scans
An example of a biometric factor, which is a behavioral or physiological characteristic that is unique to a subject. The colored portion of the eye that surrounds the pupil is used to establish identity or provide authentication.

isolation
A concept that ensures that any behavior will affect only the memory and resources associated with the process.

J

Java
A platform-independent programming language developed by Sun Microsystems.

job description
A detailed document outlining a specific position needed by an organization.
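The IP header protocol field value entry refers to the protocol number stored at byte offset 9 of an IPv4 header. A sketch that extracts it from raw header bytes (the sample header below is a hand-built hypothetical, not a real capture):

```python
import struct

# The protocol field is the 10th byte (offset 9) of an IPv4 header.
# Common values: 1 = ICMP, 6 = TCP, 17 = UDP.
PROTOCOL_NAMES = {1: "ICMP", 6: "TCP", 17: "UDP"}

def ip_protocol(header):
    # "!BBHHHBBH4s4s" unpacks the 20-byte fixed IPv4 header;
    # field index 6 is the protocol number.
    fields = struct.unpack("!BBHHHBBH4s4s", header[:20])
    proto = fields[6]
    return PROTOCOL_NAMES.get(proto, str(proto))

# Hypothetical 20-byte IPv4 header carrying a TCP payload (protocol 6).
sample = bytes([0x45, 0x00, 0x00, 0x28,   # version/IHL, TOS, total length
                0x00, 0x01, 0x00, 0x00,   # identification, flags/frag offset
                0x40, 0x06, 0x00, 0x00,   # TTL=64, protocol=6 (TCP), checksum
                10, 0, 0, 1,              # source address 10.0.0.1
                10, 0, 0, 2])             # destination address 10.0.0.2
print(ip_protocol(sample))  # TCP
```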
\nA job description includes information about security classification, work tasks, and so on.\njob responsibilities\nThe specific work tasks an employee is required to perform on a regular basis.\njob rotation\nA means by which an organization improves its overall security by rotating \nemployees among numerous job positions. Job rotation serves two functions. First, it provides \na type of knowledge redundancy. Second, moving personnel around reduces the risk of fraud, \ndata modification, theft, sabotage, and misuse of information.\nK\nKerchoff’s assumption\nThe idea that all algorithms should be public but all keys should remain \nprivate. Kerchoff’s assumption is held by a large number of cryptologists, but not all of them.\nKerberos\nA ticket based authentication mechanism that employs a trusted third party to pro-\nvide identification and authentication.\nkernel\nThe part of an operating system that always remains resident in memory (so that it can \nrun on demand at any time).\nkernel proxy firewalls\nA firewall that is integrated into an operating system’s core to provide \nmultiple levels of session and packet evaluation. Kernel proxy firewalls are known as fifth-\ngeneration firewalls.\nkey\nA secret value used to encrypt or decrypt messages.\nKey Distribution Center (KDC)\nAn element of the Kerberos authentication system. The KDC \nmaintains all the secret keys of enrolled subjects and objects. 
A KDC is also a COMSEC facility that distributes symmetric crypto keys, especially for government entities.
key escrow system
A cryptographic recovery mechanism by which keys are stored in a database and can be recovered only by authorized key escrow agents in the event of key loss or damage.
keystroke dynamics
A biometric factor that measures how a subject uses a keyboard by analyzing flight time and dwell time.
keystroke monitoring
The act of recording the keystrokes a user performs on a physical keyboard. The act of recording can be visual (such as with a video recorder) or logical/technical (such as with a capturing hardware device or a software program).
keystroke patterns
An example of a biometric factor, which is a behavioral or physiological characteristic that is unique to a subject. The pattern and speed of a person typing a pass phrase are used to establish identity or provide authentication.
knowledge base
A component of an expert system, the knowledge base contains the rules known by an expert system and seeks to codify the knowledge of human experts in a series of "if/then" statements.
knowledge-based detection
An intrusion discovery mechanism used by IDSs and based on a database of known attack signatures. The primary drawback to a knowledge-based IDS is that it is effective only against known attack methods.
known plaintext attack
An attack in which the attacker has a copy of the encrypted message along with the plaintext message used to generate the ciphertext (the copy). This greatly assists the attacker in breaking weaker codes.
KryptoKnight
A ticket-based authentication mechanism similar to Kerberos but based on peer-to-peer authentication.
L
LAN extender
A remote access, multilayer switch used to connect distant networks over WAN links.
This is a strange beast of a device in that it creates WANs, but its marketers steer clear of the term WAN and use only the terms LAN and extended LAN. The idea was to make the terminology easier to understand, and thus the device easier to sell, than a more conventional WAN device grounded in complex concepts and terms.
land attack
A type of DoS. A land attack occurs when the attacker sends numerous SYN packets to a victim, with the SYN packets spoofed to use the same source and destination IP address and port number as the victim's. This causes the victim to think it sent a TCP/IP session-opening packet to itself, which causes a system failure, usually resulting in a freeze, crash, or reboot.
lattice-based access control
A variation of nondiscretionary access controls. Lattice-based access controls define upper and lower bounds of access for every relationship between a subject and object. These boundaries can be arbitrary, but they usually follow the military or corporate security label levels.
layer 1
The Physical layer of the OSI model.
layer 2
The Data Link layer of the OSI model.
layer 3
The Network layer of the OSI model.
layer 4
The Transport layer of the OSI model.
layer 5
The Session layer of the OSI model.
layer 6
The Presentation layer of the OSI model.
layer 7
The Application layer of the OSI model.
Layer 2 Forwarding (L2F)
A protocol developed by Cisco as a mutual authentication tunneling mechanism. L2F does not offer encryption.
Layer 2 Tunneling Protocol (L2TP)
A point-to-point tunnel protocol developed by combining elements from PPTP and L2F.
L2TP lacks a built-in encryption scheme but typically relies upon IPSec as its security mechanism.
layering
The use of multiple security controls in series to provide for maximum effectiveness of security deployment.
learning rule
See delta rule.
licensing
A contract that states how a product is to be used.
lighting
One of the most commonly used forms of perimeter security control. The primary purpose of lighting is to discourage casual intruders, trespassers, prowlers, and would-be thieves who would rather perform their malicious activities in the dark.
link encryption
An encryption technique that protects entire communications circuits by creating a secure tunnel between two points. This is done by using either a hardware or software solution that encrypts all traffic entering one end of the tunnel and decrypts all traffic exiting the other end of the tunnel.
local alarm systems
Alarm systems that broadcast an audible signal that can be easily heard up to 400 feet away. Additionally, local alarm systems must be protected from tampering and disablement, usually by security guards. In order for a local alarm system to be effective, there must be a security team or guards positioned nearby who can respond when the alarm is triggered.
local area network (LAN)
A network that is geographically limited, such as within a single office, building, or city block.
log analysis
A detailed and systematic form of monitoring.
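One minimal form of such analysis can be sketched in a few lines of Python (an illustration only; the log format, event names, and threshold are assumptions, not from the study guide):

```python
from collections import Counter

def flag_failed_logon_bursts(log_lines, threshold=3):
    """Count failed-logon events per account and flag accounts that
    exceed a threshold -- one tiny example of log analysis."""
    failures = Counter()
    for line in log_lines:
        # Hypothetical log format: "<timestamp> <event> <account>"
        parts = line.split()
        if len(parts) == 3 and parts[1] == "LOGON_FAILURE":
            failures[parts[2]] += 1
    return sorted(acct for acct, n in failures.items() if n >= threshold)

log = [
    "09:00:01 LOGON_FAILURE alice",
    "09:00:03 LOGON_FAILURE alice",
    "09:00:05 LOGON_FAILURE alice",
    "09:00:07 LOGON_SUCCESS bob",
    "09:00:09 LOGON_FAILURE carol",
]
print(flag_failed_logon_bursts(log))  # ['alice']
```

Real log analysis tools apply the same idea at scale, across many event types and time windows.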
The logged information is analyzed in detail to look for trends and patterns as well as abnormal, unauthorized, illegal, and policy-violating activities.
logging
The activity of recording information about events or occurrences to a log file or database.
logic bomb
A malicious code object that infects a system and lies dormant until triggered by the occurrence of one or more conditions.
logical access control
A hardware or software mechanism used to manage access to resources and systems and provide protection for them. Logical access controls are the same as technical access controls. Examples of logical or technical access controls include encryption, smart cards, passwords, biometrics, constrained interfaces, access control lists, protocols, firewalls, routers, intrusion detection systems, and clipping levels.
logon credentials
The identity and the authentication factors offered by a subject to establish access.
logon script
A script that runs at the moment of user logon. A logon script is often used to map local drive letters to network shares, to launch programs, or to open links to often-accessed systems.
loopback address
The IP address used to create a software interface that connects to itself via the TCP/IP protocol. The loopback address is handled by software alone. It permits testing of the TCP/IP protocol stack even if network interfaces or their device drivers are missing or damaged.
Low Water-Mark Mandatory Access Control (LOMAC)
A loadable kernel module for Linux designed to protect the integrity of processes and data.
It is an OS security architecture extension or enhancement that provides flexible support for security policies.
M
machine language
A programming language that can be directly executed by a computer.
macro viruses
Viruses that utilize crude technologies to infect documents created in the Microsoft Word environment.
mailbombing
An attack in which sufficient numbers of messages are directed to a single user's inbox or through a specific SMTP server to cause a denial of service.
maintenance
The variety of tasks that are necessary to ensure continued operation in the face of changing operational, data processing, storage, and environmental requirements.
maintenance hooks
Entry points into a system that only the developer of the system knows; also called back doors.
malicious code
Code objects that include a broad range of programmed computer security threats that exploit various network, operating system, software, and physical security vulnerabilities to spread malicious payloads to computer systems.
mandatory access control
An access control mechanism that uses security labels to regulate subject access to objects.
mandatory vacations
A security policy that requires all employees to take vacations annually so their work tasks and privileges can be audited and verified. This often results in easy detection of abuse, fraud, or negligence.
man-in-the-middle attack
A type of attack that occurs when malicious users are able to position themselves between the two endpoints of a communications link.
The client and server are unaware that there is a third party intercepting and facilitating their communication session.
man-made disasters
Disasters caused by humans, including explosions, electrical fires, terrorist acts, power outages, utility failures, hardware/software failures, labor difficulties, theft, and vandalism.
mantrap
A double set of doors that is often protected by a guard. The purpose of a mantrap is to contain a subject until their identity and authentication are verified.
masquerading
Using someone else's security ID to gain entry into a facility or system.
massively parallel processing (MPP)
Technology used to create systems that house hundreds or even thousands of processors, each of which has its own operating system and memory/bus resources.
Master Boot Record (MBR)
The portion of a hard drive or floppy disk that the computer uses to load the operating system during the boot process.
Master Boot Record (MBR) virus
A virus that attacks the MBR. When the system reads the infected MBR, the virus instructs it to read and execute the code stored in an alternate location, thereby loading the entire virus into memory and potentially triggering the delivery of the virus's payload.
maximum tolerable downtime (MTD)
The maximum length of time a business function can be inoperable without causing irreparable harm to the business.
MD2 (Message Digest 2)
A hash algorithm developed by Ronald Rivest in 1989 to provide a secure hash function for 8-bit processors.
MD4
An enhanced version of the MD2 algorithm, released in 1990.
MD4 pads the message to ensure that the message length is 64 bits smaller than a multiple of 512 bits.
MD5
The next version of the MD algorithm, released in 1991. It also processes 512-bit blocks of the message, but it uses four distinct rounds of computation to produce a digest of the same length as the MD2 and MD4 algorithms (128 bits).
mean time to failure (MTTF)
The length of time or number of uses a hardware or media component can endure before its reliability is questionable and it should be replaced.
Media Access Control (MAC) address
A 6-byte address written in hexadecimal. The first three bytes of the address indicate the vendor or manufacturer of the physical network interface. The last three bytes make up a unique number assigned to that interface by the manufacturer. No two devices on the same network can have the same MAC address.
meet-in-the-middle attack
An attack in which the attacker uses a known plaintext message. The plaintext is then encrypted using every possible key (k1), while the equivalent ciphertext is decrypted using all possible keys (k2). When a match is found, the corresponding pair (k1, k2) represents both portions of the double encryption. This type of attack generally takes only double the time necessary to break a single round of encryption (2^(n+1) operations rather than the anticipated 2^n × 2^n = 2^(2n)), offering minimal added protection.
memory
The main memory resources directly available to a system's CPU.
Primary memory normally consists of volatile random access memory (RAM) and is usually the highest-performance storage resource available to a system.
memory card
A device that can store data but cannot process it; often built around some form of flash memory.
memory page
A single chunk of memory that can be moved to and from RAM and the paging file on a hard drive as part of a virtual memory system.
memory-mapped I/O
A technique used to manage input/output between system components and the CPU.
message
The communications to or input for an object (in the context of object-oriented programming terminology and concepts).
message digest (MD)
A summary of a message's content (not unlike a file checksum) produced by a hashing algorithm.
metadata
The results of a data mining operation on a data warehouse.
meta-model
A model of models. Because the spiral model encapsulates a number of iterations of another model (the waterfall model), it is known as a meta-model.
methods
The actions or functions performed on input (messages) to produce output (behaviors) by objects in an object-oriented programming environment.
microcode
A term used to describe software that is stored in a ROM chip.
Also called firmware.
middle management
See security professional.
military and intelligence attacks
Attacks that are launched primarily to obtain secret and restricted information from law enforcement or military and technological research sources.
MIME Object Security Services (MOSS)
A standard that provides authenticity, confidentiality, integrity, and nonrepudiation for e-mail messages.
mitigated
The process by which a risk is reduced or removed.
mitigate risk
See reducing risk.
mobile sites
Non-mainstream alternatives to traditional recovery sites that typically consist of self-contained trailers or other easily relocated units.
module testing
In module testing, each independent or self-contained segment of code for which there exists a distinct and separate specification is tested independently of all other modules. This can also be called component testing. It can be seen as a parent or superclass of unit testing.
modulo
The remainder value left over after a division operation is performed.
MONDEX
A type of electronic payment system and protocol designed to manage cash on smart cards.
monitoring
The activity of manually or programmatically reviewing logged information looking for specific information.
motion detector
A device that senses the occurrence of motion in a specific area.
motion sensor
See motion detector.
multicast
A communications transmission to multiple identified recipients.
multilevel mode
See multilevel security mode.
multilevel security mode
A system that is authorized to process information at more than one level of security even when all system users do not have appropriate clearances or a need to know for all information processed by the system.
multipartite virus
A virus that uses more than one propagation technique in an attempt to penetrate systems that defend against only one method or the other.
multiprocessing
A technology that makes
it possible for a computing system to harness the power of more than one processor to complete the execution of a single application.
multiprogramming
The pseudo-simultaneous execution of two tasks on a single processor coordinated by the operating system for the purpose of increasing operational efficiency. Multiprogramming is considered a relatively obsolete technology and is rarely found in use today except in legacy systems.
multistate
Term used to describe a system that is certified to handle multiple security levels simultaneously by using specialized security mechanisms that are designed to prevent information from crossing between security levels.
multitasking
A system handling two or more tasks simultaneously.
multithreading
A technique that permits multiple concurrent tasks to be performed within a single process without interfering with each other.
Mutual Assistance Agreement (MAA)
An agreement in which two organizations pledge to assist each other in the event of a disaster by sharing computing facilities or other technological resources.
N
natural disaster
A disaster that is not caused by humans, such as earthquakes, mudslides, sinkholes, fires, floods, hurricanes, tornadoes, falling rocks, snow, rainfall, ice, humidity, heat, extreme cold, and so on.
need-to-know
The requirement to have access to, knowledge about, or possession of data or a resource in order to perform specific work tasks. A user must have a need to know in order to gain access to data or resources.
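The two conditions involved, sufficient clearance and a need to know, can be sketched as a combined check (an illustrative model only; the level names and ordering are assumptions, not from the study guide):

```python
# Assumed classification ordering, lowest to highest.
LEVELS = ["unclassified", "confidential", "secret", "top secret"]

def access_granted(clearance, need_to_know, classification):
    """Grant access only if the clearance dominates the object's
    classification AND the subject has a need to know -- both must hold."""
    return (LEVELS.index(clearance) >= LEVELS.index(classification)
            and need_to_know)

# A top-secret clearance without need to know is still denied.
print(access_granted("top secret", False, "secret"))  # False
print(access_granted("secret", True, "secret"))       # True
```

The point of the sketch is that clearance alone is never sufficient; the conjunction is what enforces need-to-know.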
Even if that user has an equal or greater security classification than the requested information, they are denied access if they do not have a need to know.
negligence
Failure to exercise the degree of care considered reasonable under the circumstances, resulting in an unintended injury to another party.
NetSP
A single sign-on product based on KryptoKnight.
Network Address Translation (NAT)
A mechanism for converting the internal nonroutable IP addresses found in packet headers into public IP addresses for transmission over the Internet.
Network layer
Layer 3 of the OSI model.
network-based IDS
An IDS deployed to monitor a network. Network-based IDSs detect attacks or event anomalies through the capture and evaluation of network packets.
neural network
A system set up as a long chain of computational decisions that feed into each other and eventually add up to produce the desired output.
noise
A steady interfering disturbance.
nonce
A random-number variable used in cryptographic software that takes on a new, unique value each time it is used, often derived from a timestamp-based seed value.
nondisclosure agreement (NDA)
A document used to protect the confidential information within an organization from being disclosed by a former employee. When a person signs an NDA, they agree not to disclose any information that is defined as confidential to anyone outside of the organization. Often, violations of an NDA are met with strict penalties.
nondiscretionary access control
An access control mechanism that regulates subject access to objects by using roles or tasks.
noninterference model
A model loosely based on the information flow model.
The noninterference model is concerned with how the actions of a subject at a higher security level affect the system state or the actions of a subject at a lower security level.
nonrepudiation
A feature of a security control or an application that prevents the sender of a message or the subject of an activity or event from denying that the event occurred.
nonvolatile
See nonvolatile storage.
nonvolatile storage
A storage system that does not depend upon the presence of power to maintain its contents, such as magnetic/optical media and nonvolatile RAM (NVRAM).
normalization
The database process that removes redundant data and ensures that all attributes are dependent on the primary key.
NOT
An operation (represented by the ~ or ! symbol) that reverses the value of an input variable. This function operates on only one variable at a time.
O
object
A passive entity that provides information or data to subjects. An object can be a file, a database, a computer, a program, a process, a printer, a storage medium, and so on.
object linking and embedding (OLE)
A Microsoft technology used to link data objects into or from multiple files or sources on a computer.
object-oriented programming (OOP)
A method of programming that uses encapsulated code sets called objects. OOP is best suited for eliminating error propagation and mimicking or modeling the real world.
object-relational database
A relational database combined with an object-oriented programming environment.
one-time pad
An extremely powerful type of substitution cipher that uses a different key for each message.
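As a toy illustration of such a cipher (byte-wise XOR is assumed as the combining operation here; it is one common realization, not something the entry specifies):

```python
import secrets

def otp_encrypt(message: bytes):
    """One-time pad sketch: a fresh random key as long as the message,
    combined byte by byte via XOR. The key must never be reused."""
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same key reverses the encryption.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"attack at dawn")
print(otp_decrypt(ct, key))  # b'attack at dawn'
```

Note that the generated key is exactly as long as the message, which is the defining property of the pad.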
The key length is the same length as the message.
one-time password
A variant of dynamic passwords that is changed every time it is used.
one-upped constructed password
A password that differs by a single character from a word in a dictionary list.
one-way encryption
A mathematical function performed on passwords, messages, CRCs, and so on that creates a cryptographic code that cannot be reversed.
one-way function
A mathematical operation that easily produces output values for each possible combination of inputs but makes it impossible to retrieve the input values. Public key cryptosystems are all based upon some sort of one-way function.
Open Systems Interconnection (OSI) model
A standard model developed to establish a common communication structure or standard for all computer systems.
operational plans
Short-term and highly detailed plans based on the strategic and tactical plans. Operational plans are valid or useful only for a short time. They must be updated often (such as monthly or quarterly) to retain compliance with tactical plans. Operational plans are detailed plans on how to accomplish the various goals of the organization.
operations security triple
The relationship between asset, vulnerability, and threat.
OR
An operation (represented by the ∨ symbol) that checks to see whether at least one of the input values is true.
organizational owner
See senior management.
OSI model
See Open Systems Interconnection (OSI) model.
Output Feedback (OFB)
A mode in which DES XORs plaintext with a seed value. For the first encrypted block, an initialization vector is used to create the seed value. Future seed values are derived by running the DES algorithm on the preceding seed value.
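That seed chaining can be sketched with a toy keystream generator (SHA-256 stands in for DES here, purely as an illustrative assumption; real OFB uses the block cipher itself):

```python
import hashlib

def ofb_keystream(key: bytes, iv: bytes, blocks: int):
    """Toy OFB chain: each seed is produced by running the block
    function on the previous seed; plaintext never enters the chain."""
    seed = iv
    for _ in range(blocks):
        seed = hashlib.sha256(key + seed).digest()
        yield seed

def ofb_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    # XOR the data against enough keystream blocks to cover it;
    # the same call both encrypts and decrypts.
    stream = b"".join(ofb_keystream(key, iv, len(data) // 32 + 1))
    return bytes(d ^ s for d, s in zip(data, stream))

msg = b"output feedback mode demo"
ct = ofb_xor(b"key", b"iv", msg)
print(ofb_xor(b"key", b"iv", ct) == msg)  # True
```

Because the keystream depends only on the key and IV, a flipped ciphertext byte corrupts just the corresponding plaintext byte.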
The major advantage of OFB mode is that transmission errors do not propagate to affect the decryption of future blocks.
overt channel
An obvious, visible, detectable, known method of communicating that is addressed by a security policy and subsequently controlled by logical or technical access controls.
overwriting
See clearing.
owner
The person who has final corporate responsibility for the protection and storage of data. The owner may be liable for negligence if they fail to perform due diligence in establishing and enforcing security policy to protect and sustain sensitive data. The owner is typically the CEO, president, or department head.
P
package
In the context of the Common Criteria for information technology security evaluation, a package is a set of security features that can be added to or removed from a target system.
packet
A portion of a message that contains data and the destination address; also called a datagram.
padded cell
Similar to a honey pot. When an intruder is detected by an IDS, the intruder is transferred to a padded cell. The padded cell has the look and layout of the actual network, but within the padded cell the intruder can neither perform malicious activities nor access any confidential data. A padded cell is a simulated environment that may offer fake data to retain an intruder's interest.
palm geography
An example of a biometric factor, which is a behavioral or physiological characteristic that is unique to a subject. The shape of a person's hand is used to establish identity or provide authentication.
palm scan
See palm topography.
palm topography
An example of a biometric factor, which is a behavioral or physiological characteristic that is unique to a subject. The layout of ridges, creases, and grooves on a person's palm is used to establish identity or provide authentication.
Same as a palm scan and similar to a fingerprint.
parallel run
A type of new system deployment testing in which the new system and the old system are run in parallel.
parallel tests
Testing that involves actually relocating personnel to an alternate recovery site and implementing site activation procedures.
parol evidence rule
A rule that states that when an agreement between parties is put into written form, the written document is assumed to contain all of the terms of the agreement and no verbal agreements may modify the written agreement.
pass phrase
A string of characters usually much longer than a password. Once the pass phrase is entered, the system converts it into a virtual password for use by the authentication process. Pass phrases are often natural language sentences to allow for simplified memorization.
password
A string of characters entered by a subject as an authentication factor.
Password Authentication Protocol (PAP)
A standardized authentication protocol for PPP. PAP transmits usernames and passwords in the clear. PAP offers no form of encryption; it simply provides a means to transport the logon credentials from the client to the authentication server.
password policy
The section of an organization's security policy that dictates the rules, restrictions, and requirements of passwords.
Can also indicate the programmatic controls deployed on a system to improve the strength of passwords.
password restrictions
The rules that define the minimal requirements of passwords, such as length, character composition, and age.
patent
A governmental grant that bestows upon an invention's creator the sole right to make, use, and sell that invention for a set period of time.
pattern-matching detection
See knowledge-based detection.
penetration
See intrusion.
penetration testing
An activity used to test the strength and effectiveness of deployed security measures with an authorized attempted intrusion attack. Penetration testing should be performed only with the consent and knowledge of the management staff.
permanent virtual circuit (PVC)
A predefined virtual circuit that is always available for a Frame Relay customer.
personal identification number (PIN)
A number or code assigned to a person to be used as an identification factor. PINs should be kept secret.
personnel management
An important factor in maintaining operations security. Personnel management is a form of administrative control or administrative management.
phone phreaking
The process of breaking into telephone company computers to place free calls.
physical access control
A physical barrier deployed to prevent direct contact with systems.
\nExamples of physical access controls include guards, fences, motion detectors, locked doors, sealed \nwindows, lights, cable protection, laptop locks, swipe cards, dogs, CCTV, mantraps, and alarms.\nphysical controls for physical security\nSee physical access control.\nPhysical layer\nLayer 1 of the OSI model.\npiggybacking\nThe act of following someone through a secured gate or doorway without \nbeing identified or authorized personally.\nping\nA utility used to troubleshoot a connection to test whether a particular IP address is \naccessible.\nping of death attack\nA type of DoS. A ping of death attack employs an oversized ping packet. \nUsing special tools, an attacker can send numerous oversized ping packets to a victim. In many \ncases, when the victimized system attempts to process the packets, an error occurs causing the \nsystem to freeze, crash, or reboot.\nplain old telephone service (POTS)\nNormal telephone service.\nplaintext\nA message that has not been encrypted.\nplayback attack\nSee replay attack.\nPoint-to-Point Protocol (PPP)\nA full-duplex protocol used for the transmission of TCP/IP \npackets over various non-LAN connections, such as modems, ISDN, VPNs, Frame Relay, and so on. \nPPP is widely supported and is the transport protocol of choice for dial-up Internet connections.\nPoint to Point Tunneling Protocol (PPTP)\nAn enhancement of PPP that creates encrypted tun-\nnels between communication endpoints. PPTP is used on VPNs but is often replaced by L2TP.\npolicy\nSee security policy.\npolyalphabetic substitution\nA cryptographic transformation that encrypts a message using \nletter-by-letter conversion and multiple alphabets from different languages or countries.\npolyinstantiation\nThe event that occurs when two or more rows in the same table appear to \nhave identical primary key elements but contain different data for use at differing classification \nlevels. 
Polyinstantiation is often used as a defense against some types of inference attacks.
polymorphic virus
A virus that modifies its own code as it travels from system to system. The virus's propagation and destruction techniques remain exactly the same, but the signature of the virus is somewhat different each time it infects a new system.
polymorphism
In the context of object-oriented programming terminology and concepts, the characteristic of an object to provide different behaviors based upon the same message and methods owing to variances in external conditions.
port
A connection address within a protocol.
Port Address Translation (PAT)
A mechanism for converting the internal nonroutable IP addresses found in packet headers into public IP addresses and port numbers for transmission over the Internet. PAT supports a many-to-one mapping of internal to external IP addresses by using ports.
port scan
Software used by an intruder to probe all of the active systems on a network and determine what public services are running on each machine.
postmortem review
An analysis and review of an activity after its completion to determine its success and whether processes and procedures need to be improved.
Post Office Protocol, version 3 (POP3)
A protocol used to transfer e-mail messages from an e-mail server to an e-mail client.
preaction system
A combination dry pipe/wet pipe system. The system exists as a dry pipe until the initial stages of a fire (smoke, heat, etc.) are detected, and then the pipes are filled with water. The water is released only after the sprinkler head activation triggers are melted by sufficient heat. If the fire is quenched before the sprinklers are triggered, the pipes can be manually emptied and reset. This also allows for manual intervention to stop the release of water before sprinkler triggering occurs.
Preaction systems are the most appropriate water-based system for environments that include both computers and humans in the same locations.
Presentation layer
Layer 6 of the OSI model.
Pretty Good Privacy (PGP)
A public/private key system that uses the IDEA algorithm to encrypt files and e-mail messages. PGP is not a standard but rather an independently developed product that has wide Internet grassroots support.
preventative access control
An access control deployed to stop an unwanted or unauthorized activity from occurring. Examples of preventative access controls include fences, security policies, security awareness training, and anti-virus software.
preventive access control
See preventative access control.
preventive control
See preventative access control.
primary memory
Storage that normally consists of volatile random access memory (RAM) and is usually the highest-performance storage resource available to a system.
Primary Rate Interface (PRI)
An ISDN service type that provides up to 23 B channels and one D channel. Thus, a full PRI ISDN connection offers 1.544Mbps throughput, the same as a T1 line.
primary storage
The RAM that a computer uses to keep necessary information readily available.
principle of least privilege
An access control philosophy that states that subjects are granted the minimal access possible for the completion of their work tasks.
privacy
An element of confidentiality aimed at preventing personal or sensitive information about an individual or organization from being disclosed.
Privacy Act of 1974
A law that mandates that agencies maintain only records that are necessary for the conduct of their business and destroy those records when they are no longer needed for a legitimate function of government.
It provides a formal procedure for individuals to gain access to records the government maintains about them and to request that incorrect records be amended. The Privacy Act also restricts the way the federal government can deal with private information about individual citizens.
Privacy Enhanced Mail (PEM)
An e-mail encryption mechanism that provides authentication, integrity, confidentiality, and nonrepudiation. PEM is a layer 7 protocol. PEM uses RSA, DES, and X.509.
private
A commercial business/private sector classification used for data of a private or personal nature that is intended for internal use only. A significant negative impact could occur for the company or individuals if private data is disclosed.
private branch exchange (PBX)
A sophisticated telephone system often used by organizations to provide inbound call support, extension-to-extension calling, conference calling, and voice mail. It can be implemented as a stand-alone phone system network or integrated with the IT infrastructure.
private key
A secret value that is used to encrypt or decrypt messages and is kept secret and known only to the user; used in conjunction with a public key in asymmetric cryptography.
privileged entity controls
See privileged operations functions.
privileged mode
The mode designed to give the operating system access to the full range of instructions supported by the CPU.
privileged operations functions
Activities that require special access or privilege to perform within a secured IT environment.
In most cases, these functions are restricted to administrators and system operators.
problem state
The state in which a process is actively executing.
procedure
In the context of security, a detailed step-by-step how-to document describing the exact actions necessary to implement a specific security mechanism, control, or solution.
process isolation
One of the fundamental security procedures put into place during system design. Basically, using process isolation mechanisms (whether part of the operating system or part of the hardware itself) ensures that each process has its own isolated memory space for storage of data and the actual executing application code itself.
processor
The central processing unit in a PC; it handles all functions on the system.
Program Evaluation Review Technique (PERT)
A project scheduling tool. It is a method used to judge the size of a software product in development and calculate the standard deviation (SD) for risk assessment. PERT relates the estimated lowest possible size, the most likely size, and the highest possible size of each component. PERT is used to direct improvements to project management and software coding in order to produce more efficient software. As the capabilities of programming and management improve, the actual produced size of software should be smaller.
programmable read-only memory (PROM)
A ROM chip that does not have its contents "burned in" at the factory as is done with standard ROM chips. Instead, special functionality is installed that allows the end user to burn in the contents of the chip.
proprietary
A form of commercial business/private sector confidential information.
If proprietary data is disclosed, it can have drastic effects on the competitive edge of an organization.
protection profile
From the Common Criteria for information technology security evaluation, the evaluation element in which a subject states its security needs.
protocol
A set of rules and restrictions that define how data is transmitted over a network medium (e.g., twisted-pair cable, wireless transmission, etc.). Protocols make computer-to-computer communications possible.
proximity reader
A passive device, field-powered device, or transponder that detects the presence of authorized personnel and grants them physical entry into a facility. The proximity device is worn or held by the authorized bearer. When they pass a proximity reader, the reader is able to determine who the bearer is and whether they have authorized access.
proxy
A mechanism that copies packets from one network into another. The copy process also changes the source and destination address to protect the identity of the internal or private network.
prudent man rule
Invoked by the Federal Sentencing Guidelines, the rule that requires senior officials to perform their duties with the care that ordinary, prudent people would exercise under similar circumstances.
pseudo-flaws
A technique often used on honey pot systems and on critical resources to emulate well-known operating system vulnerabilities.
public
The lowest level of commercial business/private sector classification. Used for all data that does not fit in one of the higher classifications.
This information is not readily disclosed, but if it is disclosed, it should not have a serious negative impact on the organization.
public IP addresses
IP addresses that are routable over the Internet. Unlike the private addresses defined in RFC 1918, public addresses can be reached directly from the Internet.
public key
A value that is used to encrypt or decrypt messages and is made public to any user and used with a private key in asymmetric cryptography.
public key infrastructure (PKI)
A hierarchy of trust relationships that makes it possible to facilitate communication between parties previously unknown to each other.
purging
The process of erasing media so it can be reused in a less secure environment.
Q
qualitative decision making
A decision making process that takes nonnumerical factors, such as emotions, investor/customer confidence, workforce stability, and other concerns, into account. This type of data often results in categories of prioritization (such as high, medium, and low).
qualitative risk analysis
Scenario-oriented analysis using ranking and grading for exposure ratings and decisions.
quality assurance check
A form of personnel management and project management that oversees the development of a product. QA checks ensure that the product in development is consistent with stated standards, methods of practice, efficiency, and so on.
quantitative decision making
The use of numbers and formulas to reach a decision.
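The distinction between public addresses and the private RFC 1918 ranges can be checked programmatically. The following helper is a minimal sketch using Python's standard `ipaddress` module; the function name `is_rfc1918` is an assumption made for this example, not a term from the text.

```python
import ipaddress

def is_rfc1918(addr: str) -> bool:
    """Return True if addr falls within one of the three RFC 1918
    private ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)."""
    ip = ipaddress.ip_address(addr)
    private_blocks = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]
    return any(ip in net for net in private_blocks)
```

An address such as 192.168.1.10 is private (not routed over the Internet), while 8.8.8.8 is public and routable.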
Options are often expressed in terms of the dollar value to the business.
quantitative risk analysis
A method that assigns real dollar figures to the loss of an asset.
R
radiation monitoring
A specific form of sniffing or eavesdropping that involves the detection, capture, and recording of radio frequency signals and other radiated communication methods, including sound and light.
radio frequency interference (RFI)
A type of noise that is generated by a wide number of common electrical appliances, including fluorescent lights, electrical cables, electric space heaters, computers, elevators, motors, electric magnets, and so on. RFI can affect many of the same systems EMI affects.
RADIUS
See Remote Authentication Dial-In User Service (RADIUS).
random access memory (RAM)
Readable and writable memory that contains information the computer uses during processing. RAM retains its contents only when power is continuously supplied to it.
random access storage
Devices, such as RAM and hard drives, that allow the operating system to request contents from any point within the media.
read-only memory (ROM)
Memory that can be read but cannot be written to.
ready state
The state in which a process is ready to execute but is waiting for its turn on the CPU.
real evidence
Items that can actually be brought into a court of law; also known as object evidence.
real memory
Typically the largest RAM storage resource available to a computer.
It is normally composed of a number of dynamic RAM chips and therefore must be refreshed by the CPU on a periodic basis; also known as main memory or primary memory.
realized risk
The incident, occurrence, or event when a risk becomes a reality and a breach, attack, penetration, or intrusion has occurred that may or may not result in loss, damage, or disclosure of assets.
record
A single row within a table in a relational database.
record retention
The organizational policy that defines what information is maintained and for how long. In most cases, the records in question are audit trails of user activity. This may include file and resource access, logon patterns, e-mail, and the use of privileges.
record sequence checking
Similar to hash total checking, but instead of verifying content integrity, it involves verifying packet or message sequence integrity.
recovery access control
A type of access control that is used to repair or restore resources, functions, and capabilities after a security policy violation.
recovery time objective (RTO)
See maximum tolerable downtime.
reducing risk
The implementation of safeguards and countermeasures. Also referred to as mitigating risk.
reference monitor
A portion of the security kernel that validates user requests against the system's access control mechanisms.
reference profile
The digitally stored sample of a biometric factor.
reference template
See reference profile.
referential integrity
Used to enforce relationships between two tables. One table in the relationship contains a foreign key that corresponds to the primary key of the other table in the relationship.
register
A limited amount of onboard memory in a CPU.
register address
The address of a register, which is a small memory location directly on the CPU.
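The referential integrity entry above can be demonstrated with a small sketch using Python's standard `sqlite3` module. The table and column names are hypothetical; SQLite enforces foreign keys only when the `foreign_keys` pragma is enabled.

```python
import sqlite3

# Referential integrity demo: orders.customer_id is a foreign key that
# must match a primary key in customers.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(id))""")

conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
conn.execute("INSERT INTO orders VALUES (100, 1)")  # valid reference

# Inserting an order that points at a nonexistent customer violates
# referential integrity and is rejected by the database engine.
try:
    conn.execute("INSERT INTO orders VALUES (101, 99)")  # no customer 99
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

The attempted insert of order 101 fails, leaving only the valid order in the table.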
When the CPU needs information from one of those registers to complete an operation, it can simply use the register address (e.g., "register one") to access the information.
registration authority (RA)
A read-only version of a certificate authority that is able to distribute the CRL and perform certificate verification processes but is not able to create new certificates. An RA is used to share the workload of a CA.
regulatory policy
A policy that is required whenever industry or legal standards are applicable to your organization. This policy discusses the regulations that must be followed and outlines the procedures that should be used to elicit compliance.
reject risk
To deny that a risk exists or hope that by ignoring a risk, it will never be realized. It is an unacceptable response to risk. Also referred to as deny risk.
relational database
A database that consists of tables that contain a set of related records.
relationship
The association of information in tables of a relational database.
relevant
Characteristic of evidence that is applicable in determining a fact in a court of law.
Remote Authentication Dial-In User Service (RADIUS)
A service used to centralize the authentication of remote dial-up connections.
remote journaling
Transferring copies of the database transaction logs containing the transactions that occurred since the previous bulk transfer.
remote mirroring
Maintaining a live database server at the backup site. It is the most advanced database backup solution.
repeater
A network device used to amplify signals on network cabling to allow for longer distances between nodes. A repeater can also be called a concentrator or amplifier.
replay attack
An attack in which a malicious user records the traffic between a client and server.
The packets sent from the client to the server are then played back or retransmitted to the server with slight variations of the time stamp and source IP address (i.e., spoofing). In some cases, this allows the malicious user to restart an old communication link with a server. Also referred to as a playback attack.
residual risk
Risk that comprises specific threats to specific assets against which upper management chooses not to implement a safeguard. In other words, residual risk is the risk that management has chosen to accept rather than mitigate.
restricted interface model
A model that uses classification-based restrictions to offer only subject-specific authorized information and functions. One subject at one classification level will see one set of data and have access to one set of functions while another subject at a different classification level will see a different set of data and have access to a different set of functions.
retina scan
A biometric factor based on a physiological characteristic that is unique to a subject: the blood vessel pattern at the back of the eyeball is used to establish identity or provide authentication.
Reverse Address Resolution Protocol (RARP)
A subprotocol of the TCP/IP protocol suite that operates at the Data Link layer (layer 2). RARP is used to discover the IP address of a system by polling using its MAC address.
reverse engineering
Often considered an unethical form of engineering, in which programmers decompile code to understand all the intricate details of its functionality, especially when employed for the purpose of creating a similar, competing, or compatible product.
reverse hash matching
The process of discovering the original message that has been hashed by generating potential messages, hashing them, and comparing their hash value to the original.
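The reverse hash matching process above can be sketched as a simple brute-force search. This minimal example, using Python's standard `hashlib`, recovers a short 4-digit PIN from its SHA-256 digest; the PIN value is purely illustrative.

```python
import hashlib

# Reverse hash matching sketch: hash candidate messages until one
# produces the target digest. Feasible here only because the message
# space (10,000 possible PINs) is tiny.
target = hashlib.sha256(b"7319").hexdigest()

recovered = None
for pin in range(10000):
    candidate = str(pin).zfill(4).encode()
    if hashlib.sha256(candidate).hexdigest() == target:
        recovered = candidate.decode()
        break
```

The search succeeds because every possible message can be enumerated; with large or unstructured message spaces the same approach becomes computationally infeasible.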
When H(M) = H(M′), then M = M′.
revocation
A mechanism that allows a PKI certificate to be canceled, effectively removing a user from the system.
RFC 1918
The public standard that defines public and private IP addresses.
Rijndael block cipher
A block cipher that was selected to replace DES. The Rijndael cipher allows the use of three key strengths: 128 bits, 192 bits, and 256 bits.
risk
The likelihood that any specific threat will exploit a specific vulnerability to cause harm to an asset. Risk is an assessment of probability, possibility, or chance. Risk = threat * vulnerability.
risk analysis
An element of risk management that includes analyzing an environment for risks, evaluating each risk as to its likelihood of occurring and cost of damage, assessing the cost of various countermeasures for each risk, and creating a cost/benefit report for safeguards to present to upper management.
risk management
A detailed process of identifying factors that could damage or disclose data, evaluating those factors in light of data value and countermeasure cost, and implementing cost-effective solutions for mitigating or reducing risk.
risk tolerance
The ability of an organization to absorb the losses associated with realized risks.
Rivest, Shamir, and Adleman (RSA)
A public key encryption algorithm named after Rivest, Shamir, and Adleman, its inventors.
role-based access control
A form of nondiscretionary access controls that employs job function roles to regulate subject access to objects.
root
The administrator level of a system.
rootkit
A specialized software package that allows hackers to gain expanded access to a system.
router
A network device used to control traffic flow on networks. Routers are often used to connect similar networks together and control traffic flow between them.
They can function using statically defined routing tables or employ a dynamic routing system.
RSA
See Rivest, Shamir, and Adleman (RSA).
rule-based access control
A variation of mandatory access controls. A rule-based system uses a set of rules, restrictions, or filters to determine what can and cannot occur on the system, such as granting subject access, performing an action on an object, or accessing a resource. Firewalls, proxies, and routers are common examples of rule-based access control systems.
running key cipher
A form of cryptography in which the key is a designation of a changing source, such as the third page of the New York Times.
running state
The state in which a process is actively executing. This is another name for problem state.
S
S/MIME
See Secure Multipurpose Internet Mail Extensions (S/MIME).
safeguard
Anything that removes a vulnerability or protects against one or more specific threats. Also referred to as a countermeasure.
sag
Momentary low voltage.
salami attack
An attack performed by gathering small amounts of data to construct something of greater value or higher sensitivity.
salt
A random number appended to a password before hashing to increase randomness and ensure uniqueness in the resulting stored hash value.
sampling
A form of data reduction that allows an auditor to quickly determine the important issues or events from an audit trail.
sandbox
A security boundary within which a Java applet executes.
sanitization
Any number of processes that prepares media for destruction. Sanitization is the process that ensures that data cannot be recovered by any means from destroyed or discarded media. Sanitization can also be the actual means by which media is destroyed.
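The salt entry above can be sketched in code. This is a minimal illustration using Python's standard `hashlib` and `os` modules; the function names are assumptions for this example, and PBKDF2 stands in for whatever derivation function a real system would use. Because each password gets its own random salt, identical passwords produce different stored values.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a salted hash; a fresh 16-byte random salt is generated
    when none is supplied. Both salt and digest must be stored."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare."""
    return hash_password(password, salt)[1] == digest
```

Salting defeats precomputed-hash (rainbow table) lookups, since the attacker would need a separate table per salt value.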
Media can be sanitized by purging or degaussing without physically destroying the media.
scanning
Similar to "casing" a neighborhood prior to a burglary, the process by which a potential intruder looks for possible entryways into a system. Scanning can indicate that illegal activity will follow, so it is a good idea to treat scans as incidents and to collect evidence of scanning activity.
scavenging
A form of dumpster diving performed electronically. Online scavenging searches for useful information in the remnants of data left over after processes or tasks are completed. This could include audit trails, log files, memory dumps, variable settings, port mappings, cached data, and so on.
schema
The structure that holds the data that defines or describes a database. The schema is written using a Data Definition Language (DDL).
scripted access
A method to automate the logon process with a script that provides the logon credentials to a system. It is considered a form of single sign-on.
search warrant
A document obtained through the judicial system that allows law enforcement personnel to acquire evidence from a location without first alerting the individual believed to have perpetrated a crime.
secondary evidence
A copy of evidence or an oral description of the contents of best evidence.
secondary memory
Magnetic/optical media and other storage devices that contain data not immediately available to the CPU.
secondary storage
Data repositories that include magnetic and optical media, such as tapes, disks, hard drives, and CD/DVD storage.
second-tier attack
An assault that relies upon information or data gained from eavesdropping or other similar data-gathering techniques. In other words, it is an attack that is launched only after some other attack is completed.
secret
A government/military classification used for data of a secret nature.
Unauthorized disclosure of secret data could cause serious damage to national security.
secure communication protocol
A protocol that uses encryption to provide security for the data transmitted by it.
Secure Electronic Transaction (SET)
A security protocol for the transmission of transactions over the Internet. SET is based on RSA encryption and DES. SET has the support of major credit card companies, such as Visa and MasterCard.
Secure Hash Algorithm (SHA)
A government standard hash function developed by the National Institute of Standards and Technology (NIST) and specified in an official government publication.
Secure HTTP (S-HTTP)
The second major protocol used to provide security on the World Wide Web.
Secure Multipurpose Internet Mail Extensions (S/MIME)
A protocol used to secure the transmission of e-mail and attachments.
Secure Remote Procedure Call (S-RPC)
An authentication service. S-RPC is simply a means to prevent unauthorized execution of code on remote systems.
Secure Shell (SSH)
An end-to-end encryption technique. This suite of programs provides encrypted alternatives to common Internet applications like FTP, Telnet, and rlogin. There are actually two versions of SSH. SSH1 supports the DES, 3DES, IDEA, and Blowfish algorithms.
SSH2 drops support for DES and IDEA but adds support for several other algorithms.
Secure Sockets Layer (SSL)
An encryption protocol developed by Netscape to protect the communications between a web server and a web browser.
security association (SA)
In an IPSec session, the representation of the communication session and process of recording any configuration and status information about the connection.
security ID
A form of physical identification that generally contains a picture of the subject and/or a magnetic strip that contains additional information about a subject.
security kernel
The core set of operating system services that handles all user/application requests for access to system resources.
security label
An assigned classification or sensitivity level used in security models to determine the level of security required to protect an object and prevent unauthorized access.
security management planning
The act of thoroughly and systematically designing procedural and policy documentation to reduce risk and then to maintain risk at an acceptable level for a given environment.
security perimeter
The imaginary boundary that separates the trusted computing base from the rest of the system.
security policy
A document that defines the scope of security needs of an organization, prescribes solutions to manage security issues, and discusses the assets that need protection and the extent to which security solutions should go to provide the necessary protection.
security professional
A trained and experienced network, systems, and security engineer who is responsible for following the directives mandated by senior management.
security role
The part an individual plays in the overall scheme of security implementation and administration within an organization.
security target
The evaluation element from the Common Criteria for information technology security
evaluation in which a vendor states the security features of its product.
senior management
A person or group who is ultimately responsible for the security maintained by an organization and who should be most concerned about the protection of its assets. They must sign off on all policy issues, and they will be held liable for the overall success or failure of a security solution. It is the responsibility of senior management to show prudent due care. Also referred to as organizational owner and upper management.
sensitive
A commercial business/private sector classification used for data that is more sensitive than public data. A negative impact could occur for the company if sensitive data is disclosed.
sensitive but unclassified
A government/military classification used for data of a sensitive or private nature whose disclosure would not cause significant damage.
sensitivity
In regard to biometric devices, the level at which the device is configured for scanning.
separation of duties and responsibilities
A common practice to prevent any single subject from being able to circumvent or disable security mechanisms. By dividing core administration or high-authority responsibilities among several subjects, no one subject has sufficient access to perform significant malicious activities or bypass imposed security controls.
separation of privilege
The principle that builds upon the principle of least privilege. It requires the use of granular access permissions; that is, different permissions for each type of privileged operation. This allows designers to assign some processes rights to perform certain supervisory functions without granting them unrestricted access to the system.
Sequenced Packet Exchange (SPX)
The Transport layer protocol of the IPX/SPX protocol suite from Novell.
sequential storage
Devices that require that you read (or speed past) all of the data physically stored prior to the desired location.
A common example of a sequential storage device is a magnetic tape drive.
Serial Line Internet Protocol (SLIP)
An older technology developed to support TCP/IP communications over asynchronous serial connections, such as serial cables or modem dial-up.
Service Level Agreement (SLA)
A contractual obligation to your clients that requires you to implement sound BCP practices. Also used to assure acceptable levels of service from suppliers for sound BCP practices.
SESAME
A ticket-based authentication mechanism similar to Kerberos.
session hijacking
An attack that occurs when a malicious individual intercepts part of a communication between an authorized user and a resource and then uses a hijacking technique to take over the session and assume the identity of the authorized user.
Session layer
Layer 5 of the OSI model.
shielded twisted-pair (STP)
A twisted-pair wire that includes a metal foil wrapper inside of the outer sheath to provide additional protection from EMI.
shoulder surfing
The act of gathering information from a system by observing the monitor or the use of the keyboard by the operator.
shrink-wrap license agreement
A license written on the outside of software packaging. Such licenses get their name because they commonly include a clause stating that you acknowledge agreement to the terms of the contract simply by breaking the shrink-wrap seal on the package.
signature-based detection
The process used by antivirus software to identify potential virus infections on a system.
signature dynamics
When used as a biometric, the use of the pattern and speed of a person writing their signature to establish identity or provide authentication.
Simple Integrity Axiom (SI Axiom)
An axiom of the Biba model that states that a subject at a specific classification level cannot read data with a lower classification level.
This is often shortened to "no read down."
Simple Key Management for IP (SKIP)
An encryption tool used to protect sessionless datagram protocols.
Simple Mail Transfer Protocol (SMTP)
The primary protocol used to move e-mail messages from clients to servers and from server to server.
Simple Security Property (SS property)
A property of the Bell-LaPadula model that states that a subject at a specific classification level cannot read data with a higher classification level. This is often shortened to "no read up."
simulation tests
A test in which disaster recovery team members are presented with a scenario and asked to develop an appropriate response. Some of these response measures are then tested. This may involve the interruption of noncritical business activities and the use of some operational personnel.
single loss expectancy (SLE)
The cost associated with a single realized risk against a specific asset. The SLE indicates the exact amount of loss an organization would experience if an asset were harmed by a specific threat. SLE = asset value ($) * exposure factor (EF).
Single Sign On (SSO)
A mechanism that allows subjects to authenticate themselves only once to a system. With SSO, once subjects are authenticated, they can freely roam the network and access resources and services without being rechallenged for authentication.
single state
Systems that require the use of policy mechanisms to manage information at different levels. In this type of arrangement, security administrators approve a processor and system to handle only one security level at a time.
single-use passwords
A variant of dynamic passwords that are changed every time they are used.
Skipjack
Associated with the Escrowed Encryption Standard, an algorithm that operates on 64-bit blocks of text. It uses an 80-bit key and supports the same four modes of operation supported by DES.
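The single loss expectancy formula above (SLE = asset value * exposure factor) is simple arithmetic; the figures in this sketch are hypothetical, chosen only to show the calculation.

```python
# SLE worked example with hypothetical figures: a $250,000 asset
# where a given threat would destroy 40% of its value.
asset_value = 250_000       # AV, in dollars
exposure_factor = 0.40      # EF, the fraction of value lost

sle = asset_value * exposure_factor  # single loss expectancy, $100,000
```

A $250,000 asset with a 40% exposure factor yields an SLE of $100,000 per realized occurrence of that threat.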
Skipjack was approved for use by the U.S. government; it provides the cryptographic routines supporting the Clipper and Capstone high-speed encryption chips designed for mainstream commercial use.
smart card
A credit-card-sized ID, badge, or security pass that has a magnetic strip, bar code, or integrated circuit chip embedded in it. Smart cards can contain information about the authorized bearer that can be used for identification and/or authentication purposes.
Smurf attack
A type of DoS. A Smurf attack occurs when an amplifying server or network is used to flood a victim with useless data.
sniffer attack
Any activity that results in a malicious user obtaining information about a network or the traffic over that network. A sniffer is often a packet-capturing program that duplicates the contents of packets traveling over the network medium into a file. Also referred to as a snooping attack.
sniffing
A form of network traffic monitoring. Sniffing often involves the capture or duplication of network traffic for examination, re-creation, and extraction.
snooping attack
See sniffer attack.
social engineering
A skill by which an unknown person gains the trust of someone inside of your organization and convinces them to make a change to the IT system in order to grant the attacker access.
socket
Another name for a port.
software IP encryption (SWIPE)
A layer 3 security protocol for IP. It provides authentication, integrity, and confidentiality using an encapsulation protocol.
spam
The term describing unwanted e-mail, newsgroup, or discussion forum messages.
Spam can be as innocuous as an advertisement from a well-meaning vendor or as malignant as floods of unrequested messages with viruses or Trojan horses attached.
spamming attacks
Sending significant amounts of spam to a system in order to cause a DoS or general irritation, consume storage space, or consume bandwidth and processing capabilities.
spike
Momentary high voltage.
split knowledge
The specific application of the ideas of separation of duties and two-man control into a single solution. The basic idea is that the information or privilege required to perform an operation is divided among multiple users. This ensures that no single person has sufficient privileges to compromise the security of the environment.
spoofing
The act of replacing the valid source and/or destination IP address and node numbers with false ones.
spoofing attack
Any attack that involves spoofed or modified packets.
standards
Documents that define compulsory requirements for the homogenous use of hardware, software, technology, and security controls. They provide a course of action by which technology and procedures are uniformly implemented throughout an organization. Standards are tactical documents that define steps or methods to accomplish the goals and overall direction defined by security policies.
state
A snapshot of a system at a specific instance in time.
state machine model
A system that is designed so that no matter what function is performed, it is always a secure system.
stateful inspection firewall
A firewall that evaluates the state or the context of network traffic.
By examining source and destination addresses, application usage, source of origin, and the relationship between current packets and the previous packets of the same session, stateful inspection firewalls are able to grant a broader range of access for authorized users and activities and actively watch for and block unauthorized users and activities. Stateful inspection firewalls are known as third-generation firewalls.
static packet-filtering firewall
A firewall that filters traffic by examining data from a message header. Usually the rules are concerned with source, destination, and port addresses. Static packet-filtering firewalls are known as first-generation firewalls.
static password
A password that does not change over time or that remains the same for a significant period of time.
static token
A physical means to provide identity, usually not employed as an authentication factor. Examples include a swipe card, a smart card, a floppy disk, a USB RAM dongle, or even something as simple as a key to operate a physical lock.
statistical attack
An attack that exploits statistical weaknesses in a cryptosystem, such as floating-point errors or an inability to produce truly random numbers.
It attempts to find vulnerabilities in the hardware or operating system hosting the cryptography application.
statistical intrusion detection
See behavior-based detection.
stealth virus
A virus that hides itself by actually tampering with the operating system to fool antivirus packages into thinking that everything is functioning normally.
steganography
The act of embedding messages within another message, commonly used within an image or a WAV file.
stop error
The security response of an operating system, such as Windows, when an application performs an illegal operation, such as accessing hardware or modifying/accessing the memory space of another process.
stopped state
The state in which a process is finished or must be terminated. At this point, the operating system can recover all memory and other resources allocated to the process and reuse them for other processes as needed.
strategic plan
A long-term plan that is fairly stable. It defines the organization’s goals, mission, and objectives. A strategic plan is useful for about five years if it is maintained and updated annually. The strategic plan also serves as the planning horizon.
stream attack
A type of DoS. A stream attack occurs when a large number of packets are sent to numerous ports on the victim system using random source and sequence numbers. The processing performed by the victim system attempting to make sense of the data will result in a DoS.
Also referred to as flooding.
stream ciphers
Ciphers that operate on each character or bit of a message (or data stream) one character/bit at a time.
strong password
Password that is resistant to dictionary and brute force attacks.
Structured Query Language (SQL)
The standard language used by relational databases to enter and extract the information stored in them.
structured walk-through
A type of disaster recovery test, often referred to as a “table-top exercise,” in which members of the disaster recovery team gather in a large conference room and role-play a disaster scenario.
subject
An active entity that seeks information about or data from passive objects through the exercise of access. A subject can be a user, a program, a process, a file, a computer, a database, and so on.
subpoena
A court order that compels an individual or organization to surrender evidence or to appear in court.
substitution cipher
Cipher that uses an encryption algorithm to replace each character or bit of the plaintext message with a different character, such as a Caesar cipher.
supervisor state (or supervisory state)
The state in which a process is operating in a privileged, all-access mode.
supervisory mode
Mode in which processes at layer 0 run, which is the ring where the operating system itself resides.
surge
Prolonged high voltage.
SWIPE
See software IP encryption (SWIPE).
switch
A network device that is an intelligent hub because it knows the addresses of the systems connected on each outbound port. Instead of repeating traffic on every outbound port, a switch repeats only traffic out of the port on which the destination is known to exist.
Switches offer greater efficiency for traffic delivery, create separate broadcast and collision domains, and improve the overall throughput of data.
Switched Multimegabit Data Services (SMDS)
A connectionless network communication service. SMDS provides bandwidth on demand. SMDS is a preferred connection mechanism for linking remote LANs that communicate infrequently.
switched virtual circuit (SVC)
A virtual circuit that must be rebuilt each time it is used; similar to a dial-up connection.
semantic integrity mechanisms
A common security feature of a DBMS. This feature ensures that no structural or semantic rules are violated. It also checks that all stored data types are within valid domain ranges, that only logical values exist, and that any and all uniqueness constraints are met.
symmetric key
An algorithm that relies upon a “shared secret” encryption key that is distributed to all members who participate in communications. This key is used by all parties to both encrypt and decrypt messages.
symmetric multiprocessing (SMP)
A type of system in which the processors share not only a common operating system, but also a common data bus and memory resources. In this type of arrangement, it is not normally possible to use more than 16 processors.
SYN flood attack
A type of DoS. A SYN flood attack is waged by not sending the final ACK packet, which breaks the standard three-way handshake used by TCP/IP to initiate communication sessions.
Synchronous Data Link Control (SDLC)
A layer 2 protocol employed by networks with dedicated or leased lines. SDLC was developed by IBM for remote communications with SNA systems. SDLC is a bit-oriented synchronous protocol.
synchronous dynamic password token
Tokens used in a token device that generates passwords at fixed time intervals.
Time interval tokens require that the clock of the authentication server and the token device be synchronized. The generated password is entered by the subject along with a PIN, pass phrase, or password.
system call
A process by which an object in a less-trusted protection ring requests access to resources or functionality by objects in more-trusted protection rings.
system high mode
See system-high security mode.
system-high security mode
Mode in which systems are authorized to process only information that all system users are cleared to read and have a valid need to know. Systems running in this mode are not trusted to maintain separation between security levels, and all information processed by these systems must be handled as if it were classified at the same level as the most highly classified information processed by the system.
T
table
The main building block of a relational database; also known as a relation.
TACACS
See Terminal Access Controller Access Control System (TACACS).
tactical plan
A midterm plan developed to provide more details on accomplishing the goals set forth in the strategic plan. A tactical plan is typically useful for about a year. It often prescribes and schedules the tasks necessary to accomplish organizational goals.
Take-Grant model
A model that employs a directed graph to dictate how rights can be passed from one subject to another or from a subject to an object. Simply put, a subject with the grant right can grant another subject or another object any other right they possess. Likewise, a subject with the take right can take a right from another subject.
task-based
An access control methodology in which access is granted based on work tasks or operations.
TCP wrapper
An application that can serve as a basic firewall by restricting access based on user IDs or system IDs.
teardrop attack
A type of DoS.
A teardrop attack occurs when an attacker exploits a bug in operating systems. The bug exists in the routines used to reassemble fragmented packets. An attacker sends numerous specially formatted fragmented packets to the victim, which causes the system to freeze or crash.
technical access control
The hardware or software mechanisms used to manage access to resources and systems and provide protection for those resources and systems. Examples of logical or technical access controls include encryption, smart cards, passwords, biometrics, constrained interfaces, access control lists, protocols, firewalls, routers, IDSs, and clipping levels. The same as logical access control.
technical physical security controls
Security controls that use technology to implement some form of physical security, including intrusion detection systems, alarms, CCTV, monitoring, HVAC, power supplies, and fire detection and suppression.
TEMPEST
The study and control of electronic signals produced by various types of electronic hardware, such as computers, televisions, phones, and so on. Its primary goal is to prevent EM and RF radiation from leaving a strictly defined area so as to eliminate the possibility of external radiation monitoring, eavesdropping, and signal sniffing.
Terminal Access Controller Access Control System (TACACS)
An alternative to RADIUS. TACACS is available in three versions: original TACACS, XTACACS (eXtended TACACS), and TACACS+. TACACS integrates the authentication and authorization processes. XTACACS keeps the authentication, authorization, and accounting processes separate.
TACACS+ improves XTACACS by adding two-factor authentication.
terrorist attacks
Attacks that differ from military and intelligence attacks in that the purpose is to disrupt normal life, whereas a military or intelligence attack is designed to extract secret information.
test data method
A form of program testing that examines the extent of the system testing to locate untested program logic.
testimonial evidence
Evidence that consists of the testimony of a witness, either verbal testimony in court or written testimony in a recorded deposition.
thicknet
See 10Base5.
thin client
A term used to describe a workstation that has little or no local processing or storage capacity. A thin client is used to connect to and operate a remote system.
thinnet
See 10Base2.
threat
A potential occurrence that may cause an undesirable or unwanted outcome on an organization or to a specific asset.
threat agents
People, programs, hardware, or systems that intentionally exploit vulnerabilities.
threat events
Accidental exploitations of vulnerabilities.
throughput rate
The rate at which a biometric device can scan and authenticate subjects. A rate of about six seconds or faster is required for general acceptance of a specific biometric control.
ticket
An electronic authentication factor used by the Kerberos authentication system.
Ticket Granting Service (TGS)
An element of the Kerberos authentication system. The TGS manages the assignment and expiration of tickets.
Tickets are used by subjects to gain access to objects.
time-of-check (TOC)
The time at which a subject checks on the status of an object.
time-of-check-to-time-of-use (TOCTTOU)
A timing vulnerability that occurs when a program checks access permissions too far in advance of a resource request.
time-of-use (TOU)
The time at which the decision is made by a subject to access an object.
time slice
A single chunk or division of processing time.
token
See token device.
token device
A password-generating device that subjects must carry with them. Token devices are a form of a “something you have” (Type 2) authentication factor.
token ring
A token-passing LAN technology.
top secret
The highest level of government/military classification. Unauthorized disclosure of top secret data will cause exceptionally grave damage to national security.
topology
The physical layout of network devices and connective cabling. The common network topologies are ring, bus, star, and mesh.
total risk
The amount of risk an organization would face if no safeguards were implemented. Threats * vulnerabilities * asset value = total risk.
trade secret
Intellectual property that is absolutely critical to a business and would cause significant damage if it were disclosed to competitors and/or the public.
trademark
A registered word, slogan, or logo used to identify a company and its products or services.
traffic analysis
A form of monitoring in which the flow of packets rather than the actual content of packets is examined. Also referred to as trend analysis.
training
The task of teaching employees to perform their work tasks and to comply with the security policy.
All new employees require some level of training so they will be able to properly comply with all standards, guidelines, and procedures mandated by the security policy.
transferring risk
Placing the cost of loss from a realized risk onto another entity or organization, such as purchasing insurance. Also referred to as assigning risk.
transient
A short duration of line noise disturbance.
Transmission Control Protocol (TCP)
A connection-oriented protocol located at layer 4 of the OSI model.
transmission error correction
A capability built into connection- or session-oriented protocols and services. If it is determined that a message, in whole or in part, was corrupted, altered, or lost, a request can be made for the source to resend all or part of the message.
transmission logging
A form of auditing focused on communications. Transmission logging records the details about source, destination, time stamps, identification codes, transmission status, number of packets, size of message, and so on.
transparency
A characteristic of a service, security control, or access mechanism that is unseen by users. Transparency is often a desirable feature for security controls.
Transport layer
Layer 4 of the OSI model.
transport mode
A mode of IPSec when used in a VPN.
In transport mode, the IP packet data is encrypted but the header of the packet is not.
transposition cipher
Cipher that uses an encryption algorithm to rearrange the letters of a plaintext message to form the ciphertext message.
trap door
Undocumented command sequence that allows software developers to bypass normal access restrictions.
traverse mode noise
EMI noise generated by the difference in power between the hot and neutral wires of a power source or operating electrical equipment.
trend analysis
See traffic analysis.
Triple DES (3DES)
A standard that uses three iterations of DES with two or three different keys to increase the effective key strength to 112 or 168 bits.
Trojan horse
A malicious code object that appears to be a benevolent program, such as a game or simple utility, that performs the “cover” functions as advertised but also carries an unknown payload, such as a virus.
trust
A security bridge established to share resources from one domain to another. A trust is established between two domains to allow users from one domain to access resources in another. Trusts can be one-way only or they can be two-way.
trusted computing base (TCB)
The combination of hardware, software, and controls that form a trusted base that enforces your security policy.
trusted path
Secure channel used by the TCB to communicate with the rest of the system.
trusted recovery process
On a secured system, a process that ensures the system always returns to a secure state after an error, failure, or reboot.
trusted system
A secured computer system.
tunnel mode
A mode of IPSec when used in a VPN.
In tunnel mode, the entire IP packet is encrypted and a new header is added to the packet to govern transmission through the tunnel.
tunneling
A network communications process that protects the contents of protocol packets by encapsulating them in packets of another protocol.
turnstile
A form of gate that prevents more than one person at a time from gaining entry and often restricts movement in one direction.
twisted-pair
See 10Base-T.
two-factor authentication
Authentication that requires two factors.
Type 1 authentication factor
Something you know, such as a password, personal identification number (PIN), combination lock, pass phrase, mother’s maiden name, and favorite color.
Type 2 authentication factor
Something you have, such as a smart card, ATM card, token device, and memory card.
Type 3 authentication factor
Something you are, such as fingerprints, voice print, retina pattern, iris pattern, face shape, palm topology, and hand geometry.
Type 1 error
See False Rejection Rate (FRR).
Type 2 error
See False Acceptance Rate (FAR).
U
unclassified
The lowest level of government/military classification. Used for data that is neither sensitive nor classified. Disclosure of unclassified data does not compromise confidentiality, nor does it cause any noticeable damage.
unicast
A communications transmission to a single identified recipient.
Uniform Computer Information Transactions Act (UCITA)
A uniform model act designed for adoption by each of the 50 states to provide a common framework for the conduct of computer-related business transactions.
uninterruptible power supply (UPS)
A type of self-charging battery that can be used to supply consistent clean power to sensitive equipment.
A UPS functions basically by taking power in from the wall outlet, storing it in a battery, pulling power out of the battery, and then feeding that power to whatever devices are connected to it. By directing current through its battery, it is able to maintain a consistent clean power supply.
unit testing
A method of testing software. Each unit of code is tested independently to discover any errors or omissions and to ensure that it functions properly. Unit testing should be performed by the development staff.
unshielded twisted-pair (UTP)
A twisted-pair wire that does not include additional EMI protection. Most twisted-pair wiring is UTP.
upper management
See senior management.
USA Patriot Act of 2001
An act implemented after the September 11, 2001 terrorist attacks. It greatly broadened the powers of law enforcement organizations and intelligence agencies across a number of areas, including the monitoring of electronic communications.
user
Any person who has access to the secured system. A user’s access is tied to their work tasks and is limited so they have only enough access to perform the tasks necessary for their job position (i.e., principle of least privilege). Also referred to as end user and employee.
User Datagram Protocol (UDP)
A connectionless protocol located at layer 4 of the OSI model.
user mode
The basic mode used by the CPU when executing user applications.
V
Vernam cipher
A device that implements a 26-character modulo 26 substitution cipher. It functions as a one-time pad.
view
A client interface used to interact with a database. The view limits what clients can see and what functions they can perform.
Vigenere cipher
A polyalphabetic substitution cipher.
violation analysis
A form of auditing that uses clipping levels.
virtual machine
A software simulation of a computer within which a process executes.
Each virtual machine has its own memory address space and communication between virtual machines is securely controlled.
virtual memory
A special type of secondary memory that is managed by the operating system in such a manner that it appears to be real memory.
virtual private network (VPN)
A network connection established between two systems over an existing private or public network. A VPN provides confidentiality and integrity for network traffic through the use of encryption.
virtual private network (VPN) protocol
The protocols, such as PPTP, L2TP, and IPSec, that are used to create VPNs.
virus
The oldest form of malicious code objects that plague cyberspace. Once they are in a system, they attach themselves to legitimate operating system and user files and applications and normally perform some sort of undesirable action, ranging from the somewhat innocuous display of an annoying message on the screen to the more malicious destruction of the entire local file system.
Voice over IP (VoIP)
A network service that provides voice communication services by transporting the voice traffic as network packets over an IP network.
voice pattern
An example of a biometric factor, which is a behavioral or physiological characteristic that is unique to a subject. The speech, tone, modulation, and pitch patterns of a person’s voice are used to establish identity or provide authentication.
volatile
See volatile storage.
volatile storage
A storage medium, such as RAM, that loses its contents when power is removed from the resource.
voluntarily surrender
The act of willingly handing over evidence.
vulnerability
The absence or weakness of a safeguard or countermeasure.
In other words, a vulnerability is the existence of a flaw, loophole, oversight, error, limitation, frailty, or susceptibility in the IT infrastructure or any other aspect of an organization.
vulnerability scan
A test performed on a system to find weaknesses in the security infrastructure.
vulnerability scanner
A tool used to test a system for known security vulnerabilities and weaknesses. Vulnerability scanners are used to generate reports that indicate the areas or aspects of the system that need to be managed to improve security.
W
wait state
The state in which a process is ready to execute but is waiting for an operation such as keyboard input, printing, or file writing to complete.
war dialing
The act of using a modem to search for a system that will accept inbound connection attempts.
warm site
A middle ground between hot sites and cold sites for disaster recovery specialists. A warm site always contains the equipment and data circuits necessary to rapidly establish operations but does not typically contain copies of the client’s data.
warning banners
Messages used to inform would-be intruders or attempted security policy violators that their intended activities are restricted and that any further activities will be audited and monitored. A warning banner is basically an electronic equivalent of a no trespassing sign.
well-known ports
The first 1,024 ports of TCP and UDP. They are usually assigned to commonly used services and applications.
wet pipe system
A fire suppression system that is always full of water. Water discharges immediately when triggered by a fire or smoke. Also known as a closed head system.
white box testing
A form of program testing that examines the internal logical structures of a program.
wide area network (WAN)
A network or a network of LANs that is geographically diverse.
Often dedicated leased lines are used to establish connections between distant components.
WinNuke attack
A type of DoS. A WinNuke attack is a specialized assault against Windows 95 systems. Out-of-band TCP data is sent to a victim’s system, which causes the OS to freeze.
Wired Equivalent Privacy (WEP)
A protocol that provides both 40- and 128-bit encryption options to protect communications within the wireless LAN.
Wireless Application Protocol (WAP)
A protocol used by portable devices like cell phones and PDAs to support Internet connectivity via your telco or carrier provider.
work function or work factor
A way of measuring the strength of a cryptography system by measuring the effort in terms of cost and/or time. Usually the time and effort required to perform a complete brute force attack against an encryption system is what the work function rating represents. The security and protection offered by a cryptosystem is directly proportional to the value of the work function/factor.
worm
A form of malicious code that is self-replicating but is not designed to impose direct harm on host systems. The primary purpose of a worm is to replicate itself to other systems and gather information. Worms are usually very prolific and often cause a denial of service due to their consumption of system resources and network bandwidth in their attempt to self-replicate.
X
X.25
An older WAN protocol that uses packet switching to provide end-to-end connections over a shared network medium.
XOR
A function that returns a true value when only one of the input values is true. If both values are false or both values are true, the output of the XOR function is false.
Z
zero knowledge proof
A concept of communication whereby a specific type of information is exchanged but no real data is exchanged.
Great examples of this idea are digital signatures and digital certificates.
Index
Note to the Reader: Throughout this index boldfaced page numbers indicate primary discussions of a topic. Italicized page numbers indicate illustrations.
A
A1 (verified protection) systems, 426
abnormal activities, 614, 660
abstraction
    defined, 660
    for efficiency, 160
    in object-oriented programming, 392
    in security control architecture, 246
abuse in voice communications, 137–138
acceptable use, 29, 133, 184
acceptance testing, 660
accepting risk, 195
    in business continuity planning, 525
    defined, 660
access control
    account administration, 29–30
    auditing and accountability, 8–9
    compensations, 58
    defined, 660
    e-mail, 133
    in European Union privacy law, 590
    exam essentials for, 34–35
    identification and authentication techniques, 5–7
        biometrics, 13–18, 16–17
        passwords, 10–13
        SSO, 20–23
        tokens, 18–20
    implementations, 27–28
    in layered environment, 5
    monitoring, 30
    overview, 2–5
    review questions, 36–41
    rights and permissions in, 30–32, 33
    in security models, 404–405
    summary, 32–33
    techniques, 23
        DACs, 23
        lattice-based access control, 26, 26
        mandatory access controls, 24–25
        nondiscretionary access controls, 24
        role-based access controls, 25–26
access control lists (ACLs), 4, 23, 399, 660
access control matrices, 8, 399–400, 661
access logs, 638
access tracking, 661
accessibility in physical security, 630
accountability
    auditing in, 8–9, 479
    authentication techniques, 6–7
    authorization in, 7–8
    in computer design, 394–395
    defined, 661
    identification in, 5–6
    in monitoring, 44
    records on, 458
    in security management, 159
accounts
    administering, 29–30
    creating, 29
    locking out, 12, 52, 661
    maintaining, 29–30
    monitoring, 30
    periodic reviews of, 453
accreditation
    defined, 661
    in system evaluation, 432–434
ACID model, 219–220, 661
ACK packets, 53–54, 91, 271
ACLs (access control lists), 4, 23, 399, 660
ACLU (American Civil Liberties Union), 585
acting phase in IDEAL model, 240
active content
    defined, 661
    malicious code in, 267
ActiveX controls, 214, 661
Address Resolution Protocol (ARP)
    in Data Link layer, 74–75, 92
    defined, 661
    spoofing with, 141
addressing
    defined, 661
    memory, 384–385
Adleman, Leonard, 337
administrative access controls, 4, 661
administrative law, 574, 661
administrative management, 450
    account management, 29–30
    exam essentials for, 467–469
    operations security, 450
        antivirus management, 451
        backup maintenance, 452–453
        configuration and change control management, 455–456
        due care and due diligence, 456–457
        illegal activities, 457
        legal requirements, 457
        need to know and principle of least privileged, 453
        operational and life cycle assurance, 452
        operations controls, 462–464
        privacy and protection, 457
        privileged operations functions, 454
        record retention, 458
        security control types, 461
        sensitive information and media, 458–461
        trusted recovery, 455
        workstation/location changes, 453
    personnel controls, 464–465
    physical security controls, 629, 661
    review questions, 470–475
    summary, 466–467
admissible evidence, 591, 662
Advanced Encryption Standard (AES), 316, 320–322, 662
advisory policies, 183, 662
agents, 212–213, 662
aggregate functions, 223, 662
aggregation in databases, 223–224, 662
AHs (Authentication Headers) in IPSec
    defined, 664
    purpose of, 103–104, 356
alarm triggers, 478, 662
alarms, 635–636, 662
ALE (annualized loss expectancy)
    defined, 662
    in impact assessment, 518
    in resource prioritization, 519
    in risk management, 191
alternate sites
    in business continuity planning, 521
    in recovery strategy, 547–550
American Civil Liberties Union (ACLU), 585
American Standard Code for Information Interchange (ASCII), 77
amplifiers, 100
analog communications, 85
analytic attacks
    on algorithms, 359
    defined, 662
AND operations, 300, 662
annualized loss expectancy (ALE)
    defined, 662
    in impact assessment, 518
    in resource prioritization, 519
    in risk management, 191
annualized rate of occurrence (ARO)
    defined, 662
    in impact assessment, 518
    in likelihood assessment, 517
    in risk management, 191
anomaly detection, 48
antivirus management, 451
antivirus mechanisms, 262–263
APIPA (Automatic Private IP Addressing), 93
applets, 213–214
    defined, 662
    malicious code in, 267
application controls, 463–464
Application layer
    defined, 662
    gateway firewalls at, 98, 663
    in OSI model, 77–78
    in TCP/IP model, 78
applications
    attacks on, 277
        buffer overflows, 277
        rootkits, 278
        time-of-check-to-time-of-use, 278
        trap doors, 278
    security for, 210
        distributed environments, 212–216
        local/nondistributed environment, 210–212
applied cryptography, 350
    circuit encryption, 355–356
    e-commerce, 354–355
    e-mail, 351–352
    exam essentials for, 361–362
    ISAKMP, 357
    MOSS, 352
    networking, 355–357
    Pretty Good Privacy, 351
    Privacy Enhanced Mail, 351–352
    review questions, 363–368
    S/MIME, 352–353
    summary, 360–361
    Web, 353–354
    wireless networking, 357–358
ARO (annualized rate of occurrence)
    defined, 662
    in impact assessment, 518
    in likelihood assessment, 517
    in risk management, 191
ARP (Address Resolution Protocol)
    in Data Link layer, 74–75, 92
    defined, 661
    spoofing with, 141
artificial intelligence, 226
AS (Authentication Service), 21, 664
ASCII (American Standard Code for Information Interchange), 77
assembly language, 232, 663
asset valuation, 186–190, 663
asset value (AV), 516, 663
assets, 186, 663
assigning risk, 195
assurance
    defined, 663
    in security models, 423
    in software development, 229–230, 231
assurance levels in Common Criteria, 430–432
asymmetric cryptography, 336
    El Gamal, 338–339
    elliptic curve, 339–340
    keys in, 313–315, 337, 337, 663
    RSA, 337–338
asynchronous communications, 85
asynchronous dynamic password tokens, 19, 663
asynchronous transfer mode (ATM)
    in Data Link layer, 74
    defined, 663
    in WANs, 79, 108, 128, 130
atomicity, 219–220, 663
attackers, 606, 663
attacks, 50–51
    application, 277–278
    brute force and dictionary, 51–52
    business, 607–608
    crackers, 58
    cryptographic, 359–360
    decoy techniques, 281–282
    defined, 663
    DoS, 52–55, 55, 271–277, 272, 274–275
    exam essentials for, 59–61, 283
    financial, 608
    fun, 609
    grudge, 609
    inference, 224
    malicious code, 258–268
    man-in-the-middle, 56–57
    masquerading, 280–281
    military and intelligence, 607
    network, 139–142
    password, 268–271
    reconnaissance, 278–280
    review questions, 62–67, 284–290
    scanning, 611
    sniffer, 57
    spamming, 57–58
    spoofing, 55–56
    summary, 59, 282
    terrorist, 608–609
    written lab for, 284, 291
attenuation, 83, 663
attributes in databases, 217, 663
audio motion detectors, 635
auditing, 12, 44, 159, 478
    in access control, 8–9
    accountability in, 479
    audit trails in, 458, 480–481, 638, 664
    compliance testing in, 479
    configuration, 243
    defined, 664
    exam essentials for, 499–501
    external auditors in, 484
    record retention in, 483
    reporting concepts in, 481–482
    review questions, 502–507
    sampling in, 482
    summary, 497
    time frames in, 480
auditors, 180, 185, 484, 664
authentication, 158
    cryptography for, 297, 297
    defined, 664
    information types in, 6
    Kerberos, 21–22
    logical locations in, 7
    multiple-factor, 7
    protection, 102
    “something” and “somewhere” in, 6–7
    techniques, 9–10
        biometrics, 13–18, 16–17
        passwords, 10–13
        SSO, 20–23
        tokens, 18–20
    two-factor, 52
Authentication Headers (AHs) in IPSec
    defined, 664
    purpose of, 103–104, 356
Authentication Service (AS), 21, 664
authorization, 7–8, 158, 664
automated attack tools, 50
automated monitoring and auditing systems, 486
automated recovery, 455
automated recovery without undue loss, 455
Automatic Private IP Addressing (APIPA), 93
auxiliary alarm systems, 636, 638, 664
AV (asset value), 516, 663
availability
    in access control, 2–3
    defined, 664
    in security management, 156–157
    in security models, 422
AVG function, 223
awareness
    defined, 664
    training for, 196–197
B
B channels, 129
B1 (labeled security) systems, 426
B2 (structured protection) systems, 426
B3 (security domain) systems, 426
back doors, 438, 609
Back Orifice Trojan horse, 265
background checks, 177–178
backups
    for access control violations, 58
    in disaster recovery planning, 554–557
    in electronic vaulting, 551–552
    in operations security, 452–453
badges, 635, 664
Ballista tool, 487
Base+Offset addressing, 385, 665
baseband cable, 80–81, 665
baseband communications, 85
baseline security, 184, 665
Basic Input/Output System (BIOS), 391, 665
Basic Rate Interface (BRI) ISDN, 129, 665
bastion hosts, 98–99
.BAT files, 260
BCP. See Business Continuity Planning (BCP)
behavior-based intrusion detection, 48, 665
behaviors in OOP, 234, 665
Bell-LaPadula model, 400–402, 401, 419, 426, 665
best evidence rule, 591–592, 665
BGP (Border Gateway Protocol), 75
BIA (business impact assessment), 515–516
    defined, 666
    impact assessment in, 518–519
    likelihood assessment, 517
    priority identification, 516
    resource prioritization, 519
    risk identification, 516–517
Biba model, 402, 403, 419–420, 665
binary mathematics in cryptography, 299
bind variables, 665
biometrics, 4, 13–15, 665
    appropriate usage, 17–18, 17
    factor ratings, 15–16, 16
    registration with, 16–17
BIOS (Basic Input/Output System), 391, 665
birthday attacks, 360, 665
black box doctrine, 392
black box testing, 244, 665
black boxes, 138
blackouts, 544, 641, 665
block ciphers, 310, 319–320, 666
Blowfish cipher, 319–320, 666
blue boxes, 138
Blue Screen of Death (BSOD), 230
Boehm, Barry, 237–239
bombings, 541
boot sectors, 259–260, 666
Bootstrap
455\nautomated recovery without undue loss, 455\nAutomatic Private IP Addressing (APIPA), 93\nauxiliary alarm systems, 636, 638, 664\nAV (asset value), 516, 663\navailability\nin access control, 2–3\ndefined, 664\nin security management, 156–157\nin security models, 422\nAVG function, 223\nawareness\ndefined, 664\ntraining for, 196–197\nB\nB channels, 129\nB1 (labeled security) systems, 426\nB2 (structured protection) systems, 426\nB3 (security domain) systems, 426\nback doors, 438, 609\nBack Orifice Trojan horse, 265\nbackground checks, 177–178\nbackups\nfor access control violations, 58\nin disaster recovery planning, 554–557\nin electronic vaulting, 551–552\nin operations security, 452–453\nbadges, 635, 664\nBallista tool, 487\nBase+Offset addressing, 385, 665\nbaseband cable, 80–81, 665\nbaseband communications, 85\nbaseline security, 184, 665\nBasic Input/Output System (BIOS), 391, 665\nBasic Rate Interface (BRI) ISDN, 129, 665\nbastion hosts, 98–99\n.BAT files, 260\nBCP. See Business Continuity Planning (BCP)\nbehavior-based intrusion detection, 48, 665\nbehaviors in OOP, 234, 665\nBell-LaPadula model, 400–402, 401, 419, 426, 665\nbest evidence rule, 591–592, 665\nBGP (Border Gateway Protocol), 75\nBIA (business impact assessment), 515–516\ndefined, 666\nimpact assessment in, 518–519\nlikelihood assessment, 517\npriority identification, 516\nresource prioritization, 519\nrisk identification, 516–517\nBiba model, 402, 403, 419–420, 665\nbinary mathematics in cryptography, 299\nbind variables, 665\nbiometrics, 4, 13–15, 665\nappropriate usage, 17–18, 17\nfactor ratings, 15–16, 16\nregistration with, 16–17\nBIOS (Basic Input/Output System), 391, 665\nbirthday attacks, 360, 665\nblack box doctrine, 392\nblack box testing, 244, 665\nblack boxes, 138\nblackouts, 544, 641, 665\nblock ciphers, 310, 319–320, 666\nBlowfish cipher, 319–320, 666\nblue boxes, 138\nBlue Screen of Death (BSOD), 230\nBoehm, Barry, 237–239\nbombings, 541\nboot sectors, 259–260, 666\nBootstrap 
Protocol (BootP) protocol, 96\nBorder Gateway Protocol (BGP), 75\nbots, 212–213, 666\nboundaries, 139\nbounds, 422, 666\nbreaches, 187, 666\n" }, { "page_number": 774, "text": "Brewer and Nash model – CDIs\n729\nBrewer and Nash model, 403–404\nBRI (Basic Rate Interface) ISDN, 129, 665\nbridges, 75, 100–101, 666\nbroadband cable, 80–81\nbroadband communications, 85\nbroadcast addresses\ndefined, 666\nin Smurf attacks, 273\nbroadcast communications, 85\nbroadcast domains, 84\nbroadcast transmissions, 666\nbrouters, 76, 101\nbrownouts, 641, 666\nbrute force attacks, 12\non cryptography, 359\ndefined, 666\nDoS, 271\non passwords, 51–52\nBSA (Business Software Alliance), 584\nBSOD (Blue Screen of Death), 230\nbuffer overflows\nin application attacks, 277\nchecking for, 437\ndefined, 666\nfor worms, 266\nbuildings\nin business continuity planning, 521\nin physical security, 628–631\nburglary, 608\nbus topology, 88, 88\nbusiness attacks, 607–608, 666\nBusiness Continuity Planning (BCP), 510–511\ncontinuity strategy in, 519–523\ndefined, 666\ndocumentation in, 523–526\nexam essentials for, 526–527\nimpact assessment in, 515–516\nimpact assessment phase, 518–519\nlikelihood assessment, 517\npriority identification, 516\nresource prioritization, 519\nrisk identification, 516–517\nlegal and regulatory requirements in, 514–515\norganization analysis in, 511–512\nresource requirements in, 513–514\nreview questions, 528–533\nsummary, 526\nteam selection in, 512\nbusiness impact assessment (BIA), 515–516\ndefined, 666\nimpact assessment in, 518–519\nlikelihood assessment, 517\npriority identification, 516\nresource prioritization, 519\nrisk identification, 516–517\nBusiness Software Alliance (BSA), 584\nbusiness unit priorities in recovery strategy, 545\nC\nC1 (discretionary security protection) systems, 425\nC2 (controlled access protection) systems, 425\ncabling, network\nbaseband and broadband, 80–81\ncoaxial, 80\nconductors, 82–83\ntwisted-pair, 81–82\nwireless, 83\ncache 
RAM, 383–384, 667\nCaesar cipher, 294–295, 306\nCALEA (Communications Assistance for Law \nEnforcement Act) of 1994, 586\ncall-back authorization, 135\ncameras, 636\ncampus area networks (CANs), 667\ncandidate keys, 217\ncapabilities lists, 418, 667\ncapability lists, 399, 667\nCapability Maturity Model, 237\ncapacitance motion detectors, 635\ncardinality\nof databases, 217\ndefined, 667\nCarrier Sense Multiple Access (CSMA) \ntechnologies, 86\ncascading composition theory, 399\nCASE (computer aided software engineering), 233\ncategories, UTP, 82\nCBC (Cipher Block Chaining), 317–318, 668\nCBK (Common Body of Knowledge), 670\nCCTV (closed-circuit television), 632–633, 636, 669\nCDDI (Copper Distributed Data Interface)\nin Data Link Layer, 74\ndefined, 672\nCDIs (constrained data items), 420\n" }, { "page_number": 775, "text": "730\nCDR media for backups – cohesiveness\nCDR media for backups, 556\nCDs\nfor backups, 556\nlegal issues, 580\ncell phone security, 138\ncell suppression for databases, 221\ncentral processing units (CPUs). 
See processors\ncentralized access control, 27, 667\ncentralized alarm systems, 636, 638, 667\nCER (Crossover Error Rate), 16, 16, 673\ncertificate authorities, 348–349, 667\ncertificate revocation lists (CRLs), 349, 667\ncertificates, 346–347\ndefined, 667\ngeneration and destruction of, 348–350\ncertification\ndefined, 667\nin system evaluation, 416–417, 432–433\nCFAA (Computer Fraud and Abuse Act) of 1984, \n575–576, 671\nCFB (Cipher Feedback) mode, 317–318, 668\nCFR (Code of Federal Regulations), 574\nchain of evidence, 592, 667\nChallenge Handshake Authentication Protocol \n(CHAP), 105–106, 124, 667\nchallenge-response tokens, 19, 667\nchange control, 161–162\ncomponents of, 243–244\ndefined, 667\nsteps in, 455–456\nchanges, workstation and location, 453\nchargen service, 274\ncharts, Gantt, 240, 241\nChauvaud, Pascal, 342\nchecklists, 554, 560, 668\nChildren's Online Privacy Protection Act (COPPA) \nof 1998, 587, 668\nChinese Wall model, 403–404\nchoice requirements in European Union privacy \nlaw, 590\nchosen ciphertext attacks, 359, 668\nchosen plaintext attacks, 359, 668\nCIA Triad, 3, 154\navailability in, 156–157\nconfidentiality in, 154–155\ndefined, 668\nintegrity in, 155–156\nCipher Block Chaining (CBC), 317–318, 668\nCipher Feedback (CFB) mode, 317–318, 668\nciphers\nvs. 
codes, 305\nin cryptography, 305–308\ndefined, 668\nsubstitution, 306–308\ntransposition, 306\nciphertext messages, 297, 668\nCIR (Committed Information Rate) contracts, \n107, 669\ncircuit encryption, 355–356\ncircuit-level gateway firewalls, 98\ncircuit switching, 126\nCIRTs (Computer Incident Response Teams), 612\ncivil law, 573–574, 668\nCivil War, cryptography in, 295\nClark-Wilson model, 403, 420, 668\nclasses in OOP, 234\nclassification\nfor confidentiality, 155\ndefined, 668, 674\nin physical security, 631\nin security management, 162–165\nclassification levels, 668\nclassified data, 164\nclean power, 641, 669\ncleaning, 669\nclearances, security, 178\nclearing media, 460–461, 669\nclick-wrap licenses, 584, 669\nclient systems, countermeasures on, 267\nclipping levels\nin auditing, 482\ndefined, 669\nclosed-circuit television (CCTV), 632–633, 636, 669\nclosed systems, 421, 646\nclustering, 304–305, 669\ncoaxial cabling, 80, 669\nCode of Ethics, 616–617\nCode of Federal Regulations (CFR), 574\nCode Red worm, 265\ncode review walk-throughs, 236\ncodes vs. 
ciphers, 305\ncoding flaws, 435–437\ncognitive passwords, 11, 669\ncohesiveness\ndefined, 669\nin OOP, 234\n" }, { "page_number": 776, "text": "cold sites – concentrators\n731\ncold sites, 547, 669\ncold-swappable RAID, 111\ncollision attacks, 360\ncollision domains, 84\ncollusion, 177, 493, 669\ncolumns in databases, 217\nCOM (Component Object Model), 215, 670\n.COM files, 260\ncombination locks, 634–635\ncommercial business/private sector classification, \n163–164, 669\nCOMMIT command, 219\nCommitted Information Rate (CIR) contracts, \n107, 669\nCommon Body of Knowledge (CBK), 670\nCommon Criteria, 429\ncommon mode noise, 642, 670\nCommon Object Broker Architecture (CORBA), \n214–215, 215, 670\ncommunication disconnects, 439\ncommunications, 79\ncabling in\nbaseband and broadband, 80–81\ncoaxial, 80\nconductors, 82–83\ntwisted-pair, 81–82\nwireless, 83\nin disaster recovery planning, 558\nLAN technologies, 84–87\nin recovery strategy, 546\nsecurity in, 122\nboundaries, 139\ne-mail, 132–135\nexam essentials for, 143–145\nfacsimiles, 135\nmiscellaneous, 131–132\nNAT for, 125–126\nnetwork attacks and countermeasures, \n139–142\nreview questions, 146–151\nsummary, 142–143\nswitching technologies, 126–127\nvoice, 136–138\nVPNs for, 122–125\nTCP/IP, 89–96, 90\ntopologies, 87–89, 87–89\nCommunications Assistance for Law Enforcement \nAct (CALEA) of 1994, 586\nCompactFlash cards, 383\ncompanion viruses, 260, 670\ncompartmentalized environments, 25, 246, \n379–380, 670\ncompensation access control, 4, 670\ncompetent evidence, 591, 670\ncompiled languages, 232, 670\ncomplex gateway firewalls, 97\ncompliance checking, 670\ncompliance testing, 479, 670\nComponent Object Model (COM), 215, 670\ncomposition theories, 399\ncompromises, system, 611–612, 670\ncomputer aided software engineering (CASE), 233\ncomputer crime, 606–607\nbusiness attacks, 607–608\ndefined, 671\nevidence of, 610\nexam essentials for, 619–620\nfinancial attacks, 608\nfun attacks, 609\ngrudge attacks, 
609\nincident handling. See incidents\nlaws for, 575–578\nmilitary and intelligence attacks, 607\nreview questions, 621–626\nsummary, 618–619\nterrorist attacks, 608–609\ncomputer design, 370\ndistributed architecture, 395–396\nexam essentials for, 406–407\nfirmware, 391\nhardware, 371\ninput and output devices, 388–389\nmemory, 382–386\nprocessors, 371–382\nstorage, 386–388\ninput/output structures, 389–390\nprotection mechanisms, 391–396\nreview questions, 408–413\nsecurity models, 397–404\nsummary, 405\ncomputer export controls, 584–585\nComputer Fraud and Abuse Act (CFAA) of 1984, \n575–576, 671\nComputer Incident Response Teams (CIRTs), 612\nComputer Security Act (CSA) of 1987, 576, 671\nComputer Security Incident Response Teams \n(CSIRTs), 612\nconcentrators, 83, 100\n" }, { "page_number": 777, "text": "732\nconceptual definition phase in system development life cycle – cryptography\nconceptual definition phase in system development \nlife cycle, 235\nconclusive evidence, 591, 671\nconcurrency\nin databases, 221\ndefined, 671\nconductors, 82–83\nConfidential classification, 164, 671\nconfidentiality, 154–155\nin access control, 2–3\ncryptography for, 296\ndefined, 671\nin MAAs, 551\nin security models, 422\nconfiguration management\ncomponents of, 243–244\ndefined, 671\nsteps in, 455–456\nconfinement, 422, 671\nconfiscation, 614–615\nconfusion, 303, 671\nconnectivity issues, 102\nconsistency in ACID model, 219–220, 671\nconstrained data items (CDIs), 420\ncontamination, 220, 671\ncontent-dependent access control\nfor databases, 221\ndefined, 672\ncontent filters, 267\ncontext-dependent access control\nfor databases, 221\ndefined, 672\ncontinuity\nin business continuity planning, 519–523\ndefined, 672\ncontractual license agreements, 584, 672\ncontrol zones for TEMPEST, 640\ncontrolled access protection (C2) systems, 425\ncontrols gap, 196, 672\ncontrols in secure systems, 423, 672\nCOPPA (Children's Online Privacy Protection Act) \nof 1998, 587, 668\nCopper 
Distributed Data Interface (CDDI)\nin Data Link Layer, 74\ndefined, 672\ncopyrights, 579–581, 672\nCORBA (Common Object Broker Architecture), \n214–215, 215, 670\ncorrective access control, 4, 672\ncorrective controls, 461, 672\ncorrosion, 643\ncosts of assets, 188\nCOUNT function, 223\ncountermeasures, 54, 492–496\ncosts, 185\ndefined, 672\nmalicious code, 267–268\nnetworks, 139–142\npassword attacks, 270–271\nselecting, 196\ncoupling\ndefined, 672\nin OOP, 234\ncovert channels, 435\ndefined, 672\nstorage, 226, 435, 672\ntiming, 435, 672\nCPUs (central processing units). See processors\nCrack program, 270\ncrackers, 58, 495, 673\nCRCs (cyclic redundancy checks), 131, 673\ncredentials, logon, 7, 693\ncreeping privileges, 673\ncrime. See computer crime; laws\ncriminal law, 572–573, 673\ncrisis management, 546\ncritical path analysis, 629, 673\ncriticality prioritization, 673\nCRLs (certificate revocation lists), 349, 667\nCrossover Error Rate (CER), 16, 16, 673\ncrosstalk, 81\ncryptanalysis, 298, 673\ncryptography, 294\napplied. 
See applied cryptography\nasymmetric, 336\nEl Gamal, 338–339\nelliptic curve, 339–340\nkeys in, 313–315, 337, 337, 663\nRSA, 337–338\nattacks on, 359–360\nfor authentication, 297, 297\nconcepts in, 297–299\ndefined, 673\nexam essentials for, 325–326\ngoals of, 296–297\nhashing algorithms for, 316\n" }, { "page_number": 778, "text": "cryptosystems – DCE\n733\nhistory of, 294–295\nkeys in, 19, 311, 673\nmathematics in, 299–305\nreview questions, 328–333\nsummary, 324\nsymmetric, 316\nAES, 320–322\nBlowfish, 319–320\nDES, 316–318\nIDEA, 319\nkeys in, 312–313, 312, 322–323, 716\nSkipjack, 320\nTriple DES, 318–319\nwritten lab for, 327, 334\ncryptosystems, 298, 673\ncryptovariables, 673\nCSA (Computer Security Act) of 1987, 576, 671\nCSIRTs (Computer Security Incident Response \nTeams), 612\nCSMA (Carrier Sense Multiple Access) \ntechnologies, 86\ncustodians, 31, 180, 673\ncyclic redundancy checks (CRCs), 131, 673\nD\nD channels, 129\nDACK lines, 390\nDACs (discretionary access controls)\naccess in, 23\ndefined, 677\nvs. mandatory, 423\nDARPA model, 78\nDAT (Digital Audio Tape) for backups, 556\ndata\nclassification of\nfor confidentiality, 155\ndefined, 674\nin physical security, 631\nin security management, 162–165\nconfiscating, 614–615\nextraction of, 482, 674\nhiding, 160, 246, 392–393, 674\nintegrity of, 155–156\nin access control, 2–3\ncryptography for, 296\ndefined, 688\nin European Union privacy law, 590\nin incidents, 615\nmining, 225–226, 674\nowners of, 180, 674\nsecurity for, 210\ndata storage, 225–226\nknowledge-based systems, 226–229\nsystem development controls. See system \ndevelopment controls\nstorage. 
See storage\ndata circuit-terminating equipment (DCE), 107, 674\ndata custodians, 180, 674\nData Definition Language (DDL), 219, 674\ndata dictionaries, 224\ndata diddling, 438, 674\nData Encryption Standard (DES), 21\ndefined, 674\nmodes of, 316–318\nsecurity of, 311\nData Link layer, 74–75, 674\nData Manipulation Language (DML), 219, 674\ndata marts, 225, 674\ndata mining tools, 44\ndata remanence, 388\ndata terminal equipment (DTE), 107, 674\ndata warehouses, 225–226, 674\ndatabase management systems (DBMSs), \n216–219, 675\ndatabases, 218\naggregation in, 223–224\nconcurrency in, 221\ndata mining, 225–226\nDBMS architecture, 216–219\ndefined, 675\ninference attacks in, 224\nmultilevel security for, 220\nnormalization, 218\nODBC for, 222, 223\nrecovering, 551–552\nsecurity mechanisms for, 221–222\ntransactions, 219–220\nviews for, 221\ndate stamps, 221\nDBMSs (database management systems), \n216–219, 675\nDCE (data circuit-terminating equipment), \n107, 674\n" }, { "page_number": 779, "text": "734\nDCOM – Disaster Recovery Planning\nDCOM (Distributed Component Object Model), \n215–216, 677\nDDL (Data Definition Language), 219, 674\nDDoS (distributed denial of service) attacks, 53, 677\ndecentralized access control, 27, 675\ndecision making, 515\nDecision Support Systems (DSSs), 228\ndeclassification, 460, 675\ndecoy techniques, 281–282\ndecryption, 297, 675\ndedicated lines, 128\ndedicated security mode, 246, 379, 675\ndeencapsulation, 675\ndefense in depth, 160\ndefined phase in Capability Maturity Model, 239–240\ndegaussing, 460–461, 675\ndegrees of databases, 217, 675\ndelegation in OOP, 234, 675\nDelphi technique, 194, 675\ndelta rule, 675\ndeluge systems, 646, 675\ndenial of service (DoS) attacks, 52–55, 55, 265, \n271, 612\nand availability, 156\ndefined, 675–676\ndistributed DoS toolkits, 272–273\nDNS poisoning, 276\nfrom e-mail, 134\non Gibson Research, 613\nLand attacks, 276\nping of death attacks, 276–277\nSmurf attacks, 273–274, 274\nSYN floods, 
271–272\nteardrop, 274–275, 275\ndeployment values for safeguards, 192–193\nDES (Data Encryption Standard), 21\ndefined, 674\nmodes of, 316–318\nsecurity of, 311\ndesign\ncomputer. See computer design\nfacility, 630\nflaws in, 435–437\nin system development, 236\ndestruction of media, 460–461\ndetective access control, 3, 461, 676\ndeterrent access control, 3, 676\ndevelopment phase in business continuity \nplanning, 513\ndevice firmware, 391\nDHCP (Dynamic Host Configuration Protocol), \n95, 678\ndiagnosing phase in IDEAL model, 240\ndictionaries, data, 224, 674\ndictionary attacks\ndefined, 676\nin Internet worm, 266\non passwords, 12, 51–52, 269–270\ndifferential backups, 555, 676\nDiffie-Hellman encryption, 323, 676\ndiffusion, 303, 676\nDigital Audio Tape (DAT) for backups, 556\ndigital certificates, 346–347\ndefined, 667\ngeneration and destruction of, 348–350\ndigital communications, 85\nDigital Linear Tape (DLT) for backups, 556\nDigital Millennium Copyright Act (DMCA) of \n1998, 580–581, 676\nDigital Signature Standard (DSS), 345–346, 676\ndigital signatures, 344\nin asymmetric key algorithms, 314\ndefined, 676\nDSS, 345–346\nHMAC, 345\nin message digests, 341\ndirect addressing, 385, 676\ndirect evidence, 593, 676\nDirect Memory Access (DMA), 390, 677\ndirective access control, 676\ndirective controls, 4, 461, 677\ndirectory services, 22, 677\nDisaster Recovery Planning (DRP), 510, 536–537\ndefined, 677\ndevelopment of, 552–559\nemergency response in, 553\nexam essentials for, 562\nexternal communications in, 558\nlogistics and supplies in, 558\nfor man-made disasters, 541–545\nfor natural disasters, 537–541, 540\npersonnel notification in, 553–554\nrecovery strategy. See recovery strategy\nrecovery vs. 
restoration in, 558\nreview questions, 564–569\n" }, { "page_number": 780, "text": "disaster recovery plans – edit control for databases\n735\nsoftware escrow arrangements in, 557–558\nstorage in, 554–557\nsummary, 561–562\ntesting and maintenance in, 560–561\ntraining and documentation in, 559–560\nutilities in, 558\nwritten lab for, 563, 570\ndisaster recovery plans, 677\ndisasters, 677\ndiscretionary access controls (DACs)\naccess in, 23\ndefined, 677\nvs. mandatory, 423\ndiscretionary protection systems, 425\nDiscretionary Security Property, 677\ndisgruntled employees, 609\ndistributed access control, 27, 677\ndistributed application security, 212–216\ndistributed architecture, 395–396\nDistributed Component Object Model (DCOM), \n215–216, 677\ndistributed databases, 216\ndistributed denial of service (DDoS) attacks, 53, 677\ndistributed DoS toolkits, 272–273\ndistributed reflective denial of service (DRDoS) \nattacks, 53, 273–274, 677\nDLT (Digital Linear Tape) for backups, 556\nDMA (Direct Memory Access), 390, 677\nDMCA (Digital Millennium Copyright Act) of \n1998, 580–581, 676\nDML (Data Manipulation Language), 219, 674\nDMQ lines, 390\nDMZs, 99–100\nDNS poisoning, 276, 678\nDNS spoofing, 141\nDobbertin, Hans, 343\ndocumentary evidence, 591–592, 678\ndocumentation\nin business continuity planning, 523–526\nin disaster recovery planning, 559–560\nDOD model, 78\ndogs, 632–634\ndomains\nin access control, 27\nbroadcast, 84\ndefined, 678\nof relations, 217\nDoS attacks. See denial of service (DoS) attacks\nDouble DES (2DES), 359\nDRDoS (distributed reflective denial of service) \nattacks, 53, 273–274, 677\nDRP. 
See Disaster Recovery Planning (DRP)\ndry pipe systems, 646, 678\nDSS (Digital Signature Standard), 345–346, 676\nDSSs (Decision Support Systems), 228\nDTE (data terminal equipment), 107, 674\ndue care, 180, 456–457, 577, 678\ndue diligence, 31, 456–457, 513, 678\ndumb cards, 637, 678\ndumpster diving, 280, 490, 678\ndurability in ACID model, 219–220, 678\nDVDs\nfor backups, 556\nlegal issues, 580\ndwell time, 678\nDynamic Host Configuration Protocol (DHCP), \n95, 678\ndynamic NAT, 93\ndynamic packet-filtering firewalls, 678\ndynamic password tokens, 19\ndynamic passwords, 10, 678\ndynamic RAM, 384\nE\ne-commerce, 354–355\ne-mail\ncryptography for, 351–352\nsecurity for, 105, 132–135\nEACs (Electronic Access Control) locks, 634\nEALs (evaluation assurance levels), 430–432\nEAP (Extensible Authentication Protocol), 106, 124\nearthquakes, 537–538\neavesdropping, 140, 489, 679\nEBC (Electronic Codebook), 317, 679\nEBCDIC (Extended Binary-Coded Data \nInterchange Mode), 77\necho service, 274\nEconomic and Protection of Proprietary \nInformation Act of 1996, 587\nEconomic Espionage Act of 1996, 583, 679\nECPA (Electronic Communications Privacy Act) of \n1986, 586, 679\nEDI (Electronic Data Interchange), 77\nedit control for databases, 221\n" }, { "page_number": 781, "text": "736\neducation – Escrowed Encryption Standard\neducation. 
See training and education\nEEPROMs (electronically erasable PROMs), \n383, 679\nEF (exposure factor)\ndefined, 681\nin impact assessment, 518\nin risk analysis, 190–192\neigenfeatures, 14\n8mm tape for backups, 556\nEl Gamal algorithm, 338–339, 679\nelectromagnetic interference (EMI)\ncoaxial cable for, 80\ndefined, 679\nproblems from, 642\nin radiation monitoring, 490, 639–640\nin TEMPEST technology, 370, 388–389, \n439–440, 490, 639–640\nelectromagnetic pulse (EMP), 639\nElectronic Access Control (EACs) locks, 634\nElectronic Codebook (EBC), 317, 679\nElectronic Communications Privacy Act (ECPA) of \n1986, 586, 679\nElectronic Data Interchange (EDI), 77\nelectronic mail\ncryptography for, 351–352\nsecurity for, 105, 132–135\nelectronic serial numbers (ESNs), 138\nelectronic vaulting, 551–552, 679\nelectronically erasable PROMs (EEPROMs), \n383, 679\nelliptic curve cryptography, 339–340, 679\nelliptic curve groups, 340, 680\nemanation security, 639–640\nemergency communications, 546\nemergency response\nin business continuity planning, 525\nin disaster recovery planning, 553\nEMI (electromagnetic interference)\ncoaxial cable for, 80\ndefined, 679\nproblems from, 642\nin radiation monitoring, 490, 639–640\nin TEMPEST technology, 370, 388–389, \n439–440, 490, 639–640\nEMP (electromagnetic pulse), 639\nemployees\ndefined, 680\ndisgruntled, 609\nsabotage by, 493\nemployment agreements, 178, 680\nemployment policies and practices, 176\nawareness training, 196–197\nfor employees, 176–179\nexam essentials for, 199–201\npolicies, 182–185\nreview questions, 202–207\nroles, 179–180\nsecurity management planning, 181–182\nsummary, 197–198\nEncapsulating Security Payloads (ESPs)\ndefined, 680\nin IPSec, 356\nin VPNs, 103\nencapsulation, 130–131, 246\ndefined, 680\nin OSI model, 72–73, 72–73\nin tunneling, 123\nencrypted viruses, 264\nencryption, 161, 297. 
See also cryptography\ncircuit, 355–356\nfor confidentiality, 155\ndefined, 680\nfor e-mail, 105, 134–135\nexport controls on, 585\nfor facsimiles, 135\none-way, 12, 698\npassword files, 51\nend-to-end encryption, 355, 680\nenforcement requirements in European Union \nprivacy law, 590\nEnigma codes, 295–296\nenrollment\nwith biometric devices, 16–17\nfor certificates, 348\ndefined, 680\nof users, 11, 29\nenticement, 49\nentities, 2, 680\nentrapment, 49\nenvironment in physical security, 640–647\nEPROM (erasable programmable read-only \nmemory), 383, 680\nequipment\nconfiscating, 614–615\nfailures in, 647–648\nerasing media, 460–461, 680\nerrors and omissions, 492–493\nEscrowed Encryption Standard, 324, 680\n" }, { "page_number": 782, "text": "ESNs – federal laws\n737\nESNs (electronic serial numbers), 138\nespionage, 495\ndefined, 680\nindustrial, 608\nESPs (Encapsulating Security Payloads)\ndefined, 680\nin IPSec, 356\nin VPNs, 103\nestablishing phase in IDEAL model, 240\n/etc/passwd file, 268–271\n/etc/shadow file, 271\nEthernet technology, 74\ndefined, 681\nfor LANs, 84\nethical hacking, 488\nethics, 616–618, 681\nEuropean Union privacy law, 588–590\nevaluation assurance levels (EALs), 430–432\nevidence\nadmissible, 591, 662\nof computer crimes, 610\ndefined, 681\ntypes of, 591–593\nexam essentials\naccess control, 34–35\nadministrative management, 467–469\napplied cryptography, 361–362\nattacks, 59–61, 283\nauditing, 499–501\nbusiness continuity planning, 526–527\ncommunications security, 143–145\ncomputer crime, 619–620\ncomputer design, 406–407\ncryptography, 325–326\ndisaster recovery planning, 562\nemployment policies and practices, 199–201\nlaws, 595–596\nmonitoring, 499–501\nnetworks, 112–113\nphysical security, 649–651\nsecurity management, 166–167\nsecurity models, 441–442\nsystem development controls, 248–249\nexcessive privileges, 681\nexclusive OR operations, 301–302\n.EXE files, 260\nexit interviews, 179, 681\nexpert opinions, 593, 681\nexpert 
systems, 227, 681\nexplosions, 541\nexport laws, 584–585\nexposure, 186, 681\nexposure factor (EF)\ndefined, 681\nin impact assessment, 518\nin risk analysis, 190–192\nExtended Binary-Coded Data Interchange Mode \n(EBCDIC), 77\nExtended Terminal Access Controller Access \nControl System (XTACACS), 106\nExtensible Authentication Protocol (EAP), 106, \n124\nexternal auditors, 484\nexternal audits, 479\nexternal communications, 558\nextranets, 96–101, 681\nF\nface scans, 14, 681\nfacilities\nin business continuity planning, 521\nin physical security, 628–631\nfacsimile security, 135\nfactor ratings, biometric, 15–16, 16\nfail-open conditions, 230–231, 231, 681\nfail-safe features, 109, 681\nfail-secure conditions, 230–231, 231, 681\nfail-soft features, 109\nfailover solutions, 109–110\nfailure recognition and response, 486\nFair Cryptosystems escrow system, 324, 682\nFalse Acceptance Rate (FAR), 16, 16, 682\nfalse alarms in intrusion detection, 48\nFalse Rejection Rate (FRR), 16, 16, 682\nFamily Educational Rights and Privacy Act \n(FERPA), 588, 682\nFaraday cages, 639\nFault Resistant Disk Systems (FRDS), 111\nfaults, 641, 682\nFDDI (Fiber Distributed Data Interface)\nin Data Link Layer, 74\ndefined, 682\nin LANs, 84\nfederal laws, 573\n" }, { "page_number": 783, "text": "738\nFederal Sentencing Guidelines – grudge attacks\nFederal Sentencing Guidelines, 577\nfeedback and response processes, 194\nfeedback composition theory, 399\nfences, 632, 682\nFERPA (Family Educational Rights and Privacy \nAct), 588, 682\nFiber Distributed Data Interface (FDDI)\nin Data Link Layer, 74\ndefined, 682\nin LANs, 84\nfiber-optic cable, 81, 83, 682\nfield-powered proximity readers, 637\nfields in databases, 217\nfile infector viruses, 260, 682\nFile Transfer Protocol (FTP), 77, 95\nfilters, 267\nfinancial attacks, 608, 682\nfinancial institutions, regulatory requirements for, 514\nFinger utility, 266\nfingerprints, 6, 13–14, 682\nfinite state machines (FSMs), 397\nfire detection and 
suppression, 643–647\nfire extinguishers, 643–645\nfires, 540\nfirewalls, 90\ndefined, 682\nworking with, 97–100, 99\nfirmware, 391, 682\nflag signals, 295\nflame actuated systems, 645\nflame stage in fires, 643, 644\nflash floods, 537\nFlask architecture, 496\nFlaw Hypothesis Methodology of Penetration \nTesting, 683\nflight time, 683\nflood attacks\ndefined, 683\nDoS, 53–54\nSYN, 271–272, 272\nfloods, 537, 539, 643\nforeign keys, 218\nformats\nfor backups, 556\nreporting, 481–482\nfortress mentality, 683\nFourth Amendment, 586, 594, 683\nFraggle attacks, 54, 273–274\nfraggles, 683\nfragmentation, 274–275, 683\nfragmentation attacks, 274–275, 275, 683\nFrame Relay, 79, 107–108, 128, 683\nfraud\nthreat of, 493\nin voice communications, 137–138\nFRDS (Fault Resistant Disk Systems), 111\nfrequency analysis, 295, 683\nFRR (False Rejection Rate), 16, 16, 682\nFSMs (finite state machines), 397\nFTP (File Transfer Protocol), 77, 95\nfuel in fire triangle, 643, 644\nfull backups, 555, 683\nfull-duplex session mode, 76\nfull-interruption tests, 561, 683\nfull knowledge teams, 488\nfun attacks, 609, 683\nfunctional requirements in system development life \ncycle, 235\nfuzzy logic techniques, 228\nG\nGantt charts, 240, 241, 684\ngap in wap, 358\ngas discharge systems, 646–647\ngates, 632, 684\ngateways, 101, 684\nGBL (Gramm-Leach-Bliley) Act, 587, 684\nGeneral Protection Faults (GPFs), 245\nGFS (Grandfather-Father-Son strategy), 557\nGibson Research, 613\nGood Times virus warning, 264\nGovernment Information Security Reform Act \n(GISRA) of 2000, 577–578, 684\ngovernment/military classification, 163–164, 684\nGPFs (General Protection Faults), 245\nGramm-Leach-Bliley (GBL) Act of 1989, 587, 684\nGrandfather-Father-Son strategy (GFS), 557\ngranular object access control\nfor databases, 221\ndefined, 684\nGreen Book, 427–428\nground connections, 641, 684\ngroups, 23, 684\ngrudge attacks, 609, 684\n" }, { "page_number": 784, "text": "guards – hybrid environments\n739\nguards, 
634\nguidelines, 184\nfor computer security, 576\ndefined, 684\nH\nhack backs, 594\nhackers, 58\ndefined, 684\nfor penetration testing, 487\nthreats from, 495\nhail storms, 539\nhalf-duplex session mode, 76\nHalon, 631, 646–647, 684\nhand geometry, 14, 685\nhandling sensitive information, 458–459\nhandshaking process\ndefined, 685\nin SYN flood attacks, 271, 272\nharassment, 492\nhardening provisions, 521\nhardware, 371\ndefined, 685\nfailures in, 543\ninput and output devices, 388–389\nmemory, 382–386\nprocessors, 371–382\nin recovery strategy, 550\nstorage, 386–388\nhardware controls, 463\nhardware segmentation, 244, 393, 685\nhash functions, 340–341\ndefined, 685\nMD2, 342\nMD4, 342–343\nMD5, 343\nSHA, 341–342\nhash totals, 131, 685\nhash values, 685\nHashed Message Authentication Code (HMAC), \n345, 685\nhashing algorithms, 316\nHDLC (High-Level Data Link Control) protocol\ndefined, 686\nin WANs, 79, 108, 130\nHealth Insurance Portability and Accountability \nAct (HIPAA) of 1996, 587, 685\nhearsay evidence, 593, 685\nheart/pulse patterns, 14, 685\nheartbeat sensors, 639\nheat-based motion detectors, 635\nheat damage, 642, 647\nheat stage in fires, 643, 644\nheuristics-based intrusion detection, 48\nhiding data, 160, 246, 392–393\nhierarchical databases, 216, 686\nhierarchical environments, 25, 685\nHierarchical Storage Management (HSM) system, 557\nhigh-level attacks, 686\nHigh-Level Data Link Control (HDLC) protocol\ndefined, 686\nin WANs, 79, 108, 130\nhigh-level languages, 232\nHigh Speed Serial Interface (HSSI) protocol, 108, \n130, 686\nhijack attacks, 56, 686\nHIPAA (Health Insurance Portability and \nAccountability Act) of 1996, 587, 685\nhiring practices, 177–178, 465\nHMAC (Hashed Message Authentication Code), \n345, 685\nhoaxes, 264\nhoney pots, 48–49, 282, 686\nhookup composition theory, 399\nhost-based IDSs, 46, 686\nHost-to-Host layer, 78\nhostile applets, 267, 686\nhot sites, 548, 686\nhot-swappable RAID, 111\nHSM (Hierarchical Storage Management) 
system, 557\nHSSI (High Speed Serial Interface) protocol, 108, \n130, 686\nHTTP (Hypertext Transfer Protocol), 77, 95, 687\nHTTPS (Hypertext Transfer Protocol over Secure \nSockets Layer), 353, 687\nhubs, 100\ndefined, 686\nin Physical layer, 74\nhumidity, 642\nhurricanes, 539\nhybrid attacks, 12, 687\nhybrid environments, 25, 686\n" }, { "page_number": 785, "text": "740\nhyperlink spoofing – integrity\nhyperlink spoofing, 141–142\nHypertext Transfer Protocol (HTTP), 77, 95, 687\nHypertext Transfer Protocol over Secure Sockets \nLayer (HTTPS), 353, 687\nI\nI Love You virus, 261\nIAB (Internet Advisory Board), 617\nICMP (Internet Control Message Protocol)\nin Network layer, 75, 92\npings of death in, 276–277\nSmurf attacks in, 273\nIDEA (International Data Encryption Algorithm), \n319, 689\nIDEAL model, 240, 241\nidentification, 157–158\nin access control, 5–6\ndefined, 687\ntechniques, 9–10\nbiometrics, 13–18, 16–17\npasswords, 10–13\nSSO, 20–23\ntokens, 18–20\nidentification cards, 635, 687\nIdentity Theft and Assumption Deterrence Act, \n588, 687\nIDL (Interface Definition Language), 214\nIDSs (intrusion detection systems), 45–48, \n638–639, 689\nIGMP (Internet Group Management Protocol), 75, 92\nignore risk, 195, 687\nIKE (Internet Key Exchange) protocol, 356, 689\nillegal activities, 457\nIMAP (Internet Message Access Protocol), 77, 95, \n132, 689\nimmediate addressing, 385, 687\nimpact assessment, 515–516\nimpact assessment phase, 518–519\nlikelihood assessment, 517\npriority identification, 516\nresource prioritization, 519\nrisk identification, 516–517\nimpersonation attacks, 140–141, 687\nimplementation attacks, 359, 687\nimplementation phase in business continuity \nplanning, 513, 522\nimport laws, 584–585\ninappropriate activities, 491–492, 687\nincidents, 610–611\nabnormal and suspicious activity, 614\nconfiscation in, 614–615\ndata integrity and retention in, 615\ndefined, 688\nreporting, 615–616\nresponse teams for, 612\ntypes of, 611–612\nincipient 
stage in fires, 643, 644\nincremental attacks, 438\nincremental backups, 555, 688\nindirect addressing, 385, 688\nindistinct threats and countermeasures, 492–496\nindustrial espionage, 608, 688\ninference attacks, 224, 688\ninference engines, 227, 688\ninformation flow in security models, 404\ninformation flow models, 398, 688\ninformation hiding, 160, 246, 392–393, 688\nInformation Technology Security Evaluation and \nCertification (ITSEC), 184, 428–429\ninformative policies, 183, 688\ninfrared motion detectors, 635\ninfrastructure\nin business continuity planning, 521\nfailures in, 543–544\ninheritance, 233–234, 688\ninitial phase in Capability Maturity Model, 239\ninitial program load (IPL) vulnerabilities, 231, 496\ninitialization and failure states, 436\ninitialization vectors (IVs), 303, 688\ninitiating phase in IDEAL model, 240\ninput and output controls, 463\ninput checking, 436–437\ninput devices, 388–389\ninput/output structures, 389–390\ninrush power, 641, 688\ninstances, 233–234, 688\nIntegrated Services Digital Network (ISDN)\nin Data Link layer, 75\ndefined, 688\nin WANs, 128–129\nintegrity, 155–156\nin access control, 2–3\ncryptography for, 296\ndefined, 688\n" }, { "page_number": 786, "text": "* Integrity Axiom – Java programming language\n741\nin European Union privacy law, 590\nin incidents, 615\nin security models, 404, 422\nsoftware for, 268\n* (star) Integrity Axiom, 402, 419, 660\nintellectual property, 578–579\ncopyrights, 579–581\ndefined, 688\npatents, 582\ntrade secrets, 582–583\ntrademarks, 581–582\nintelligence attacks, 607\nintent to use applications, 581\nInterface Definition Language (IDL), 214\ninternal audits, 479\nInternational Data Encryption Algorithm (IDEA), \n319, 689\nInternational Information Systems Security \nCertification Consortium (ISC) code of ethics, \n616–617\nInternational Organization for Standardization \n(ISO), 70, 689\nInternet Advisory Board (IAB), 617\nInternet components, 96–101\nInternet Control Message 
Protocol (ICMP)\nin Network layer, 75, 92\npings of death in, 276–277\nSmurf attacks in, 273\nInternet Group Management Protocol (IGMP), 75, 92\nInternet Key Exchange (IKE) protocol, 356, 689\nInternet layer, 78\nInternet Message Access Protocol (IMAP), 77, 95, \n132, 689\nInternet Protocol (IP), 75\nInternet Security Association and Key \nManagement Protocol (ISAKMP), 357, 689\nInternet service providers (ISPs), 588\nInternet Worm, 212, 265–266\nInternetwork Packet Exchange (IPX), 75\ninterpreted languages, 232, 689\ninterrupt requests (IRQs), 390, 689\nintranets, 96–101, 689\nintrusion, 689\nintrusion detection, 45–46, 478\ndefined, 689\nhost-based and network-based IDSs, 46–47\nknowledge-based and behavior-based, 47–48\npenetration testing, 49–50\ntools for, 48–49\nintrusion detection systems (IDSs), 45–48, \n638–639, 689\ninventions, 582\ninvestigations, 590–591\nevidence in, 591–593\nprocess of, 593–595\nIP (Internet Protocol), 75\nIP addresses, NAT for, 125–126\nIP classes, 93–95\nIP Payload Compression (IPcomp) protocol, 356, \n689\nIP probes, 279, 689\nIP spoofing, 280–281, 690\nIPL (initial program load) vulnerabilities, 231, 496\nIPSec (IP Security)\nfor cryptography, 356–357\ndefined, 689\nfor L2TP, 124–125\nfor TCP/IP, 103–104\nIPX (Internetwork Packet Exchange), 75\niris scans, 14, 690\nIRQs (interrupt requests), 390, 689\nISAKMP (Internet Security Association and Key \nManagement Protocol), 357, 689\nISC (International Information Systems Security \nCertification Consortium) code of ethics, \n616–617\nISDN (Integrated Services Digital Network)\nin Data Link layer, 75\ndefined, 689\nin WANs, 128–129\nISO (International Organization for \nStandardization), 70, 689\nisolation, 422\nin ACID model, 219–220\ndefined, 690\nprocess, 244\nISPs (Internet service providers), 588\nISS tool, 487\nITSEC (Information Technology Security \nEvaluation and Certification), 184, 428–429\nIVPs (integrity verification procedures), 420\nIVs (initialization vectors), 303, 
688\nJ\nJava applets, 214, 267\nJava programming language, 690\n" }, { "page_number": 787, "text": "742\nJava Virtual Machine – length of keys\nJava Virtual Machine (JVM), 214\njob descriptions, 176–177, 465, 690\njob responsibilities, 177, 690\njob rotation, 177, 690\nJoint Photographic Experts Group (JPEG), 77\njournals, monitoring, 30\nJVM (Java Virtual Machine), 214\nK\nKDCs (Key Distribution Centers), 21, 690\nKerberos authentication\ndefined, 690\nin SSO, 21–22\nKerckhoffs's principle, 298\nkernel operating mode, 381\nkernel proxy firewalls, 690\nkernels\nin protection rings, 375–376\nsecurity, 417–418\nkey ciphers, 309–310\nKey Distribution Centers (KDCs), 21, 690\nkey escrow database, 304\nkeyboard logging, 15\nkeyboards, 389\nkeys, 634–635\nin cryptography, 19, 298, 311, 673\nasymmetric, 313–315, 337, 337, 663\ndistributing, 312, 322–323\nescrow system, 324, 691\nlength of, 311\nfor databases, 217–218\ndefined, 690\nin PKI, 350\nkeystroke monitoring, 485, 691\nkeystroke patterns, 15, 691\nKnapsack algorithm, 338\nknowledge-based intrusion detection, 47–48, 691\nknowledge-based systems, 226–227\nDecision Support Systems, 228\nexpert systems, 227\nneural networks, 228\nsecurity applications, 229\nknowledge bases, 227, 691\nknowledge redundancy, 177\nknown plaintext attacks, 359, 691\nKoblitz, Neal, 339\nKryptoKnight authentication mechanism, 22, 691\nL\nL2F (Layer 2 Forwarding) protocol, 75, 124, 692\nL2TP (Layer 2 Tunneling Protocol), 75, 90, \n103–104, 124–125, 692\nlabeled security (B1) systems, 426\nlabels, 164\ndefined, 710\nin mandatory access controls, 23\nfor media, 458\nin security models, 418\nLAN extenders, 102, 691\nland attacks, 55, 276, 691\nLANs (local area networks)\ndefined, 692\nvs. 
WANs, 79\nworking with, 84–87\nlattice-based access control, 26, 26, 401, 691\nlaw enforcement agencies, 593–594\nlaws, 572\nadministrative, 574\ncivil, 573–574\ncomputer crime, 575–578\ncriminal, 572–573\nexam essentials for, 595–596\nimport/export, 584–585\nintellectual property, 578–583\nlicensing, 584\nprivacy, 585–590\nreview questions, 598–603\nsummary, 595\nwritten lab for, 597, 604\nLayer 2 Forwarding (L2F) protocol, 75, 124, 692\nLayer 2 Tunneling Protocol (L2TP), 75, 90, \n103–104, 124–125, 692\nlayered environment, access control in, 5\nlayering, 160, 391–392, 692\nlayers\nOSI. See OSI (Open Systems Interconnection) \nmodel\nTCP/IP. See TCP/IP protocol\nlearning phase in IDEAL model, 240\nleast significant string bits, 303\nlegal personnel, 616\nlegal requirements. See also laws\nin administrative management, 457\nin business continuity planning, 514–515\nlength of keys, 339\n" }, { "page_number": 788, "text": "Library of Congress – managed phase in Capability Maturity Model\n743\nLibrary of Congress, 579\nlicensing, 584, 692\nlife cycle assurance, 452\nlife cycles in system development, 234–235\ncode review walk-through in, 236\nconceptual definition, 235\ndesign review in, 236\nfunctional requirements determination, 235\nmaintenance in, 237\nmodels, 237–240\nIDEAL, 240, 241\nsoftware capability maturity model, 239–240\nspiral model, 238–239, 239\nwaterfall model, 237–238, 238\nprotection specifications development, 235–236\nsystem test review in, 236\nlife safety, 640–647\nlighting, 633, 692\nlikelihood assessment, 517\nlimit checks in software development, 230, 231\nLine Print Daemon (LPD), 77, 95\nlinear topology, 88, 88\nlink encryption, 355, 692\nLinux operating system, 496\nLLC (Logical Link Control) sublayer, 75\nlocal alarm systems, 636, 638, 692\nlocal application security, 210–212\nlocal area networks (LANs)\ndefined, 692\nvs. 
WANs, 79\nworking with, 84–87\nlocking database records, 221\nlockout, account, 12, 52\nlocks, 634–635\nlogic bombs, 211, 264, 693\nlogical access controls, 4, 693\nlogical bounds, 422\nLogical Link Control (LLC) sublayer, 75\nlogical locations in authentication, 7\nlogical operations in cryptography, 300–302\nlogical reasoning in expert systems, 227\nlogical security boundaries, 139\nlogistics in disaster recovery planning, 558\nlogon credentials\ndefined, 693\nin two-factor authentication, 7\nlogon scripts, 23\nlogs and logging, 44, 478–479\nanalysis of, 478, 692\ndefined, 692\nintegrity of, 615\nmonitoring, 30\ntransmission, 132\nLOMAC (Low Water-Mark Mandatory Access \nControl), 496, 693\nlook and feel copyrights, 579\nloopback addresses, 94, 693\nloss of support, 493\nlow-pressure water mists, 647\nLow Water-Mark Mandatory Access Control \n(LOMAC), 496, 693\nLPD (Line Print Daemon), 77, 95\nM\nMAAs (Mutual Assistance Agreements), \n550–551, 696\nMAC (Media Access Control) addresses, 75, 694\nMAC sublayer in Network layer, 75\nmachine language, 232, 693\nmacro viruses, 261, 693\nmailbombing attacks, 134, 693\nmaintenance\nin business continuity planning, 513, 525\ndefined, 693\nin disaster recovery planning, 561\nin system development, 237\nmaintenance hooks, 438, 693\nmalicious code, 258, 495, 612\nactive content, 267\ncountermeasures, 267–268\ndefined, 693\nlaws against, 576\nlogic bombs, 264\nsources of, 258–259\nTrojan horses, 264–265\nviruses, 259–264\nworms, 265–266\nman-in-the-middle attacks, 56–57\non cryptography, 360\ndefined, 694\nman-made disasters, 541–545, 694\nman-made risks, 517\nmanaged phase in Capability Maturity Model, 240\n" }, { "page_number": 789, "text": "744\nmanagement planning – MPP\nmanagement planning, 181–182\nmandatory access controls, 24–25, 423, 693\nmandatory protection systems, 426\nmandatory vacations, 178, 694\nmantraps, 633, 633, 694\nmanual recovery, 455\nmarking of media, 458\nMarzia virus, 263\nmasquerading attacks, 
140–141, 280–281, 638, 694\nmassively parallel processing (MPP), 372, 694\nMaster Boot Record (MBR) viruses, 259–260, 694\nMaster Boot Records (MBRs), 694\nmaterial evidence, 591\nmathematics in cryptography, 299–305\nMAX function, 223\nmaximum tolerable downtime (MTD)\nin business impact assessment, 516, 520\ndefined, 694\nMBR (Master Boot Record) viruses, 259–260, 694\nMBRs (Master Boot Records), defined, 694\nMD2 (Message Digest 2), 342, 694\nMD4 (Message Digest 4), 342–343, 694\nMD5 (Message Digest 5), 343, 694\nMDs (message digests), 340–341, 694\nmean time to failure (MTTF), 459, 648, 694\nmean time to repair (MTTR), 648\nMedia Access Control (MAC) addresses, 75, 694\nmedia controls, 464\nmedia in record retention, 483\nmeet-in-the-middle attacks, 359, 695\nMelissa virus, 261\nmemory, 225–226\naddressing, 384–385\ndefined, 695\nRAM, 383–384\nregisters, 384\nROM, 382–383\nsecondary, 385–386\nsecurity issues with, 386\nmemory cards, 637\nmemory-mapped I/O, 389–390, 695\nmemory pages, 695\nMerkle-Hellman Knapsack algorithm, 338\nmesh topology, 89, 89\nMessage Digest 2 (MD2), 342, 694\nMessage Digest 4 (MD4), 342–343, 694\nMessage Digest 5 (MD5), 343, 694\nmessage digests, 340–341, 695\nmessages in OOP, 234\nmeta-models, 695\nmetadata, 225, 695\nmetamodels, 238\nmethods in OOP, 233–234\nmice, 389\nMichelangelo virus, 264\nmicrocode, 391, 695\nMicrosoft Challenge Handshake Authentication \nProtocol (MS-CHAP), 124\nmiddle management, 181\nMIDI (musical instrument digital interface), 77\nmilitary attacks, 607, 695\nMiller, Victor, 339\nMIME Object Security Services (MOSS), 134, \n352, 695\nMIN function, 223\nMINs (mobile identification numbers), 138\nMIPS (million instructions per second), 372\nmirroring, remote, 552\nmitigated risks, 187, 695\nmobile identification numbers (MINs), 138\nmobile sites, 549, 695\nmodems, 389\nmodification attacks, 141\nmodule testing, 696\nmodulo operation, 302, 696\nMONDEX payment system, 355, 696\nmonitoring, 30, 44, 159, 478–479, 
484\ndefined, 696\nexam essentials for, 499–501\ninappropriate activities, 491–492\nindistinct threats and countermeasures, 492–496\npenetration testing techniques, 486–491\nreview questions, 502–507\nsummary, 497\ntools and techniques in, 485–486\nmonitors, 388–389\nMoore's Law, 339\nMorris, Robert Tappan, 266\nMOSS (MIME Object Security Services), 134, \n352, 695\nmost significant string bits, 303\nmotion detectors, 635, 696\nmount command, 494\nMoving Picture Experts Group (MPEG), 77\nMPP (massively parallel processing), 372, 694\n" }, { "page_number": 790, "text": "MS-CHAP – nondiscretionary access controls\n745\nMS-CHAP (Microsoft Challenge Handshake \nAuthentication Protocol), 124\nMTD (maximum tolerable downtime)\nin business impact assessment, 516, 520\ndefined, 694\nMTTF (mean time to failure), 459, 648, 694\nMTTR (mean time to repair), 648\nmulticast communications, 85, 696\nmultihomed firewalls, 98–99\nmultilevel security mode, 220, 246, 380, 696\nmultipartite viruses, 263, 696\nmultiple-factor authentication, 7\nmultiple sites, 550\nmultiprocessing, 372–373, 696\nmultiprogramming, 373, 696\nmultistate processing systems, 374, 696\nmultitasking, 372, 696\nmultithreading, 373, 696\nmusical instrument digital interface (MIDI), 77\nMutual Assistance Agreements (MAAs), \n550–551, 696\nMyer, Albert, 295\nN\nNAT (Network Address Translation), 125–126\ndefined, 697\nin Network layer, 75, 92–93\nNational Computer Crime Squad, 593\nNational Flood Insurance Program, 539\nNational Information Infrastructure Protection Act \nof 1996, 577\nNational Institute of Standards and Technology \n(NIST), 576\nNational Interagency Fire Center, 540\nNational Security Agency (NSA), 576\nnatural disasters, 537, 630\ndefined, 697\nearthquakes, 537–538\nfires, 540\nfloods, 537, 539\nregional events, 540\nstorms, 539, 540\nnatural risks, 517\nNDAs (nondisclosure agreements), 178, 583, 697\nneed-to-know access, 30–31\nneed-to-know axiom, 453, 697\nnegligence, 577, 697\nNetSP 
authentication mechanism, 22, 697\nNetwork Access layer, 78\nNetwork Address Translation (NAT), 125–126\ndefined, 697\nin Network layer, 75, 92–93\nnetwork-based IDSs, 46–47, 697\nNetwork File System (NFS), 76, 96\nnetwork interface cards (NICs), 74\nNetwork layer, 75–76, 697\nNetwork layer protocols, 91–95\nNetwork News Transport Protocol (NNTP), 77\nnetworks\nattacks and countermeasures, 139–142\ncabling in\nbaseband and broadband, 80–81\ncoaxial, 80\nconductors, 82–83\ntwisted-pair, 81–82\nwireless, 83\ncryptography for, 355–357\ndevices on, 100–101\nexam essentials for, 112–113\nfirewalls on, 97–100, 99\nOSI model. See OSI (Open Systems \nInterconnection) model\nremote access security management, 102–103\nreview questions, 114–119\nsecurity mechanisms, 103–106\nservices for, 107–108\nsingle points of failure, 108–111\nsummary, 111–112\ntopologies in, 87–89, 87–89\nwireless, 83, 357–358\nneural networks, 228, 697\nNext-Generation Intrusion Detection Expert \nSystem (NIDES), 229\nNFS (Network File System), 76, 96\nNICs (network interface cards), 74\nNIST (National Institute of Standards and \nTechnology), 576\nNNTP (Network News Transport Protocol), 77\nno lockout policies, 549\nnoise, electrical, 642, 697\nnonces, 303, 697\nnondedicated lines, 128\nnondisclosure agreements (NDAs), 178, 583, 697\nnondiscretionary access controls, 24, 697\n" }, { "page_number": 791, "text": "746\nnondistributed application security – oxygen in fire triangle\nnondistributed application security, 210–212\nnoninterference models, 398, 697\nnonrepudiation\nin asymmetric key algorithms, 315\ncryptography for, 297\ndefined, 697\nin security management, 159\nin symmetric key algorithms, 312\nnonstatistical sampling in auditing, 482\nnonvolatile storage, 226, 387, 698\nnormalization, database, 218, 698\nNOT operations, 301, 698\nnotice requirements in European Union privacy \nlaw, 590\nNSA (National Security Agency), 576\nO\nOAKLEY protocol, 357\nobject evidence, 591\nobject linking and 
embedding (OLE), 215, 698\nObject Management Group (OMG), 214–215\nobject-oriented programming (OOP), 217, 233–\n234, 698\nObject Request Brokers (ORBs), 214–215, 215\nobjects\nin access, 2\ndefined, 698\nin OOP, 233\nin secure systems, 420–421\nOccupant Emergency Plans (OEPs), 640\nOCSP (Online Certificate Status Protocol), 350\nODBC (Open Database Connectivity), 222, 223\nOEPs (Occupant Emergency Plans), 640\nOFB (Output Feedback) mode, 318, 699\noffline key distribution, 322\noffsite storage, 554–557\nOLE (object linking and embedding), 215, 698\nOMG (Object Management Group), 214–215\nOne-Click Shopping patent, 582\n100Base-T cable, 80–81, 660\n1000Base-T cable, 81, 660\none-time pads, 308–309, 698\none-time passwords, 10, 19, 698\none-upped constructed passwords, 12, 698\none-way encryption, 12, 698\none-way functions, 302–303, 698\nOnline Certificate Status Protocol (OCSP), 350\nonward transfer requirements in European Union \nprivacy law, 590\nOOP (object-oriented programming), 217, \n233–234, 698\nOpen Database Connectivity (ODBC), 222, 223\nOpen Shortest Path First (OSPF) protocol, 75\nopen systems, 421\nOpen Systems Interconnection model. See OSI \n(Open Systems Interconnection) model\noperating modes for processors, 380–382\noperational assurance, 452\noperational plans, 182, 698\noperations controls, 462–464\noperations security. 
See administrative \nmanagement\noperations security triples, 698\noptimizing phase in Capability Maturity Model, 240\nOR operations, 300–301, 699\nOrange Book, 425–427\nORBs (Object Request Brokers), 214–215, 215\norganization analysis in business continuity \nplanning, 511–512\norganizational owners, 179\nOSI (Open Systems Interconnection) model, 70\nApplication layer, 77–78\nData Link layer, 74–75\ndefined, 697\nencapsulation in, 72–73, 72–73\nfunctionality, 71, 71\nhistory of, 70–71\nNetwork layer, 75–76\nPhysical layer, 74\nPresentation layer, 77\nSession layer, 76\nTransport layer, 76\nOSPF (Open Shortest Path First) protocol, 75\noutput devices, 388–389\nOutput Feedback (OFB) mode, 318, 699\novert channels, 699\noverwriting media, 460\nowners\nin access control, 24, 31\nof data, 180, 674\ndefined, 699\norganizational, 179\noxygen in fire triangle, 643, 644\n" }, { "page_number": 792, "text": "packages – physically bounded processes\n747\nP\npackages, 699\npacket-filtering firewalls, 97–98\npacket switching, 126–127\npackets, 699\npadded cell systems, 49, 699\npalm geography, 699\npalm scans, 14\npalm topography, 699\nPAP (Password Authentication Protocol), 106, \n124, 700\nPaper Reduction Act of 1995, 577\nparallel layering, 160\nparallel tests, 561, 700\nparameter checking, 436–437\nparol evidence rule, 592, 700\npartial knowledge teams, 488\npartitioning databases, 221, 675\npartitions, 631\npass phrases, 11, 700\npassive audio motion detectors, 635\npassive proximity readers, 637\npasswd file, 268–271\nPassword Authentication Protocol (PAP), 106, \n124, 700\npassword tokens, 19\npasswords, 10\nin access control, 6\nattacks on, 266, 268\nbrute force, 51–52\ncountermeasures, 270–271\ndictionary attacks, 269–270\npassword guessing, 269\nsocial engineering, 270\ndefined, 700\nin Linux, 496\npolicies for, 52\ndefined, 700\nwith new employees, 29\nrestrictions on, 11, 700\nsecuring, 12–13\nselecting, 10–11\nin Unix systems, 494\nPAT (Port Address Translation), 
93, 702\nPatent and Trademark Office, 581\npatents, 582, 700\nPatriot Act, 588, 721\npattern-matching detection, 47–48\nPBX (private branch exchange), 135, 703\nPDUs (protocol data units), 73, 73\nPEM (Private Enhanced Mail) encryption, 105, \n134, 351–352, 355, 703\npenetration, 187\npenetration testing, 49–50, 486–487\ndefined, 700\ndumpster diving, 490\nethical hacking, 488\nplanning, 487\nproblem management, 491\nradiation monitoring, 490\nsniffing and eavesdropping, 489\nsocial engineering, 491\nteams for, 488\nwar dialing, 488–489\npeople in business continuity planning, 520–521\nperformance, cache RAM for, 383–384\nperiod analysis, 308\npermanent virtual circuits (PVCs), 108, 127, 700\npermissions in access control, 30–32, 33\npersonal identification numbers (PINs), 5–6, 700\npersonnel\ncontrols on, 464–465\nmanaging, 700\nsafety of, 640\npersonnel notification in disaster recovery \nplanning, 553–554\nPERT (Program Evaluation Review Technique), \n242, 703–704\nPGP (Pretty Good Privacy), 105, 134, 319, 351, 702\nphone phreaking, 137–138, 608, 700\nphotoelectric motion detectors, 635\nphreakers, 137–138, 608\nphysical access, 5, 52\nphysical intrusion detection systems, 638\nPhysical layer, 74, 701\nphysical security, 139\nenvironment and life safety in, 640–647\nequipment failure in, 647–648\nexam essentials for, 649–651\nfacility requirements in, 628–631\nphysical controls in, 5, 629, 631–636, 633, 701\nreview questions, 652–657\nsummary, 648–649\ntechnical controls in, 4, 629, 636–640, 717\nthreats to, 628\nphysically bounded processes, 422\n" }, { "page_number": 793, "text": "748\npiggybacking – processes phase in business continuity planning\npiggybacking, 638, 701\nping function, 273, 701\nping of death attacks, 55, 276–277, 701\nPINs (personal identification numbers), 5–6, 700\nPKI (public key infrastructure), 346\ncertificates in, 346–347\ncertificate authorities for, 347–348\ngeneration and destruction of, 348–350\ndefined, 704\nkey management in, 
350\nplain old telephone service (POTS), 135, 701\nplaintext messages, 297, 701\nplanning goals, 523\nplatforms for viruses, 261–262\nplayback attacks, 57\nplumbing leaks, 643\nPoint-to-Point Protocol (PPP), 74, 103, 130–131, 701\nPoint-to-Point Tunneling Protocol (PPTP), 90, \n103–104, 124, 701\npolicies\nand architecture, 393–394\nemployment, 182–185\npassword, 52\npolicy protection mechanisms, 394–395\npolling in CSMA/CD, 87\npolyalphabetic substitution, 307, 701\npolyinstantiation\nfor databases, 221\ndefined, 701\npolymorphic viruses, 263, 701\npolymorphism\ndefined, 701\nin OOP, 234\nPOP3 (Post Office Protocol, version 3), 77, 95, \n132, 702\nPorras, Philip, 229\nPort Address Translation (PAT), 93, 702\nport scans, 279, 702\nports\nApplication layer, 95\ndefined, 701\nPhysical layer, 74\nin TCP, 90\nPost Office Protocol, version 3 (POP3), 77, 95, \n132, 702\npostmortem reviews, 702\npostwhitening technique, 321\nPOTS (plain old telephone service), 135, 701\npower\noutages, 542–543\nproblems with, 640–641\npower-on self-test (POST), 382\nPPP (Point-to-Point Protocol), 74, 103, 130–131, 701\nPPTP (Point-to-Point Tunneling Protocol), 90, \n103–104, 124, 701\npreaction systems, 646, 702\nPresentation layer, 77, 702\nPretty Good Privacy (PGP), 105, 134, 319, 351, 702\npreventative control, 3, 461, 702\nprewhitening technique, 321\nPRI (Primary Rate Interface) ISDN, 129, 702\nprimary keys for databases, 218\nprimary memory, 225, 702\nprimary storage, 225, 387, 702\nprinciple of least privilege, 30, 394, 453, 702\nprinters, 389\npriorities\nin business continuity planning, 519\nin business impact assessment, 516\nin protection rings, 375–376\nin recovery strategy, 545–546\nprivacy, 157, 457, 586\ndefined, 702\nEuropean Union privacy law, 588–590\nU.S. 
privacy laws, 586–588\nin workplace, 589\nPrivacy Act of 1974, 586, 703\nprivate branch exchange (PBX), 135, 703\nPrivate classification, 164, 703\nPrivate Enhanced Mail (PEM) encryption, 105, \n134, 351–352, 355, 703\nprivate IP addresses, 125\nprivate keys, 337, 337, 703\nprivileged entity controls, 463\nprivileged mode, 245, 381, 703\nprivileged operations functions, 454, 703\nprivileged programs, 438\nprivileges in protection rings, 375–376\nproblem management, 491\nproblem states, 376–377, 703\nprocedures, 184–185, 703\nprocess confinement, 422\nprocess isolation, 244, 393, 703\nprocess states, 377–378, 378\nprocesses phase in business continuity planning, \n520–521\n" }, { "page_number": 794, "text": "processors – records\n749\nprocessors, 371–372\ndefined, 703\nexecution types, 372–373\noperating modes for, 380–382\nprocessing types, 374\nprotection mechanisms, 374–379, 376, 378\nsecurity modes for, 378–381\nProgram Evaluation Review Technique (PERT), \n242, 703–704\nprogrammable read-only memory (PROM), \n382–383, 704\nprogramming\nlanguages for, 232\nsecurity flaws in, 439\nproprietary alarm systems, 638\nproprietary data, 164, 704\nprotection mechanisms, 374–375\nin computer design, 391–396\noperating modes, 380–382\nprocess states, 377–378, 378\nrings, 375–376, 376\nin security management, 159–161\nsecurity modes, 378–381\nprotection of personal information, 457\nprotection profiles, 704\nprotection rings, 244–246, 245\nprotection specifications development, 235–236\nprotocol data units (PDUs), 73, 73\nprotocol security mechanisms, 103–106\nprotocol services, 107–108\nprotocols, 70, 704\nprovisions in business continuity planning, 521\nproxies, 102, 704\nproximity readers, 637, 704\nproxy firewalls, 98\nprudent man rule, 577, 704\npseudo-flaws, 281–282, 704\nPublic classification, 165, 704\npublic IP addresses, 124, 704\npublic key infrastructure (PKI), 346\ncertificates in, 346–347\ncertificate authorities for, 347–348\ngeneration and destruction of, 
348–350\ndefined, 704\nkey management in, 350\npublic keys, 302, 313\nin asymmetric cryptography, 337, 337\ndefined, 704\ndistribution of, 322\npurging media, 460–461, 704\nPVCs (permanent virtual circuits), 108, 127, 700\nQ\nQICs (Quarter Inch Cartridges) for backups, 556\nqualitative decision making, 515, 705\nqualitative risk analysis, 193–194, 705\nquality assurance checks, 705\nquantitative decision making, 515, 705\nquantitative risk analysis, 190–193, 705\nQuarter Inch Cartridges (QICs) for backups, 556\nR\nracial harassment, 492\nradiation monitoring, 388–389, 490, 639–640, 705\nradio frequency interference (RFI), 642, 705\nradio frequency (RF) radiation, 490, 639–640\nRADIUS (Remote Authentication Dial-In User \nService), 27–28, 106, 707\nRAID (Redundant Array of Independent Disks), \n110–111\nrainbow series, 424–428\nRAM (random access memory), 383–384, 705\nrandom access storage, 226, 387–388, 705\nrandom number generators, 303\nRARP (Reverse Address Resolution Protocol), \n74–75, 92, 707\nRAs (registration authorities), 348, 706\nRBAC (role-based access controls), 23, 25–26, 708\nRC5 (Rivest Cipher 5) algorithm, 320\nRDBMSs (relational database management \nsystems), 216\nread-only memory (ROM), 382–383, 705\nready state, 377, 705\nreal evidence, 591, 705\nreal memory, 225, 383, 705\nrealized risk, 190–191, 706\nreconnaissance attacks, 278–280\nrecord retention\nin administrative management, 458\nin auditing, 483\ndefined, 706\nrecord sequence checking, 131, 706\nrecords, 217, 706\n" }, { "page_number": 795, "text": "750\nrecovery controls – RF radiation\nrecovery controls, 4, 461, 706\nrecovery strategy, 545\nalternative processing sites in, 547–550\nbusiness unit priorities in, 545–546\ncrisis management in, 546\ndatabase recovery, 551–552\nemergency communications in, 546\nMutual Assistance Agreements in, 550–551\nrecovery vs. 
restoration, 558–559\nworkgroup recovery in, 546–547\nrecovery time objective (RTO), 706\nRed Book, 427\nred boxes, 138\nreducing risk, 195, 706\nredundancy\nfor failover servers, 543\nknowledge, 177\nRedundant Array of Independent Disks (RAID), \n110–111\nredundant servers, 109\nreference monitors, 245\ndefined, 706\nin TCB, 417–418\nreference profiles, 706\nreferential integrity, 218, 706\nrefreshing RAM, 384\nregenerated keys\nasymmetric, 315\nsymmetric, 313\nregister addressing, 385, 706\nregistered trademarks, 581–582\nregisters, 384, 706\nregistration authorities (RAs), 348, 706\nregistration with biometric devices, 16–17\nregulatory policies, 183, 706\nregulatory requirements, 514–515\nreject risk, 195, 706\nrelational database management systems \n(RDBMSs), 216\nrelational databases, 217–219, 706\nrelationships, 217, 266, 706\nrelease control, 243\nrelevant evidence, 591, 707\nremote access, 102–103\nRemote Authentication Dial-In User Service \n(RADIUS), 27–28, 106, 707\nremote backup locations, 551–552\nremote control technique, 107\nremote journaling, 552, 707\nremote mirroring, 552, 707\nremote node operation, 107\nRemote Procedure Call (RPC), 76\nrepeatable phase in Capability Maturity Model, 239\nrepeaters, 83, 100\ndefined, 707\nin Physical layer, 74\nreplay attacks, 57, 141, 360, 707\nreporting\nin auditing, 481–482\nincidents, 615–616\nrequest control, 242\nresidual risk, 195, 707\nresources in business continuity planning\nprioritizing, 519\nrequirements, 513–514\nresponse teams for incidents, 612\nrestoration vs. 
recovery, 558–559\nrestricted interface model, 403, 707\nretention in incidents, 615\nretina scans, 14, 707\nReverse Address Resolution Protocol (RARP), \n74–75, 92, 707\nreverse engineering, 707\nreverse hash matching, 360, 707\nreview questions\naccess control, 36–41\nadministrative management, 470–475\napplied cryptography, 363–368\nattacks, 62–67, 284–290\nauditing, 502–507\nBusiness Continuity Planning (BCP), 528–533\ncommunications security, 146–151\ncomputer crime, 621–626\ncomputer design, 408–413\ncryptography, 328–333\nDisaster Recovery Planning (DRP), 564–569\nemployment policies and practices, 202–207\nlaws, 598–603\nmonitoring, 502–507\nnetworks, 114–119\nphysical security, 652–657\nsecurity management, 168–173\nsecurity models, 443–448\nsystem development controls, 250–255\nrevocation for certificates, 349–350, 707\nRF (radio frequency) radiation, 490, 639–640\n" }, { "page_number": 796, "text": "RFC 1918 – Secret classification\n751\nRFC 1918, 707\nRFI (radio frequency interference), 642, 705\nrights in access control, 30–32, 33\nRijndael cipher, 320–321, 708\nring topology, 87, 88\nrings, protection, 375–376, 376\nRIP (Routing Information Protocol), 75\nrisk\nin business continuity planning\nacceptance and mitigation, 525\nassessment, 524\nidentification, 516–517\ndefined, 708\nrisk analysis, 185, 708\nrisk management, 185\ndefined, 708\nhandling risk, 195–196\nmethodologies, 188–190\nqualitative analysis, 193–194\nquantitative analysis, 190–193\nterminology, 186–187, 187\nrisk mitigation, 195\nrisk tolerance, 195, 708\nRivest, Ronald, 337, 342\nRivest, Shamir, and Adleman (RSA) encryption, \n337–338, 708\nRivest Cipher 5 (RC5) algorithm, 320\nRogier, Nathalie, 342\nrole-based access controls (RBAC), 23, 25–26, 708\nroles, security, 179–180\nROLLBACK command, 219\nROM (read-only memory), 382–383, 705\nroot accounts, 494\nroot level, 708\nrootkits, 278, 708\nRosenberger, Rob, 264\nROT3 (Rotate 3) cipher, 294, 307\nrouters, 101\ndefined, 708\nin 
Network layer, 75\nRouting Information Protocol (RIP), 75\nrows in databases, 217\nRoyce, Winston, 237\nRPC (Remote Procedure Call), 76\nRSA (Rivest, Shamir, and Adleman) encryption, \n337–338, 708\nRTO (recovery time objective), 706\nrule-based access controls, 24, 708\nrunning key ciphers, 309–310, 708\nrunning state, 377, 708\nS\nS-HTTP (Secure HTTP), 353, 710\nS/MIME (Secure Multipurpose Internet Mail \nExtensions) protocol, 105, 134, 352–353, 710\nS-RPC (Secure Remote Procedure Call), 77, 104, 710\nsabotage, 493\nsafe computing, 451\nsafe harbor sites, 590\nsafeguards, 187\ncalculating, 192–193\ndefined, 708\nin distributed architecture, 395–396\nsafety\nof people, 520–521, 640\nin physical security, 640–647\nsags, 641, 709\nsalami attacks, 438, 709\nsalts for passwords, 496, 709\nsampling in auditing, 482, 709\nsandbox concept, 214, 268, 709\nsanitation of media, 460, 709\nSAs (security associations), 357, 710\nSATAN tool, 487\nscalability in symmetric key algorithms, 313\nscanning attacks, 279–280, 611, 709\nscavenging, 490, 709\nschemas, database, 219, 709\nSchneier, Bruce, 319, 321\nscreened hosts, 98–99\nscreening job candidates, 177–178\nscript kiddies, 258, 609\nscripted access, 23, 709\nscripts, logon, 693\nSDLC (Synchronous Data Link Control) protocol\ndefined, 716\npolling in, 87\nin WANs, 79, 108, 130\nsearch warrants, 594, 614, 709\nsecond-tier attacks, 140–141, 709\nsecondary evidence, 592, 709\nsecondary memory, 385–386, 709\nsecondary storage, 225, 387, 709\nSecret classification, 164, 709\n" }, { "page_number": 797, "text": "752\nsecure communication protocols – security perimeter\nsecure communication protocols, 710\nSecure Electronic Transaction (SET) protocol, 77, \n105, 354–355, 710\nSecure European System for Applications in a \nMultivendor Environment (SESAME) \nauthentication mechanism, 22, 711\nsecure facility plans, 629\nSecure Hash Algorithm (SHA), 341–342, 710\nSecure HTTP (S-HTTP), 353, 710\nSecure Multipurpose Internet Mail 
Extensions \n(S/MIME) protocol, 105, 134, 352–353, 710\nSecure Remote Procedure Call (S-RPC), 77, 104, 710\nSecure Shell (SSH), 355–356, 710\nSecure Sockets Layer (SSL) protocol, 104\ndefined, 710\nin Session layer, 76, 96\nfor Web, 353\nX.509 for, 347\nsecurity associations (SAs), 357, 710\nsecurity awareness training, 196–197\nsecurity clearances, 178\nsecurity control architecture, 244–246\nabstraction in, 246\nprocess isolation in, 244\nprotection rings in, 244–246, 245\nsecurity modes in, 246\nservice level agreements in, 247\nsecurity control types, 461\nsecurity domain (B3) systems, 426\nsecurity guards, 634\nsecurity IDs, 635, 710\nsecurity kernel, 245\ndefined, 710\nin TCB, 417–418\nsecurity labels, 23, 710\nsecurity management, 154\naccountability in, 159\nauditing in, 159\nauthentication in, 158\nauthorization in, 158\navailability in, 156–157\nchange control in, 161\nconfidentiality in, 154–155\ndata classification in, 162–165\nexam essentials for, 166–167\nidentification in, 157–158\nintegrity in, 155–156\nnonrepudiation in, 159\nplanning, 181–182\nprivacy in, 157\nprotection mechanisms in, 159–161\nreview questions, 168–173\nsummary, 165–166\nsecurity models, 397, 416\naccess control matrices, 399–400\nBell-LaPadula model, 400–402, 401, 419\nBiba model, 402, 403, 419–420\nBrewer and Nash model, 403–404\ncertification in, 416–417\nClark-Wilson model, 403, 420\nclassifying and comparing, 404–405\nclosed and open systems, 421\nconfidentiality, integrity, and availability in, 422\ncontrols in, 423\nevaluation in, 424\ncertification and accreditation, 432–434\nCommon Criteria, 429–432\nITSEC classes, 428–429\nrainbow series, 424–428\nTCSEC classes, 425–426\nexam essentials for, 441–442\nflaws and issues in, 435\ncovert channels, 435\ndesign and coding, 435–437\nelectromagnetic radiation, 439–440\nincremental attacks, 438\ninput and parameter checking, 436–437\nmaintenance hooks and privileged \nprograms, 438\nprogramming, 439\ntiming, state changes, and 
communication \ndisconnects, 439\ninformation flow model, 398\nnoninterference model, 398\nobjects and subjects in, 420–421\nreview questions, 443–448\nstate machine model, 397–398\nsummary, 440\nTake-Grant model, 398\nTCB in, 417–418\ntokens, capabilities, and labels in, 418\ntrust and assurance in, 423\nsecurity modes, 246, 378–381\nsecurity perimeter\ndefined, 710\nin TCB, 417\n" }, { "page_number": 798, "text": "security policies, 4, 182–183, 710\nsecurity professional role, 180, 711\n* (star) Security Property, 400–401, 419, 660\nsecurity requirements in European Union privacy \nlaw, 590\nsecurity roles, 179–180, 711\nsecurity through obscurity, 311\nsegmentation, hardware, 244, 393, 685\nsemantic integrity in databases, 221\nsendmail program, 132, 266\nsenior management, 179–180\nin business continuity planning, 513\ndefined, 711\nSensitive classification, 165, 711\nSensitive but unclassified classification, 164, 711\nsensitive information and media, 458–461\nsensitivity adjustments for biometric devices, \n15–16, 711\nsensors, 635\nseparation of duties and responsibilities\nin access control, 31–32, 33\ndefined, 711\nin employment practices, 177\nseparation of privilege, 394, 711\nSequenced Packet Exchange (SPX), 76, 711\nsequential storage, 226, 387–388, 711\nSerial Line Internet Protocol (SLIP), 74, 105, 711\nseries layering, 160\nserver rooms, 631\nservers\ncountermeasures on, 267\nredundant, 109\nservice bureaus, 549\nService Level Agreements (SLAs)\nin contracts, 515\ndefined, 711\nfor hardware, 648\nissues addressed by, 247\nservice ports, 90\nservice-specific remote access technique, 107\nservices, network and protocol, 107–108\nSESAME (Secure European System for \nApplications in a Multivendor Environment) \nauthentication mechanism, 22, 711\nsession hijacking, 281, 712\nSession layer, 76, 712\nSET (Secure Electronic Transaction) protocol, 77, \n105, 354–355, 710\nsetgid utility, 494\nsetuid utility, 494\nsexual 
harassment, 492\nSHA (Secure Hash Algorithm), 341–342, 710\nshadow file, 271\nShamir, Adi, 337\nshared secret encryption keys, 312\nshielded twisted-pair (STP) wire, 81, 712\nShiva Password Authentication Protocol (SPAP), 124\nshoplifting, 608\nshoulder surfing, 13, 631, 712\nshrink-wrap license agreements, 584, 712\nsign off letters, 195\nsignature-based filters, 268\nsignature detection method, 47–48, 262, 712\nsignatures, 344\nin asymmetric key algorithms, 314\nin biometric identification, 15, 712\ndefined, 676\nDSS, 345–346\nHMAC, 345\nin message digests, 341\nSimple Integrity Axiom (SI Axiom), 402, 419, 712\nSimple Key Management for Internet Protocols \n(SKIP) tool, 75, 104, 712\nSimple Mail Transfer Protocol (SMTP)\nin Application layer, 77, 95\ndefined, 712\nin WANs, 132\nSimple Network Management Protocol (SNMP)\nin Application layer, 77, 96\nfor scans, 611\nSimple Security Property (SS Property), 400, 419, 712\nsimplex session mode, 76\nsimulation tests, 561, 712\nsingle loss expectancy (SLE), 191\ndefined, 712\nin impact assessment, 518\nsingle points of failure, 108–111\nSingle Sign On (SSO) mechanism, 20\ndefined, 712\nexamples, 22–23\nKerberos authentication in, 21–22\nsingle state processing systems, 374, 713\nsingle-use passwords, 10, 713\nsites\nalternative, 521, 547–550\nselection, 629\n" }, { "page_number": 799, "text": "SKIP (Simple Key Management for Internet \nProtocols) tool, 75, 104, 712\nSkipjack algorithm, 320, 713\nSLAs (Service Level Agreements)\nin contracts, 515\ndefined, 711\nfor hardware, 648\nissues addressed by, 247\nSLE (single loss expectancy), 191\ndefined, 712\nin impact assessment, 518\nSLIP (Serial Line Internet Protocol), 74, 105, 711\nsmart cards, 637, 713\nSMDS (Switched Multimegabit Data Services), \n108, 130, 716\nsmoke actuated systems, 645\nsmoke damage, 647\nsmoke stage in fires, 643, 644\nSMP (symmetric multiprocessing), 372, 716\nSMTP (Simple Mail Transfer Protocol)\nin 
Application layer, 77, 95\ndefined, 712\nin WANs, 132\nSmurf attacks, 54, 55, 273–274, 274, 713\nsniffer attacks, 57, 713\nsniffing, 489, 713\nSNMP (Simple Network Management Protocol)\nin Application layer, 77, 96\nfor scans, 611\nsnooping attacks, 57\nsocial engineering, 12, 491\ndefined, 713\nin password attacks, 270\nthrough voice communications, 136–137\nsockets, 713\nsoftware\nconfiscating, 614–615\ncopyrights for, 579\ndeveloping, 229\nassurance procedures, 229–230, 231\nobject-oriented programming, 233–234\nprogramming languages in, 232\nsystem failure avoidance, 230–231, 231\nescrow arrangements for, 557–558\nfailures in, 543\ntesting, 243–244\nsoftware capability maturity model, 239–240\nsoftware IP encryption (SWIPE) protocol, 104, 713\nSPA Anti-Piracy group, 584\nspam, 713\nspamming attacks, 57–58, 134, 713\nSPAP (Shiva Password Authentication Protocol), 124\nspikes, 641, 713\nspiral model, 238–239, 239\nsplit knowledge, 304, 713\nspoofing\nwith ARP, 141\ndefined, 714\nin e-mail, 134\nIP, 280–281\nspoofing attacks, 55–56, 714\nsprinklers, 646\nSPX (Sequenced Packet Exchange), 76, 711\nSQL (Structured Query Language), 76, 218–219, 715\nSS Property (Simple Security Property), 400, 419, 712\nSSH (Secure Shell), 355–356, 710\nSSL (Secure Sockets Layer) protocol, 104\ndefined, 710\nin Session layer, 76, 96\nfor Web, 353\nX.509 for, 347\nSSO (Single Sign On) mechanism, 20\ndefined, 712\nexamples, 22–23\nKerberos authentication in, 21–22\nstandards, 184\nfor computer security, 576\ndefined, 714\nstar topology, 88, 88\nstate changes, 439\nstate laws, 573\nstate machine model, 397–398, 714\nstate packet-filtering firewalls, 714\nstateful inspection firewalls, 98, 714\nstateful NAT, 126\nstatements in business continuity planning\nof importance, 523–524\nof organizational responsibility, 524\nof priorities, 524\nof urgency and timing, 524\nstates\ndefined, 714\nprocess, 377–378, 378\nstatic electricity, 642\nstatic NAT, 93\nstatic packet-filtering firewalls, 
97–98\n" }, { "page_number": 800, "text": "static passwords, 10, 714\nstatic RAM, 384\nstatic tokens, 18–19, 714\nstatistical attacks, 359, 714\nstatistical intrusion detection, 48\nstatistical sampling in auditing, 482\nstatus accounting, configuration, 243\nstealth viruses, 263, 714\nsteganography, 354, 714\nSTOP errors, 230–231, 714\nstopped state, 378, 715\nstorage, 225\nin disaster recovery planning, 554–557\nof media, 459\nsecurity for, 388\nthreats to, 226–227\ntypes of, 225–226, 386–388\nstorms, 539, 540\nSTP (shielded twisted-pair) wire, 81, 712\nstrategic plans, 182, 715\nstrategy development in business continuity \nplanning, 519–520\nstream attacks, 55, 715\nstream ciphers, 310, 715\nstrikes, 544\nstrong passwords, 11, 715\nstructured protection (B2) systems, 426\nStructured Query Language (SQL), 76, 218–219, 715\nstructured walk-through tests, 560–561, 715\nsub-technologies, 84–85\nsubjects\nin access, 2\ndefined, 715\nin secure systems, 420–421\nsubnet masks, 94–95\nsubpoenas, 614, 715\nsubstitution ciphers, 306–308, 715\nSUM function, 223\nsupervisor states, 376, 715\nsupervisory operating mode, 245, 381, 715\nsupplies in disaster recovery planning, 558\nsurge protectors, 641\nsurges, 641, 715\nsuspicious activity, 614\nSVCs (switched virtual circuits), 108, 127, 716\nSWIPE (software IP encryption) protocol, 104, 713\nSwitched Multimegabit Data Services (SMDS), \n108, 130, 716\nswitched virtual circuits (SVCs), 108, 127, 716\nswitches, 100\nin Data Link layer, 75\ndefined, 715–716\nswitching technologies, 126–127\nsymmetric cryptography, 316\nAES, 320–322\nBlowfish, 319–320\nDES, 316–318\nIDEA, 319\nkeys in, 312–313, 312, 322–323, 716\nSkipjack, 320\nTriple DES, 318–319\nsymmetric multiprocessing (SMP), 372, 716\nSYN flood attacks, 53–55, 271–272, 272, 716\nSYN packets, 91\nsynchronous communications, 85\nSynchronous Data Link Control (SDLC) protocol\ndefined, 716\npolling in, 87\nin WANs, 79, 108, 
130\nsynchronous dynamic password tokens, 18–19, 716\nsystem calls, 376, 716\nsystem compromises, 611–612, 670\nsystem development controls, 229\nexam essentials for, 248–249\nGantt charts, 240, 241\nlife cycles in. See life cycles in system \ndevelopment\nPERT, 242\nreview questions, 250–255\nsecurity control architecture, 244–246, 245\nsoftware development, 229–234\nsoftware testing, 243–244\nsummary, 247\nwritten lab for, 249, 256\nsystem failures, 230–231, 231\nsystem-high security mode, 246, 379–380, 716\nsystem operating mode, 381\nsystem test review, 236\nT\ntable-top exercises, 560–561\ntables in databases, 217, 717\nTACACS (Terminal Access Controller Access \nControl System), 27–28, 106, 717\ntactical plans, 182, 717\nTagged Image File Format (TIFF), 77\n" }, { "page_number": 801, "text": "Take-Grant model, 398, 717\ntapes for backups, 556–557\nTarget of Evaluation (TOE), 428\ntask-based access controls, 23, 717\nTCB (trusted computing base), 417–418, 720\nTCP (Transmission Control Protocol), 76, 90, 719\nTCP/IP protocol, 89–90, 90\nmodel, 78–79, 78\nNetwork layer, 91–95\nTransport layer, 90–91\nTCP wrappers, 717\nTCSEC (Trusted Computer System Evaluation \nCriteria) classes, 184, 425–426, 452\nteams\nfor business continuity planning, 512\nfor penetration testing, 488\nteardrop attacks, 55, 274–275, 275, 717\ntechnical controls, 4, 629, 636–640, 717\ntechnical protection mechanisms, 391–393\ntelecommuting, 107\ntelephone trees, 554\nTelnet protocol, 77, 95\ntemperature, 642\nTEMPEST (Transient Electromagnetic Pulse \nEquipment Shielding Techniques) devices, 370\ncombating, 639–640\ndefined, 717\nmonitors, 388–389, 490\n10Base-2 cable, 80–81, 660\n10Base-5 cable, 80–81, 660\n10Base-T cable, 80–81, 660\nTerminal Access Controller Access Control System \n(TACACS), 27–28, 106, 717\ntermination procedure policies, 178–179\ntermination process, 465\nterrorist acts, 541–542\nterrorist attacks, 608–609, 718\ntest 
data method, 244, 718\ntestimonial evidence, 593, 718\ntesting\nin business continuity planning, 513, 526\nin disaster recovery planning, 560–561\npenetration. See penetration testing\nsoftware, 243–244\nTFN (Tribal Flood Network) toolkit, 273–274\nTFTP (Trivial File Transfer Protocol), 77, 95\nTGS (Ticket Granting Service), 21–22, 718\ntheft, 493, 544–545\nthicknet cable, 80\nthin clients, 22, 718\nthinnet cable, 80\nthreads, 373\nthreat agents, 186, 718\nthreat events, 186, 718\nthreats, 186, 492–496, 718\n3–4–5 rule, 82\n3DES (Triple DES) standard, 318–319, 720\nthroughput rate with biometric devices, 17, 718\nTicket Granting Service (TGS), 21–22, 718\ntickets, 21, 718\nTier 3 countries, 585\nTier 4 countries, 585\nTIFF (Tagged Image File Format), 77\ntime frames\nauditing, 480\nrecord retention, 483\nreporting, 482\ntime-of-check (TOC), 439, 718\ntime-of-check-to-time-of-use (TOCTTOU) \nattacks, 278, 439, 718\ntime-of-use (TOU), 439, 718\ntime slices, 377, 718\ntime stamps, 221\ntiming as security flaw, 439\nTLS (Transport Layer Security) protocol, 353\nTOE (Target of Evaluation), 428\nToken Ring, 74, 84, 718\ntokens, 6, 18–20\nin CSMA/CD, 86\ndefined, 718\nin security models, 418\nin Token Ring, 84\nTop Secret classification, 163, 718\ntopologies, 87–89, 87–89, 719\ntornadoes, 539\ntotal risk, 195, 719\nTOU (time-of-use), 439, 718\nTower of Hanoi strategy, 557\nTPs (transformation procedures), 420\ntrade secrets, 582–583, 719\ntrademarks, 581–582, 719\ntraffic analysis, 485, 495–496, 719\ntraining and education, 197\nin business continuity planning, 513, 522–523\nfor crises, 546\ndefined, 679, 719\nin disaster recovery planning, 559–560\non inappropriate activities, 492\n" }, { "page_number": 802, "text": "for password attacks, 270\non safe computing, 451\non security awareness, 196–197\ntransactions, database, 219–220\ntransferring risk, 195, 719\ntransformation procedures (TPs), 420\nTransient Electromagnetic Pulse Equipment 
Shielding \nTechniques (TEMPEST) devices, 370\ncombating, 639–640\ndefined, 717\nmonitors, 388–389, 490\ntransients, 641, 719\nTransmission Control Protocol (TCP), 76, 90, 719\ntransmission error correction, 132, 719\ntransmission logging, 132, 719\ntransmission protection, 102\ntransparency in communications, 131, 719\ntransponder proximity readers, 637\nTransport layer\ndefined, 719\nin OSI model, 76\nin TCP/IP, 90–91\nTransport Layer Security (TLS) protocol, 353\ntransport mode in IPSec, 356–357, 719\ntransposition ciphers, 306, 719\ntrap doors, 278, 719\ntraverse mode noise, 642, 719\ntree topology, 88, 88\ntrend analysis, 485, 495–496\nTribal Flood Network (TFN) toolkit, 273–274\ntriggers\nin auditing, 478\nin fire detection systems, 645\nin motion detectors, 635, 662\nTrinoo toolkit, 274\nTriple DES (3DES) standard, 318–319, 720\nTripwire package, 263\nTrivial File Transfer Protocol (TFTP), 77, 95\nTrojan horses, 211, 264–265, 720\nTropical Prediction Center, 539\ntrust in security models, 423\ntrust relationships, 266\nTrusted Computer System Evaluation Criteria \n(TCSEC) classes, 184, 425–426, 452\ntrusted computing base (TCB), 417–418, 720\ntrusted paths, 417, 720\ntrusted recovery process, 436, 455, 720\ntrusted systems, 423\ntrusts, 27, 720\ntsunamis, 537\ntunnel mode, 356–357, 720\ntunneling, 123, 720\nturnstiles, 632, 633, 720\ntwisted-pair cabling, 81–82\ntwo-factor authentication, 7, 52, 720\nTwofish algorithm, 321\nType 1 authentication factor, 720\nType 1 errors, 15–16\nType 2 authentication factor, 720\nType 2 errors, 16\nType 3 authentication factor, 720\nU\nUCITA (Uniform Computer Information \nTransactions Act), 584, 721\nUDIs (unconstrained data items), 420\nUDP (User Datagram Protocol), 76, 91, 721\nUltra effort, 295–296\nUnclassified classification, 164, 721\nunconstrained data items (UDIs), 420\nunicast communications, 85, 721\nUniform Computer Information Transactions Act \n(UCITA), 584, 721\nUnix operating system\nbasics, 494\nviruses in, 
261\nunshielded twisted-pair (UTP) wire, 81–82, 721\nupper management, 180\nUPSs (uninterruptible power supplies), 542–543, \n641, 721\nUSA Patriot Act of 2001, 588, 721\nuser awareness training, 451\nUser Datagram Protocol (UDP), 76, 91, 721\nuser (end user) role, 180\nuser operating mode, 245, 381, 721\nusers\nin access control, 31\naccounts. See accounts\ndefined, 721\nenrollment of, 11, 29\nremote user assistance for, 102\nutilities\nin disaster recovery planning, 558\nfailures in, 543–544\nUTP (unshielded twisted-pair) wire, 81–82, 721\n" }, { "page_number": 803, "text": "V\nvacations, mandatory, 178, 694\nvalue of assets, 188–190, 516\nVan Eck radiation, 389\nvandalism, 544–545\nVENONA project, 309\nverification for certificates, 348–349\nverified protection (A1) systems, 426\nVernam ciphers, 309, 721\nviews\nfor databases, 221\ndefined, 721\nVigenere ciphers, 307–308, 722\nviolation analysis, 482, 722\nvirtual circuits, 108, 127\nvirtual machines, 393, 722\nvirtual memory, 225, 385–386, 722\nvirtual private networks (VPNs), 122\ndefined, 722\nimplementing, 124–125\noperation of, 124\nprotocols for, 103–104\nfor TCP/IP, 90\ntunneling in, 123\nfor wireless connectivity, 83\nvirtual storage, 225\nvirus decryption routines, 264\nviruses, 211, 259\nantivirus management, 451\nantivirus mechanisms, 262–263\ndefined, 722\ndefinition files for, 262, 451\ne-mail, 134\nhoaxes, 264\nplatforms for, 261–262\npropagation techniques, 259–261\ntechnologies for, 263–264\nvisibility for physical security, 630\nvisitors, 631\nvital records program, 525\nvoice communications, 136–138\nVoice over IP (VoIP), 135, 722\nvoice patterns, 14–15, 722\nvolatile storage, 226, 387, 722\nvoluntary surrender, 722\nVPNs. 
See virtual private networks (VPNs)\nvulnerabilities, 186\ndefined, 722\nin distributed architecture, 395\nvulnerability analysis, 487\nvulnerability scanners, 49, 722\nvulnerability scans, 279–280, 722\nW\nwaiting state, 377, 723\nwalls, 631\nWANs (wide area networks)\ndefined, 723\nvs. LANs, 79\ntechnologies for, 128–131\nWAP (Wireless Application Protocol), 358, 723\nwar dialing, 488–489, 723\nwarm sites, 548–549, 723\nwarm-swappable RAID, 111\nwarning banners, 485, 723\nwaste of resources, 492\nwater leakage, 643\nwater suppression systems, 646\nwaterfall model, 237–238, 238\nwave pattern motion detectors, 635\nweather forecasts, 539\nWeb, cryptography for, 353–354\nweb of trust concept, 351\nwell-known ports, 90, 723\nWEP (Wired Equivalency Protocol), 358, 723\nwet pipe systems, 646, 723\nwhite box testing, 244\nwhite boxes, 138\nwhite noise for TEMPEST, 639–640\nwide area networks (WANs)\ndefined, 723\nvs. LANs, 79\ntechnologies for, 128–131\nwildfires, 540\nWinNuke attacks, 55, 723\nWIPO (World Intellectual Property Organization) \ntreaties, 580\nWired Equivalency Protocol (WEP), 358, 723\nWireless Application Protocol (WAP), 358, 723\nwireless networking, 83, 357–358\nWireless Transport Security Protocol (WTLS), 358\nwork areas, 631–632\n" }, { "page_number": 804, "text": "work function, 304, 723\nworkgroup recovery, 546–547\nworkplace privacy, 589\nworks for hire, 580\nworkstation and location changes, 453\nWorld Intellectual Property Organization (WIPO) \ntreaties, 580\nWORM (Write Once, Read Many) storage, 556\nworms, 211, 265–266\ndefined, 723\nin e-mail, 134\nwrappers\nin TCP, 90\nin tunneling, 123\nWrite Once, Read Many (WORM) storage, 556\nwritten labs\nattacks, 284, 291\ncryptography, 327, 334\nDisaster Recovery Planning, 563, 570\nlaws, 597, 604\nsystem development controls, 249, 256\nWTLS (Wireless Transport Security Protocol), 358\nX\nX.25 protocol, 108\ndefined, 724\npacket switching in, 79\nWAN connections, 
129\nX.509 standards, 346–347\nX Window GUI, 96\nXbox Trojan horses, 265\nXOR operations, 301–302, 724\nXTACACS (Extended Terminal Access Controller \nAccess Control System), 106\nZ\nZephyr charts, 17–18, 17\nzero knowledge proof, 304, 724\nzero knowledge teams, 488\nZimmerman, Phil, 319, 351\n" } ] }